jonathan
|
|
fef01b9d-1d9a-4022-a02c-24c2343faa8c
|
2025/06/13 22:39:06
|
2025/06/13 22:39:06
|
2025/06/13 22:39:07
|
34 ms
|
342 ms
|
SHOW TABLES IN `onetableschema`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2757, tableName#2758, isTemporary#2759]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2757, tableName#2758, isTemporary#2759]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#2757, tableName#2758, isTemporary#2759], ShowTables [namespace#2757, tableName#2758, isTemporary#2759], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#2757, tableName#2758, isTemporary#2759]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#2757, tableName#2758, isTemporary#2759]
+- ShowTables [namespace#2757, tableName#2758, isTemporary#2759], V2SessionCatalog(spark_catalog), [onetableschema]
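The bracketed hex arrays inside the CommandResult nodes above (e.g. `656c626174656e6f,616d65686373`) are Spark's internal UnsafeRow dumps: after the header words, each 8-byte word carries the ASCII bytes of a string field in reversed (little-endian) order. A minimal decoding sketch — the function name is illustrative, not a Spark API — under that assumption:

```python
def decode_unsaferow_words(words):
    """Decode a list of hex-encoded 8-byte UnsafeRow words into a string.

    Each word is hex text whose bytes appear in reversed (little-endian)
    order, so we reverse the byte sequence before decoding as ASCII.
    """
    out = []
    for w in words:
        if len(w) % 2:           # pad odd-length hex to whole bytes
            w = "0" + w
        out.append(bytes.fromhex(w)[::-1].decode("ascii"))
    return "".join(out)

# The first row above decodes to its namespace and table name:
# ["656c626174656e6f", "616d65686373"]       -> "onetableschema"
# ["73657079746c6c61", "74736574"]           -> "alltypestest"
```

Applied to the CommandResult row in this record, the payload words yield the namespace `onetableschema` and the table name `alltypestest`, matching the `SHOW TABLES` statement.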
|
jonathan
|
|
fea01ec8-cffd-4fd3-8768-91a727bf47fe
|
2025/06/13 23:21:10
|
2025/06/13 23:21:11
|
2025/06/13 23:21:11
|
53 ms
|
397 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3380, tableName#3381, isTemporary#3382]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3380, tableName#3381, isTemporary#3382]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#3380, tableName#3381, isTemporary#3382], ShowTables [namespace#3380, tableName#3381, isTemporary#3382], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#3380, tableName#3381, isTemporary#3382]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#3380, tableName#3381, isTemporary#3382]
+- ShowTables [namespace#3380, tableName#3381, isTemporary#3382], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
jonathan
|
|
fe2eeb0c-fe37-4f9f-849d-846445773af0
|
2025/06/13 22:54:11
|
2025/06/13 22:54:11
|
2025/06/13 22:54:12
|
26 ms
|
345 ms
|
SHOW TABLES IN `onetableschema`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3061, tableName#3062, isTemporary#3063]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3061, tableName#3062, isTemporary#3063]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#3061, tableName#3062, isTemporary#3063], ShowTables [namespace#3061, tableName#3062, isTemporary#3063], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#3061, tableName#3062, isTemporary#3063]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#3061, tableName#3062, isTemporary#3063]
+- ShowTables [namespace#3061, tableName#3062, isTemporary#3063], V2SessionCatalog(spark_catalog), [onetableschema]
|
jonathon
|
|
fddd6370-e99c-4ab9-81e3-a34c12f9ed88
|
2025/06/14 01:23:33
|
2025/06/14 01:23:33
|
2025/06/14 01:23:33
|
38 ms
|
271 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4415, value#4416, meaning#4417, Since version#4418], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4415, value#4416, meaning#4417, Since version#4418]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
fd73230f-0d28-4938-91c3-3beae7d8300b
|
2025/06/13 06:57:07
|
2025/06/13 06:57:07
|
2025/06/13 06:57:07
|
27 ms
|
122 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
fc40a089-9a06-46e1-a564-f544f85f247b
|
2025/06/14 01:47:47
|
2025/06/14 01:47:47
|
2025/06/14 01:47:47
|
26 ms
|
119 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
[43]
|
fb16b2b0-de8d-49bf-b0a7-b6bb85b04e2b
|
2025/06/14 01:23:35
|
2025/06/14 01:23:35
|
2025/06/14 01:23:35
|
235 ms
|
390 ms
|
SELECT C_0 AS C_14, C_1 AS C_13, C_43 AS C_12, C_4331 AS C_18, C_4332 AS C_20, C_4333 AS C_16, C_6 AS C_19, C_8 AS C_17, C_5 AS C_15, C_7 AS C_22, C_9 AS C_25, C_11 AS C_23, C_10 AS C_21, C_3 AS C_24 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_0, C_64656661756c745f616972706f727473.`type` AS C_1, C_64656661756c745f616972706f727473.`name` AS C_43, C_64656661756c745f616972706f727473.`lat` AS C_2, C_64656661756c745f616972706f727473.`lon` AS C_4, C_64656661756c745f616972706f727473.`elev` AS C_3, C_64656661756c745f616972706f727473.`continent` AS C_6, C_64656661756c745f616972706f727473.`country` AS C_8, C_64656661756c745f616972706f727473.`region` AS C_5, C_64656661756c745f616972706f727473.`city` AS C_7, C_64656661756c745f616972706f727473.`iata` AS C_9, C_64656661756c745f616972706f727473.`code` AS C_11, C_64656661756c745f616972706f727473.`gps` AS C_10, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_24 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_24 DESC NULLS LAST], true
+- 'Project ['C_0 AS C_14#4502, 'C_1 AS C_13#4503, 'C_43 AS C_12#4504, 'C_4331 AS C_18#4505, 'C_4332 AS C_20#4506, 'C_4333 AS C_16#4507, 'C_6 AS C_19#4508, 'C_8 AS C_17#4509, 'C_5 AS C_15#4510, 'C_7 AS C_22#4511, 'C_9 AS C_25#4512, 'C_11 AS C_23#4513, 'C_10 AS C_21#4514, 'C_3 AS C_24#4515]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_0#4486, 'C_64656661756c745f616972706f727473.type AS C_1#4487, 'C_64656661756c745f616972706f727473.name AS C_43#4488, 'C_64656661756c745f616972706f727473.lat AS C_2#4489, 'C_64656661756c745f616972706f727473.lon AS C_4#4490, 'C_64656661756c745f616972706f727473.elev AS C_3#4491, 'C_64656661756c745f616972706f727473.continent AS C_6#4492, 'C_64656661756c745f616972706f727473.country AS C_8#4493, 'C_64656661756c745f616972706f727473.region AS C_5#4494, 'C_64656661756c745f616972706f727473.city AS C_7#4495, 'C_64656661756c745f616972706f727473.iata AS C_9#4496, 'C_64656661756c745f616972706f727473.code AS C_11#4497, 'C_64656661756c745f616972706f727473.gps AS C_10#4498, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#4499, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#4500, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#4501]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_14: string, C_13: string, C_12: string, C_18: double, C_20: double, C_16: double, C_19: string, C_17: string, C_15: string, C_22: string, C_25: string, C_23: string, C_21: string, C_24: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_24#4515 DESC NULLS LAST], true
+- Project [C_0#4486 AS C_14#4502, C_1#4487 AS C_13#4503, C_43#4488 AS C_12#4504, C_4331#4499 AS C_18#4505, C_4332#4500 AS C_20#4506, C_4333#4501 AS C_16#4507, C_6#4492 AS C_19#4508, C_8#4493 AS C_17#4509, C_5#4494 AS C_15#4510, C_7#4495 AS C_22#4511, C_9#4496 AS C_25#4512, C_11#4497 AS C_23#4513, C_10#4498 AS C_21#4514, C_3#4491 AS C_24#4515]
+- SubqueryAlias C_4954424c
+- Project [id#4516 AS C_0#4486, type#4517 AS C_1#4487, name#4518 AS C_43#4488, lat#4519 AS C_2#4489, lon#4520 AS C_4#4490, elev#4521 AS C_3#4491, continent#4522 AS C_6#4492, country#4523 AS C_8#4493, region#4524 AS C_5#4494, city#4525 AS C_7#4495, iata#4526 AS C_9#4496, code#4527 AS C_11#4497, gps#4528 AS C_10#4498, (round((lat#4519 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#4499, (round((lon#4520 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#4500, (round((elev#4521 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#4501]
+- Filter (((lon#4520 <= -104.05) AND (lon#4520 >= -111.05)) AND ((lat#4519 >= 41.0) AND (lat#4519 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#4516,type#4517,name#4518,lat#4519,lon#4520,elev#4521,continent#4522,country#4523,region#4524,city#4525,iata#4526,code#4527,gps#4528] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_24#4515 DESC NULLS LAST], true
+- Project [id#4516 AS C_14#4502, type#4517 AS C_13#4503, name#4518 AS C_12#4504, (round((lat#4519 * 1000.0), 0) / 1000.0) AS C_18#4505, (round((lon#4520 * 1000.0), 0) / 1000.0) AS C_20#4506, (round((elev#4521 * 1000.0), 0) / 1000.0) AS C_16#4507, continent#4522 AS C_19#4508, country#4523 AS C_17#4509, region#4524 AS C_15#4510, city#4525 AS C_22#4511, iata#4526 AS C_25#4512, code#4527 AS C_23#4513, gps#4528 AS C_21#4514, elev#4521 AS C_24#4515]
+- Filter ((isnotnull(lon#4520) AND isnotnull(lat#4519)) AND (((lon#4520 <= -104.05) AND (lon#4520 >= -111.05)) AND ((lat#4519 >= 41.0) AND (lat#4519 <= 45.0))))
+- Relation spark_catalog.default.airports[id#4516,type#4517,name#4518,lat#4519,lon#4520,elev#4521,continent#4522,country#4523,region#4524,city#4525,iata#4526,code#4527,gps#4528] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_24#4515 DESC NULLS LAST], output=[C_14#4502,C_13#4503,C_12#4504,C_18#4505,C_20#4506,C_16#4507,C_19#4508,C_17#4509,C_15#4510,C_22#4511,C_25#4512,C_23#4513,C_21#4514,C_24#4515])
+- *(1) Project [id#4516 AS C_14#4502, type#4517 AS C_13#4503, name#4518 AS C_12#4504, (round((lat#4519 * 1000.0), 0) / 1000.0) AS C_18#4505, (round((lon#4520 * 1000.0), 0) / 1000.0) AS C_20#4506, (round((elev#4521 * 1000.0), 0) / 1000.0) AS C_16#4507, continent#4522 AS C_19#4508, country#4523 AS C_17#4509, region#4524 AS C_15#4510, city#4525 AS C_22#4511, iata#4526 AS C_25#4512, code#4527 AS C_23#4513, gps#4528 AS C_21#4514, elev#4521 AS C_24#4515]
+- *(1) Filter (((((isnotnull(lon#4520) AND isnotnull(lat#4519)) AND (lon#4520 <= -104.05)) AND (lon#4520 >= -111.05)) AND (lat#4519 >= 41.0)) AND (lat#4519 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#4516,type#4517,name#4518,lat#4519,lon#4520,elev#4521,continent#4522,country#4523,region#4524,city#4525,iata#4526,code#4527,gps#4528] Batched: true, DataFilters: [isnotnull(lon#4520), isnotnull(lat#4519), (lon#4520 <= -104.05), (lon#4520 >= -111.05), (lat#451..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
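The `C_<hex>` identifiers in the generated SQL above appear to be straight hex encodings of the underlying names (the tool that issued the query obfuscates identifiers this way). A small sketch to recover them — `decode_alias` is a hypothetical helper, not part of any client library:

```python
def decode_alias(alias: str) -> str:
    """Decode a C_<hex> identifier from the generated SQL back to ASCII."""
    return bytes.fromhex(alias.removeprefix("C_")).decode("ascii")

# C_64656661756c745f616972706f727473 -> "default_airports"
# C_4954424c                         -> "ITBL"
```

Decoding shows the subquery alias is simply `default_airports` and the outer alias `ITBL`, consistent with the `FROM `default`.`airports`` relation in the plan.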
|
jonathon
|
[44]
|
f9847174-6e75-4840-9e85-bace463b6e36
|
2025/06/14 01:46:19
|
2025/06/14 01:46:19
|
2025/06/14 01:46:20
|
180 ms
|
277 ms
|
SELECT C_5 AS C_18, C_6 AS C_19, C_0 AS C_15, C_4331 AS C_22, C_4332 AS C_25, C_4333 AS C_20, C_4 AS C_23, C_43 AS C_14, C_11 AS C_12, C_8 AS C_17, C_7 AS C_24, C_9 AS C_21, C_10 AS C_13, C_3 AS C_16 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_5, C_64656661756c745f616972706f727473.`type` AS C_6, C_64656661756c745f616972706f727473.`name` AS C_0, C_64656661756c745f616972706f727473.`lat` AS C_1, C_64656661756c745f616972706f727473.`lon` AS C_2, C_64656661756c745f616972706f727473.`elev` AS C_3, C_64656661756c745f616972706f727473.`continent` AS C_4, C_64656661756c745f616972706f727473.`country` AS C_43, C_64656661756c745f616972706f727473.`region` AS C_11, C_64656661756c745f616972706f727473.`city` AS C_8, C_64656661756c745f616972706f727473.`iata` AS C_7, C_64656661756c745f616972706f727473.`code` AS C_9, C_64656661756c745f616972706f727473.`gps` AS C_10, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_16 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_16 DESC NULLS LAST], true
+- 'Project ['C_5 AS C_18#4646, 'C_6 AS C_19#4647, 'C_0 AS C_15#4648, 'C_4331 AS C_22#4649, 'C_4332 AS C_25#4650, 'C_4333 AS C_20#4651, 'C_4 AS C_23#4652, 'C_43 AS C_14#4653, 'C_11 AS C_12#4654, 'C_8 AS C_17#4655, 'C_7 AS C_24#4656, 'C_9 AS C_21#4657, 'C_10 AS C_13#4658, 'C_3 AS C_16#4659]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_5#4630, 'C_64656661756c745f616972706f727473.type AS C_6#4631, 'C_64656661756c745f616972706f727473.name AS C_0#4632, 'C_64656661756c745f616972706f727473.lat AS C_1#4633, 'C_64656661756c745f616972706f727473.lon AS C_2#4634, 'C_64656661756c745f616972706f727473.elev AS C_3#4635, 'C_64656661756c745f616972706f727473.continent AS C_4#4636, 'C_64656661756c745f616972706f727473.country AS C_43#4637, 'C_64656661756c745f616972706f727473.region AS C_11#4638, 'C_64656661756c745f616972706f727473.city AS C_8#4639, 'C_64656661756c745f616972706f727473.iata AS C_7#4640, 'C_64656661756c745f616972706f727473.code AS C_9#4641, 'C_64656661756c745f616972706f727473.gps AS C_10#4642, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#4643, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#4644, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#4645]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_18: string, C_19: string, C_15: string, C_22: double, C_25: double, C_20: double, C_23: string, C_14: string, C_12: string, C_17: string, C_24: string, C_21: string, C_13: string, C_16: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_16#4659 DESC NULLS LAST], true
+- Project [C_5#4630 AS C_18#4646, C_6#4631 AS C_19#4647, C_0#4632 AS C_15#4648, C_4331#4643 AS C_22#4649, C_4332#4644 AS C_25#4650, C_4333#4645 AS C_20#4651, C_4#4636 AS C_23#4652, C_43#4637 AS C_14#4653, C_11#4638 AS C_12#4654, C_8#4639 AS C_17#4655, C_7#4640 AS C_24#4656, C_9#4641 AS C_21#4657, C_10#4642 AS C_13#4658, C_3#4635 AS C_16#4659]
+- SubqueryAlias C_4954424c
+- Project [id#4660 AS C_5#4630, type#4661 AS C_6#4631, name#4662 AS C_0#4632, lat#4663 AS C_1#4633, lon#4664 AS C_2#4634, elev#4665 AS C_3#4635, continent#4666 AS C_4#4636, country#4667 AS C_43#4637, region#4668 AS C_11#4638, city#4669 AS C_8#4639, iata#4670 AS C_7#4640, code#4671 AS C_9#4641, gps#4672 AS C_10#4642, (round((lat#4663 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#4643, (round((lon#4664 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#4644, (round((elev#4665 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#4645]
+- Filter (((lon#4664 <= -104.05) AND (lon#4664 >= -111.05)) AND ((lat#4663 >= 41.0) AND (lat#4663 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#4660,type#4661,name#4662,lat#4663,lon#4664,elev#4665,continent#4666,country#4667,region#4668,city#4669,iata#4670,code#4671,gps#4672] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_16#4659 DESC NULLS LAST], true
+- Project [id#4660 AS C_18#4646, type#4661 AS C_19#4647, name#4662 AS C_15#4648, (round((lat#4663 * 1000.0), 0) / 1000.0) AS C_22#4649, (round((lon#4664 * 1000.0), 0) / 1000.0) AS C_25#4650, (round((elev#4665 * 1000.0), 0) / 1000.0) AS C_20#4651, continent#4666 AS C_23#4652, country#4667 AS C_14#4653, region#4668 AS C_12#4654, city#4669 AS C_17#4655, iata#4670 AS C_24#4656, code#4671 AS C_21#4657, gps#4672 AS C_13#4658, elev#4665 AS C_16#4659]
+- Filter ((isnotnull(lon#4664) AND isnotnull(lat#4663)) AND (((lon#4664 <= -104.05) AND (lon#4664 >= -111.05)) AND ((lat#4663 >= 41.0) AND (lat#4663 <= 45.0))))
+- Relation spark_catalog.default.airports[id#4660,type#4661,name#4662,lat#4663,lon#4664,elev#4665,continent#4666,country#4667,region#4668,city#4669,iata#4670,code#4671,gps#4672] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_16#4659 DESC NULLS LAST], output=[C_18#4646,C_19#4647,C_15#4648,C_22#4649,C_25#4650,C_20#4651,C_23#4652,C_14#4653,C_12#4654,C_17#4655,C_24#4656,C_21#4657,C_13#4658,C_16#4659])
+- *(1) Project [id#4660 AS C_18#4646, type#4661 AS C_19#4647, name#4662 AS C_15#4648, (round((lat#4663 * 1000.0), 0) / 1000.0) AS C_22#4649, (round((lon#4664 * 1000.0), 0) / 1000.0) AS C_25#4650, (round((elev#4665 * 1000.0), 0) / 1000.0) AS C_20#4651, continent#4666 AS C_23#4652, country#4667 AS C_14#4653, region#4668 AS C_12#4654, city#4669 AS C_17#4655, iata#4670 AS C_24#4656, code#4671 AS C_21#4657, gps#4672 AS C_13#4658, elev#4665 AS C_16#4659]
+- *(1) Filter (((((isnotnull(lon#4664) AND isnotnull(lat#4663)) AND (lon#4664 <= -104.05)) AND (lon#4664 >= -111.05)) AND (lat#4663 >= 41.0)) AND (lat#4663 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#4660,type#4661,name#4662,lat#4663,lon#4664,elev#4665,continent#4666,country#4667,region#4668,city#4669,iata#4670,code#4671,gps#4672] Batched: true, DataFilters: [isnotnull(lon#4664), isnotnull(lat#4663), (lon#4664 <= -104.05), (lon#4664 >= -111.05), (lat#466..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
|
jonathon
|
|
f963a6e1-d9ce-47b3-ad7f-e64435840746
|
2025/06/13 22:44:55
|
2025/06/13 22:44:55
|
2025/06/13 22:44:55
|
93 ms
|
211 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2845, data_type#2846, comment#2847]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2845, data_type#2846, comment#2847]
== Optimized Logical Plan ==
CommandResult [col_name#2845, data_type#2846, comment#2847], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2845, data_type#2846, comment#2847]
== Physical Plan ==
CommandResult [col_name#2845, data_type#2846, comment#2847]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2845, data_type#2846, comment#2847]
|
jonathan
|
|
f9358d8e-5868-443d-af86-f9f24842f4a8
|
2025/06/13 22:51:52
|
2025/06/13 22:51:52
|
2025/06/13 22:51:52
|
51 ms
|
329 ms
|
SHOW TABLES IN `default`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2961, tableName#2962, isTemporary#2963]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2961, tableName#2962, isTemporary#2963]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#2961, tableName#2962, isTemporary#2963], ShowTables [namespace#2961, tableName#2962, isTemporary#2963], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#2961, tableName#2962, isTemporary#2963]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#2961, tableName#2962, isTemporary#2963]
+- ShowTables [namespace#2961, tableName#2962, isTemporary#2963], V2SessionCatalog(spark_catalog), [default]
|
jonathan
|
|
f84c29a7-332d-4049-9e1c-1562b019fee5
|
2025/06/13 23:13:23
|
2025/06/13 23:13:23
|
2025/06/13 23:13:23
|
78 ms
|
347 ms
|
DESCRIBE TABLE `default`.`AllTypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3128, data_type#3129, comment#3130]
+- 'UnresolvedTableOrView [default, AllTypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3128, data_type#3129, comment#3130]
== Optimized Logical Plan ==
CommandResult [col_name#3128, data_type#3129, comment#3130], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3128, data_type#3129, comment#3130]
== Physical Plan ==
CommandResult [col_name#3128, data_type#3129, comment#3130]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3128, data_type#3129, comment#3130]
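The Optimized Logical Plan above shows how each declared column type of `AllTypes` surfaces in the catalog: `DATETIME` collapses to `timestamp`, `NUMBER` to `decimal(10,2)`, and `VARCHAR`/`CHAR` both report as `string`. Transcribed directly from the result rows above as a quick reference (the dict itself is just a restatement, not a Spark API):

```python
# Column name -> reported data type, copied verbatim from the
# DescribeTableCommand result rows in the plan above.
alltypes_schema = {
    "STRING": "string", "DOUBLE": "double", "INTEGER": "int",
    "BIGINT": "bigint", "FLOAT": "float", "DECIMAL": "decimal(10,2)",
    "NUMBER": "decimal(10,2)", "BOOLEAN": "boolean", "DATE": "date",
    "TIMESTAMP": "timestamp", "DATETIME": "timestamp", "BINARY": "binary",
    "ARRAY": "array<int>", "MAP": "map<string,string>",
    "STRUCT": "struct<field1:string,field2:int>",
    "VARCHAR": "string", "CHAR": "string",
}

# DATETIME is an alias of timestamp; VARCHAR and CHAR both surface as string.
assert alltypes_schema["DATETIME"] == alltypes_schema["TIMESTAMP"]
assert alltypes_schema["VARCHAR"] == alltypes_schema["CHAR"] == "string"
```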
|
jonathon
|
|
f841a93a-75d6-40a4-ab0e-37a014173cb2
|
2025/06/13 22:37:47
|
2025/06/13 22:37:47
|
2025/06/13 22:37:47
|
95 ms
|
194 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2598, data_type#2599, comment#2600]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2598, data_type#2599, comment#2600]
== Optimized Logical Plan ==
CommandResult [col_name#2598, data_type#2599, comment#2600], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2598, data_type#2599, comment#2600]
== Physical Plan ==
CommandResult [col_name#2598, data_type#2599, comment#2600]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2598, data_type#2599, comment#2600]
|
jonathan
|
|
f62ad77d-f169-48df-bce1-bdbabab8b106
|
2025/06/13 22:51:53
|
2025/06/13 22:51:53
|
2025/06/13 22:51:54
|
11 ms
|
321 ms
|
SHOW TABLES IN `global_temp`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3011, tableName#3012, isTemporary#3013]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3011, tableName#3012, isTemporary#3013]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#3011, tableName#3012, isTemporary#3013], ShowTables [namespace#3011, tableName#3012, isTemporary#3013], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#3011, tableName#3012, isTemporary#3013]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#3011, tableName#3012, isTemporary#3013]
+- ShowTables [namespace#3011, tableName#3012, isTemporary#3013], V2SessionCatalog(spark_catalog), [global_temp]
|
jonathon
|
|
f4914c47-dc63-4c48-b54d-9a23d0814768
|
2025/06/14 01:46:19
|
2025/06/14 01:46:19
|
2025/06/14 01:46:19
|
23 ms
|
119 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
f195549a-ac76-49e9-b8de-22f313437020
|
2025/06/13 23:38:36
|
2025/06/13 23:38:37
|
2025/06/13 23:38:37
|
83 ms
|
180 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4265, data_type#4266, comment#4267]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4265, data_type#4266, comment#4267]
== Optimized Logical Plan ==
CommandResult [col_name#4265, data_type#4266, comment#4267], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4265, data_type#4266, comment#4267]
== Physical Plan ==
CommandResult [col_name#4265, data_type#4266, comment#4267]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4265, data_type#4266, comment#4267]
|
jonathon
|
|
f10e8c20-2019-4ab0-af5d-242b69ccdbc0
|
2025/06/14 01:47:47
|
2025/06/14 01:47:47
|
2025/06/14 01:47:47
|
207 ms
|
300 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
|
eeddfd91-109c-4a73-a389-07106b8b68b8
|
2025/06/13 07:55:31
|
2025/06/13 07:55:31
|
2025/06/13 07:55:31
|
258 ms
|
418 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
[45]
|
ee2b3dd1-6798-44ba-a603-2d3b58ef9f09
|
2025/06/14 01:47:48
|
2025/06/14 01:47:48
|
2025/06/14 01:47:48
|
170 ms
|
267 ms
|
SELECT C_5 AS C_13, C_43 AS C_23, C_6 AS C_15, C_4331 AS C_19, C_4332 AS C_21, C_4333 AS C_18, C_0 AS C_25, C_1 AS C_20, C_2 AS C_16, C_3 AS C_22, C_4 AS C_12, C_10 AS C_17, C_11 AS C_14, C_9 AS C_24 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_5, C_64656661756c745f616972706f727473.`type` AS C_43, C_64656661756c745f616972706f727473.`name` AS C_6, C_64656661756c745f616972706f727473.`lat` AS C_7, C_64656661756c745f616972706f727473.`lon` AS C_8, C_64656661756c745f616972706f727473.`elev` AS C_9, C_64656661756c745f616972706f727473.`continent` AS C_0, C_64656661756c745f616972706f727473.`country` AS C_1, C_64656661756c745f616972706f727473.`region` AS C_2, C_64656661756c745f616972706f727473.`city` AS C_3, C_64656661756c745f616972706f727473.`iata` AS C_4, C_64656661756c745f616972706f727473.`code` AS C_10, C_64656661756c745f616972706f727473.`gps` AS C_11, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_24 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_24 DESC NULLS LAST], true
+- 'Project ['C_5 AS C_13#4790, 'C_43 AS C_23#4791, 'C_6 AS C_15#4792, 'C_4331 AS C_19#4793, 'C_4332 AS C_21#4794, 'C_4333 AS C_18#4795, 'C_0 AS C_25#4796, 'C_1 AS C_20#4797, 'C_2 AS C_16#4798, 'C_3 AS C_22#4799, 'C_4 AS C_12#4800, 'C_10 AS C_17#4801, 'C_11 AS C_14#4802, 'C_9 AS C_24#4803]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_5#4774, 'C_64656661756c745f616972706f727473.type AS C_43#4775, 'C_64656661756c745f616972706f727473.name AS C_6#4776, 'C_64656661756c745f616972706f727473.lat AS C_7#4777, 'C_64656661756c745f616972706f727473.lon AS C_8#4778, 'C_64656661756c745f616972706f727473.elev AS C_9#4779, 'C_64656661756c745f616972706f727473.continent AS C_0#4780, 'C_64656661756c745f616972706f727473.country AS C_1#4781, 'C_64656661756c745f616972706f727473.region AS C_2#4782, 'C_64656661756c745f616972706f727473.city AS C_3#4783, 'C_64656661756c745f616972706f727473.iata AS C_4#4784, 'C_64656661756c745f616972706f727473.code AS C_10#4785, 'C_64656661756c745f616972706f727473.gps AS C_11#4786, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#4787, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#4788, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#4789]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_13: string, C_23: string, C_15: string, C_19: double, C_21: double, C_18: double, C_25: string, C_20: string, C_16: string, C_22: string, C_12: string, C_17: string, C_14: string, C_24: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_24#4803 DESC NULLS LAST], true
+- Project [C_5#4774 AS C_13#4790, C_43#4775 AS C_23#4791, C_6#4776 AS C_15#4792, C_4331#4787 AS C_19#4793, C_4332#4788 AS C_21#4794, C_4333#4789 AS C_18#4795, C_0#4780 AS C_25#4796, C_1#4781 AS C_20#4797, C_2#4782 AS C_16#4798, C_3#4783 AS C_22#4799, C_4#4784 AS C_12#4800, C_10#4785 AS C_17#4801, C_11#4786 AS C_14#4802, C_9#4779 AS C_24#4803]
+- SubqueryAlias C_4954424c
+- Project [id#4804 AS C_5#4774, type#4805 AS C_43#4775, name#4806 AS C_6#4776, lat#4807 AS C_7#4777, lon#4808 AS C_8#4778, elev#4809 AS C_9#4779, continent#4810 AS C_0#4780, country#4811 AS C_1#4781, region#4812 AS C_2#4782, city#4813 AS C_3#4783, iata#4814 AS C_4#4784, code#4815 AS C_10#4785, gps#4816 AS C_11#4786, (round((lat#4807 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#4787, (round((lon#4808 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#4788, (round((elev#4809 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#4789]
+- Filter (((lon#4808 <= -104.05) AND (lon#4808 >= -111.05)) AND ((lat#4807 >= 41.0) AND (lat#4807 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#4804,type#4805,name#4806,lat#4807,lon#4808,elev#4809,continent#4810,country#4811,region#4812,city#4813,iata#4814,code#4815,gps#4816] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_24#4803 DESC NULLS LAST], true
+- Project [id#4804 AS C_13#4790, type#4805 AS C_23#4791, name#4806 AS C_15#4792, (round((lat#4807 * 1000.0), 0) / 1000.0) AS C_19#4793, (round((lon#4808 * 1000.0), 0) / 1000.0) AS C_21#4794, (round((elev#4809 * 1000.0), 0) / 1000.0) AS C_18#4795, continent#4810 AS C_25#4796, country#4811 AS C_20#4797, region#4812 AS C_16#4798, city#4813 AS C_22#4799, iata#4814 AS C_12#4800, code#4815 AS C_17#4801, gps#4816 AS C_14#4802, elev#4809 AS C_24#4803]
+- Filter ((isnotnull(lon#4808) AND isnotnull(lat#4807)) AND (((lon#4808 <= -104.05) AND (lon#4808 >= -111.05)) AND ((lat#4807 >= 41.0) AND (lat#4807 <= 45.0))))
+- Relation spark_catalog.default.airports[id#4804,type#4805,name#4806,lat#4807,lon#4808,elev#4809,continent#4810,country#4811,region#4812,city#4813,iata#4814,code#4815,gps#4816] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_24#4803 DESC NULLS LAST], output=[C_13#4790,C_23#4791,C_15#4792,C_19#4793,C_21#4794,C_18#4795,C_25#4796,C_20#4797,C_16#4798,C_22#4799,C_12#4800,C_17#4801,C_14#4802,C_24#4803])
+- *(1) Project [id#4804 AS C_13#4790, type#4805 AS C_23#4791, name#4806 AS C_15#4792, (round((lat#4807 * 1000.0), 0) / 1000.0) AS C_19#4793, (round((lon#4808 * 1000.0), 0) / 1000.0) AS C_21#4794, (round((elev#4809 * 1000.0), 0) / 1000.0) AS C_18#4795, continent#4810 AS C_25#4796, country#4811 AS C_20#4797, region#4812 AS C_16#4798, city#4813 AS C_22#4799, iata#4814 AS C_12#4800, code#4815 AS C_17#4801, gps#4816 AS C_14#4802, elev#4809 AS C_24#4803]
+- *(1) Filter (((((isnotnull(lon#4808) AND isnotnull(lat#4807)) AND (lon#4808 <= -104.05)) AND (lon#4808 >= -111.05)) AND (lat#4807 >= 41.0)) AND (lat#4807 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#4804,type#4805,name#4806,lat#4807,lon#4808,elev#4809,continent#4810,country#4811,region#4812,city#4813,iata#4814,code#4815,gps#4816] Batched: true, DataFilters: [isnotnull(lon#4808), isnotnull(lat#4807), (lon#4808 <= -104.05), (lon#4808 >= -111.05), (lat#480..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
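The generated query above filters `default`.`airports` to a lon/lat bounding box, rounds each coordinate to three decimals via `round(x * power(10, 3), 0) / power(10, 3)` (which the optimizer folds to `round(x * 1000.0, 0) / 1000.0`), and returns the five highest-elevation rows (`C_24` is `elev`). A plain-Python sketch of that logic; the sample rows are hypothetical, not real airport data:

```python
def round3(x: float) -> float:
    """Round to 3 decimal places the way the query does: round(x*10^3)/10^3."""
    return round(x * 10**3) / 10**3

# (name, lat, lon, elev) -- made-up sample rows for illustration.
airports = [
    ("A", 41.12345, -104.5, 1200.0),
    ("B", 44.9, -110.0, 2400.5),
    ("C", 50.0, -105.0, 3000.0),   # lat outside the box -> filtered out
    ("D", 43.0, -100.0, 1800.0),   # lon outside the box -> filtered out
]

# WHERE lon BETWEEN -111.05 AND -104.05 AND lat BETWEEN 41.0 AND 45.0
in_box = [
    (name, round3(lat), round3(lon), elev)
    for name, lat, lon, elev in airports
    if -111.05 <= lon <= -104.05 and 41.0 <= lat <= 45.0
]

# ORDER BY elev DESC LIMIT 5 (Spark plans this as TakeOrderedAndProject)
top5 = sorted(in_box, key=lambda r: r[3], reverse=True)[:5]
print(top5)  # B (elev 2400.5) sorts ahead of A; C and D are filtered out
```

Note the pushed filters in the `FileScan` node: the bounding-box predicates and the implied `IsNotNull(lon)`/`IsNotNull(lat)` checks are pushed down to the Parquet scan rather than evaluated after the read.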
|
jonathon
|
|
ed3cdc17-162d-4b0b-8d62-e4943ebd2ef3
|
2025/06/14 01:47:46
|
2025/06/14 01:47:46
|
2025/06/14 01:47:47
|
35 ms
|
179 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4703, value#4704, meaning#4705, Since version#4706], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4703, value#4704, meaning#4705, Since version#4706]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
[34]
|
ecb3129b-9429-40bb-b1cd-f300b33f73c0
|
2025/06/13 07:55:32
|
2025/06/13 07:55:32
|
2025/06/13 07:55:32
|
200 ms
|
357 ms
|
SELECT C_43 AS C_12, C_1 AS C_16, C_2 AS C_14, C_4331 AS C_17, C_4332 AS C_13, C_4333 AS C_15, C_8 AS C_19, C_9 AS C_21, C_5 AS C_18, C_6 AS C_20, C_7 AS C_22, C_10 AS C_24, C_11 AS C_23, C_3 AS C_25 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_43, C_64656661756c745f616972706f727473.`type` AS C_1, C_64656661756c745f616972706f727473.`name` AS C_2, C_64656661756c745f616972706f727473.`lat` AS C_4, C_64656661756c745f616972706f727473.`lon` AS C_0, C_64656661756c745f616972706f727473.`elev` AS C_3, C_64656661756c745f616972706f727473.`continent` AS C_8, C_64656661756c745f616972706f727473.`country` AS C_9, C_64656661756c745f616972706f727473.`region` AS C_5, C_64656661756c745f616972706f727473.`city` AS C_6, C_64656661756c745f616972706f727473.`iata` AS C_7, C_64656661756c745f616972706f727473.`code` AS C_10, C_64656661756c745f616972706f727473.`gps` AS C_11, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_25 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_25 DESC NULLS LAST], true
+- 'Project ['C_43 AS C_12#2264, 'C_1 AS C_16#2265, 'C_2 AS C_14#2266, 'C_4331 AS C_17#2267, 'C_4332 AS C_13#2268, 'C_4333 AS C_15#2269, 'C_8 AS C_19#2270, 'C_9 AS C_21#2271, 'C_5 AS C_18#2272, 'C_6 AS C_20#2273, 'C_7 AS C_22#2274, 'C_10 AS C_24#2275, 'C_11 AS C_23#2276, 'C_3 AS C_25#2277]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_43#2248, 'C_64656661756c745f616972706f727473.type AS C_1#2249, 'C_64656661756c745f616972706f727473.name AS C_2#2250, 'C_64656661756c745f616972706f727473.lat AS C_4#2251, 'C_64656661756c745f616972706f727473.lon AS C_0#2252, 'C_64656661756c745f616972706f727473.elev AS C_3#2253, 'C_64656661756c745f616972706f727473.continent AS C_8#2254, 'C_64656661756c745f616972706f727473.country AS C_9#2255, 'C_64656661756c745f616972706f727473.region AS C_5#2256, 'C_64656661756c745f616972706f727473.city AS C_6#2257, 'C_64656661756c745f616972706f727473.iata AS C_7#2258, 'C_64656661756c745f616972706f727473.code AS C_10#2259, 'C_64656661756c745f616972706f727473.gps AS C_11#2260, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#2261, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#2262, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#2263]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_12: string, C_16: string, C_14: string, C_17: double, C_13: double, C_15: double, C_19: string, C_21: string, C_18: string, C_20: string, C_22: string, C_24: string, C_23: string, C_25: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_25#2277 DESC NULLS LAST], true
+- Project [C_43#2248 AS C_12#2264, C_1#2249 AS C_16#2265, C_2#2250 AS C_14#2266, C_4331#2261 AS C_17#2267, C_4332#2262 AS C_13#2268, C_4333#2263 AS C_15#2269, C_8#2254 AS C_19#2270, C_9#2255 AS C_21#2271, C_5#2256 AS C_18#2272, C_6#2257 AS C_20#2273, C_7#2258 AS C_22#2274, C_10#2259 AS C_24#2275, C_11#2260 AS C_23#2276, C_3#2253 AS C_25#2277]
+- SubqueryAlias C_4954424c
+- Project [id#2278 AS C_43#2248, type#2279 AS C_1#2249, name#2280 AS C_2#2250, lat#2281 AS C_4#2251, lon#2282 AS C_0#2252, elev#2283 AS C_3#2253, continent#2284 AS C_8#2254, country#2285 AS C_9#2255, region#2286 AS C_5#2256, city#2287 AS C_6#2257, iata#2288 AS C_7#2258, code#2289 AS C_10#2259, gps#2290 AS C_11#2260, (round((lat#2281 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#2261, (round((lon#2282 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#2262, (round((elev#2283 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#2263]
+- Filter (((lon#2282 <= -104.05) AND (lon#2282 >= -111.05)) AND ((lat#2281 >= 41.0) AND (lat#2281 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#2278,type#2279,name#2280,lat#2281,lon#2282,elev#2283,continent#2284,country#2285,region#2286,city#2287,iata#2288,code#2289,gps#2290] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_25#2277 DESC NULLS LAST], true
+- Project [id#2278 AS C_12#2264, type#2279 AS C_16#2265, name#2280 AS C_14#2266, (round((lat#2281 * 1000.0), 0) / 1000.0) AS C_17#2267, (round((lon#2282 * 1000.0), 0) / 1000.0) AS C_13#2268, (round((elev#2283 * 1000.0), 0) / 1000.0) AS C_15#2269, continent#2284 AS C_19#2270, country#2285 AS C_21#2271, region#2286 AS C_18#2272, city#2287 AS C_20#2273, iata#2288 AS C_22#2274, code#2289 AS C_24#2275, gps#2290 AS C_23#2276, elev#2283 AS C_25#2277]
+- Filter ((isnotnull(lon#2282) AND isnotnull(lat#2281)) AND (((lon#2282 <= -104.05) AND (lon#2282 >= -111.05)) AND ((lat#2281 >= 41.0) AND (lat#2281 <= 45.0))))
+- Relation spark_catalog.default.airports[id#2278,type#2279,name#2280,lat#2281,lon#2282,elev#2283,continent#2284,country#2285,region#2286,city#2287,iata#2288,code#2289,gps#2290] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_25#2277 DESC NULLS LAST], output=[C_12#2264,C_16#2265,C_14#2266,C_17#2267,C_13#2268,C_15#2269,C_19#2270,C_21#2271,C_18#2272,C_20#2273,C_22#2274,C_24#2275,C_23#2276,C_25#2277])
+- *(1) Project [id#2278 AS C_12#2264, type#2279 AS C_16#2265, name#2280 AS C_14#2266, (round((lat#2281 * 1000.0), 0) / 1000.0) AS C_17#2267, (round((lon#2282 * 1000.0), 0) / 1000.0) AS C_13#2268, (round((elev#2283 * 1000.0), 0) / 1000.0) AS C_15#2269, continent#2284 AS C_19#2270, country#2285 AS C_21#2271, region#2286 AS C_18#2272, city#2287 AS C_20#2273, iata#2288 AS C_22#2274, code#2289 AS C_24#2275, gps#2290 AS C_23#2276, elev#2283 AS C_25#2277]
+- *(1) Filter (((((isnotnull(lon#2282) AND isnotnull(lat#2281)) AND (lon#2282 <= -104.05)) AND (lon#2282 >= -111.05)) AND (lat#2281 >= 41.0)) AND (lat#2281 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#2278,type#2279,name#2280,lat#2281,lon#2282,elev#2283,continent#2284,country#2285,region#2286,city#2287,iata#2288,code#2289,gps#2290] Batched: true, DataFilters: [isnotnull(lon#2282), isnotnull(lat#2281), (lon#2282 <= -104.05), (lon#2282 >= -111.05), (lat#228..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
|
jonathan
|
|
ebbdceac-2358-42f5-91b4-13bf7f038e68
|
2025/06/13 23:21:11
|
2025/06/13 23:21:11
|
2025/06/13 23:21:12
|
28 ms
|
336 ms
|
SHOW TABLES IN `onetableschema`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3420, tableName#3421, isTemporary#3422]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3420, tableName#3421, isTemporary#3422]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#3420, tableName#3421, isTemporary#3422], ShowTables [namespace#3420, tableName#3421, isTemporary#3422], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#3420, tableName#3421, isTemporary#3422]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#3420, tableName#3421, isTemporary#3422]
+- ShowTables [namespace#3420, tableName#3421, isTemporary#3422], V2SessionCatalog(spark_catalog), [onetableschema]
|
jonathon
|
|
e80af083-fac5-4e20-be69-410dbbb809c1
|
2025/06/15 06:45:38
|
2025/06/15 06:45:38
|
2025/06/15 06:45:38
|
78 ms
|
234 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#5448, data_type#5449, comment#5450]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5448, data_type#5449, comment#5450]
== Optimized Logical Plan ==
CommandResult [col_name#5448, data_type#5449, comment#5450], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5448, data_type#5449, comment#5450]
== Physical Plan ==
CommandResult [col_name#5448, data_type#5449, comment#5450]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5448, data_type#5449, comment#5450]
|
jonathan
|
|
e779eaf6-19a6-4e51-87a1-9cdc92370694
|
2025/06/13 23:34:51
|
2025/06/13 23:34:51
|
2025/06/13 23:34:51
|
18 ms
|
336 ms
|
SHOW TABLES IN `test`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#4118, tableName#4119, isTemporary#4120]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4118, tableName#4119, isTemporary#4120]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#4118, tableName#4119, isTemporary#4120], ShowTables [namespace#4118, tableName#4119, isTemporary#4120], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#4118, tableName#4119, isTemporary#4120]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#4118, tableName#4119, isTemporary#4120]
+- ShowTables [namespace#4118, tableName#4119, isTemporary#4120], V2SessionCatalog(spark_catalog), [test]
|
jonathon
|
|
e68ea7bc-3ad4-4647-854d-1d6c077dde95
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
48 ms
|
142 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
e42445e9-0901-4632-8649-75569bf5ac95
|
2025/06/13 22:37:47
|
2025/06/13 22:37:47
|
2025/06/13 22:37:47
|
240 ms
|
331 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
|
e00c8cb7-9bec-494c-9f44-2c1d31af2513
|
2025/06/14 05:46:26
|
2025/06/14 05:46:26
|
2025/06/14 05:46:27
|
81 ms
|
237 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4872, data_type#4873, comment#4874]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4872, data_type#4873, comment#4874]
== Optimized Logical Plan ==
CommandResult [col_name#4872, data_type#4873, comment#4874], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4872, data_type#4873, comment#4874]
== Physical Plan ==
CommandResult [col_name#4872, data_type#4873, comment#4874]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4872, data_type#4873, comment#4874]
|
jonathon
|
|
e0086b25-30c5-4faa-8f94-69f3645f7be2
|
2025/06/14 05:46:27
|
2025/06/14 05:46:27
|
2025/06/14 05:46:27
|
24 ms
|
177 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
de84919a-5ae5-4156-acc5-4af4137969e9
|
2025/06/13 23:24:48
|
2025/06/13 23:24:48
|
2025/06/13 23:24:48
|
54 ms
|
343 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3647, data_type#3648, comment#3649]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3647, data_type#3648, comment#3649]
== Optimized Logical Plan ==
CommandResult [col_name#3647, data_type#3648, comment#3649], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3647, data_type#3648, comment#3649]
== Physical Plan ==
CommandResult [col_name#3647, data_type#3648, comment#3649]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3647, data_type#3648, comment#3649]
|
jonathon
|
|
de4e9f6b-1dde-46e8-a9f8-94f35fbf6cd1
|
2025/06/13 22:44:54
|
2025/06/13 22:44:54
|
2025/06/13 22:44:54
|
36 ms
|
209 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#2797, value#2798, meaning#2799, Since version#2800], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#2797, value#2798, meaning#2799, Since version#2800]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
[33]
|
dd9c74a0-d6cd-4177-80d7-a0ed129f43bf
|
2025/06/13 07:16:52
|
2025/06/13 07:16:52
|
2025/06/13 07:16:52
|
211 ms
|
308 ms
|
SELECT C_10 AS C_12, C_2 AS C_14, C_43 AS C_17, C_4331 AS C_18, C_4332 AS C_21, C_4333 AS C_19, C_9 AS C_23, C_0 AS C_16, C_3 AS C_15, C_1 AS C_22, C_4 AS C_13, C_5 AS C_24, C_11 AS C_25, C_6 AS C_20 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_10, C_64656661756c745f616972706f727473.`type` AS C_2, C_64656661756c745f616972706f727473.`name` AS C_43, C_64656661756c745f616972706f727473.`lat` AS C_7, C_64656661756c745f616972706f727473.`lon` AS C_8, C_64656661756c745f616972706f727473.`elev` AS C_6, C_64656661756c745f616972706f727473.`continent` AS C_9, C_64656661756c745f616972706f727473.`country` AS C_0, C_64656661756c745f616972706f727473.`region` AS C_3, C_64656661756c745f616972706f727473.`city` AS C_1, C_64656661756c745f616972706f727473.`iata` AS C_4, C_64656661756c745f616972706f727473.`code` AS C_5, C_64656661756c745f616972706f727473.`gps` AS C_11, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_20 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_20 DESC NULLS LAST], true
+- 'Project ['C_10 AS C_12#2120, 'C_2 AS C_14#2121, 'C_43 AS C_17#2122, 'C_4331 AS C_18#2123, 'C_4332 AS C_21#2124, 'C_4333 AS C_19#2125, 'C_9 AS C_23#2126, 'C_0 AS C_16#2127, 'C_3 AS C_15#2128, 'C_1 AS C_22#2129, 'C_4 AS C_13#2130, 'C_5 AS C_24#2131, 'C_11 AS C_25#2132, 'C_6 AS C_20#2133]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_10#2104, 'C_64656661756c745f616972706f727473.type AS C_2#2105, 'C_64656661756c745f616972706f727473.name AS C_43#2106, 'C_64656661756c745f616972706f727473.lat AS C_7#2107, 'C_64656661756c745f616972706f727473.lon AS C_8#2108, 'C_64656661756c745f616972706f727473.elev AS C_6#2109, 'C_64656661756c745f616972706f727473.continent AS C_9#2110, 'C_64656661756c745f616972706f727473.country AS C_0#2111, 'C_64656661756c745f616972706f727473.region AS C_3#2112, 'C_64656661756c745f616972706f727473.city AS C_1#2113, 'C_64656661756c745f616972706f727473.iata AS C_4#2114, 'C_64656661756c745f616972706f727473.code AS C_5#2115, 'C_64656661756c745f616972706f727473.gps AS C_11#2116, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#2117, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#2118, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#2119]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_12: string, C_14: string, C_17: string, C_18: double, C_21: double, C_19: double, C_23: string, C_16: string, C_15: string, C_22: string, C_13: string, C_24: string, C_25: string, C_20: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_20#2133 DESC NULLS LAST], true
+- Project [C_10#2104 AS C_12#2120, C_2#2105 AS C_14#2121, C_43#2106 AS C_17#2122, C_4331#2117 AS C_18#2123, C_4332#2118 AS C_21#2124, C_4333#2119 AS C_19#2125, C_9#2110 AS C_23#2126, C_0#2111 AS C_16#2127, C_3#2112 AS C_15#2128, C_1#2113 AS C_22#2129, C_4#2114 AS C_13#2130, C_5#2115 AS C_24#2131, C_11#2116 AS C_25#2132, C_6#2109 AS C_20#2133]
+- SubqueryAlias C_4954424c
+- Project [id#2134 AS C_10#2104, type#2135 AS C_2#2105, name#2136 AS C_43#2106, lat#2137 AS C_7#2107, lon#2138 AS C_8#2108, elev#2139 AS C_6#2109, continent#2140 AS C_9#2110, country#2141 AS C_0#2111, region#2142 AS C_3#2112, city#2143 AS C_1#2113, iata#2144 AS C_4#2114, code#2145 AS C_5#2115, gps#2146 AS C_11#2116, (round((lat#2137 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#2117, (round((lon#2138 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#2118, (round((elev#2139 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#2119]
+- Filter (((lon#2138 <= -104.05) AND (lon#2138 >= -111.05)) AND ((lat#2137 >= 41.0) AND (lat#2137 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#2134,type#2135,name#2136,lat#2137,lon#2138,elev#2139,continent#2140,country#2141,region#2142,city#2143,iata#2144,code#2145,gps#2146] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_20#2133 DESC NULLS LAST], true
+- Project [id#2134 AS C_12#2120, type#2135 AS C_14#2121, name#2136 AS C_17#2122, (round((lat#2137 * 1000.0), 0) / 1000.0) AS C_18#2123, (round((lon#2138 * 1000.0), 0) / 1000.0) AS C_21#2124, (round((elev#2139 * 1000.0), 0) / 1000.0) AS C_19#2125, continent#2140 AS C_23#2126, country#2141 AS C_16#2127, region#2142 AS C_15#2128, city#2143 AS C_22#2129, iata#2144 AS C_13#2130, code#2145 AS C_24#2131, gps#2146 AS C_25#2132, elev#2139 AS C_20#2133]
+- Filter ((isnotnull(lon#2138) AND isnotnull(lat#2137)) AND (((lon#2138 <= -104.05) AND (lon#2138 >= -111.05)) AND ((lat#2137 >= 41.0) AND (lat#2137 <= 45.0))))
+- Relation spark_catalog.default.airports[id#2134,type#2135,name#2136,lat#2137,lon#2138,elev#2139,continent#2140,country#2141,region#2142,city#2143,iata#2144,code#2145,gps#2146] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_20#2133 DESC NULLS LAST], output=[C_12#2120,C_14#2121,C_17#2122,C_18#2123,C_21#2124,C_19#2125,C_23#2126,C_16#2127,C_15#2128,C_22#2129,C_13#2130,C_24#2131,C_25#2132,C_20#2133])
+- *(1) Project [id#2134 AS C_12#2120, type#2135 AS C_14#2121, name#2136 AS C_17#2122, (round((lat#2137 * 1000.0), 0) / 1000.0) AS C_18#2123, (round((lon#2138 * 1000.0), 0) / 1000.0) AS C_21#2124, (round((elev#2139 * 1000.0), 0) / 1000.0) AS C_19#2125, continent#2140 AS C_23#2126, country#2141 AS C_16#2127, region#2142 AS C_15#2128, city#2143 AS C_22#2129, iata#2144 AS C_13#2130, code#2145 AS C_24#2131, gps#2146 AS C_25#2132, elev#2139 AS C_20#2133]
+- *(1) Filter (((((isnotnull(lon#2138) AND isnotnull(lat#2137)) AND (lon#2138 <= -104.05)) AND (lon#2138 >= -111.05)) AND (lat#2137 >= 41.0)) AND (lat#2137 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#2134,type#2135,name#2136,lat#2137,lon#2138,elev#2139,continent#2140,country#2141,region#2142,city#2143,iata#2144,code#2145,gps#2146] Batched: true, DataFilters: [isnotnull(lon#2138), isnotnull(lat#2137), (lon#2138 <= -104.05), (lon#2138 >= -111.05), (lat#213..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
|
jonathon
|
|
dd068b27-516d-4575-b94e-a989922bdbb9
|
2025/06/14 01:47:47
|
2025/06/14 01:47:47
|
2025/06/14 01:47:47
|
82 ms
|
178 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4751, data_type#4752, comment#4753]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4751, data_type#4752, comment#4753]
== Optimized Logical Plan ==
CommandResult [col_name#4751, data_type#4752, comment#4753], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4751, data_type#4752, comment#4753]
== Physical Plan ==
CommandResult [col_name#4751, data_type#4752, comment#4753]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4751, data_type#4752, comment#4753]
|
jonathan
|
|
dc6650c6-d345-46fa-808d-9b59408d5af6
|
2025/06/13 23:34:50
|
2025/06/13 23:34:50
|
2025/06/13 23:34:50
|
53 ms
|
328 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#4058, tableName#4059, isTemporary#4060]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4058, tableName#4059, isTemporary#4060]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#4058, tableName#4059, isTemporary#4060], ShowTables [namespace#4058, tableName#4059, isTemporary#4060], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#4058, tableName#4059, isTemporary#4060]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#4058, tableName#4059, isTemporary#4060]
+- ShowTables [namespace#4058, tableName#4059, isTemporary#4060], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
jonathon
|
|
d8e08b2c-1486-4408-b18e-49868f58bf30
|
2025/06/13 23:20:55
|
2025/06/13 23:20:55
|
2025/06/13 23:20:55
|
26 ms
|
121 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
d85ab925-3a3a-45d9-b653-c25d2e078f24
|
2025/06/14 05:46:26
|
2025/06/14 05:46:26
|
2025/06/14 05:46:26
|
210 ms
|
366 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
|
d57f7ae0-bf2a-48e4-a2d0-0d8fc821033b
|
2025/06/15 06:48:30
|
2025/06/15 06:48:30
|
2025/06/15 06:48:31
|
215 ms
|
308 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
|
d5105876-2692-43cf-a8ec-b8e810803663
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
42 ms
|
353 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#3808, value#3809, meaning#3810, Since version#3811], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#3808, value#3809, meaning#3810, Since version#3811]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
d46e6094-0887-4e91-bfc4-bfd910f7945a
|
2025/06/13 23:12:39
|
2025/06/13 23:12:39
|
2025/06/13 23:12:39
|
82 ms
|
666 ms
|
DESCRIBE TABLE `default`.`AllTypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3101, data_type#3102, comment#3103]
+- 'UnresolvedTableOrView [default, AllTypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3101, data_type#3102, comment#3103]
== Optimized Logical Plan ==
CommandResult [col_name#3101, data_type#3102, comment#3103], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3101, data_type#3102, comment#3103]
== Physical Plan ==
CommandResult [col_name#3101, data_type#3102, comment#3103]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3101, data_type#3102, comment#3103]
|
jonathan
|
|
d38113ab-3c71-48c7-9161-558f6b0b85a3
|
2025/06/13 23:23:32
|
2025/06/13 23:23:32
|
2025/06/13 23:23:32
|
86 ms
|
352 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3566, data_type#3567, comment#3568]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3566, data_type#3567, comment#3568]
== Optimized Logical Plan ==
CommandResult [col_name#3566, data_type#3567, comment#3568], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3566, data_type#3567, comment#3568]
== Physical Plan ==
CommandResult [col_name#3566, data_type#3567, comment#3568]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3566, data_type#3567, comment#3568]
|
jonathan
|
|
d3607e54-df5f-406c-b493-bbdc49b92c86
|
2025/06/13 23:27:03
|
2025/06/13 23:27:03
|
2025/06/13 23:27:03
|
26 ms
|
337 ms
|
SHOW TABLES IN `onetableschema`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3768, tableName#3769, isTemporary#3770]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3768, tableName#3769, isTemporary#3770]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#3768, tableName#3769, isTemporary#3770], ShowTables [namespace#3768, tableName#3769, isTemporary#3770], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#3768, tableName#3769, isTemporary#3770]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#3768, tableName#3769, isTemporary#3770]
+- ShowTables [namespace#3768, tableName#3769, isTemporary#3770], V2SessionCatalog(spark_catalog), [onetableschema]
|
jonathan
|
|
d2b58a5b-b61f-4d30-92ef-9837d1d25622
|
2025/06/13 22:39:05
|
2025/06/13 22:39:06
|
2025/06/13 22:39:06
|
40 ms
|
379 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2717, tableName#2718, isTemporary#2719]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2717, tableName#2718, isTemporary#2719]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#2717, tableName#2718, isTemporary#2719], ShowTables [namespace#2717, tableName#2718, isTemporary#2719], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#2717, tableName#2718, isTemporary#2719]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#2717, tableName#2718, isTemporary#2719]
+- ShowTables [namespace#2717, tableName#2718, isTemporary#2719], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
jonathon
|
|
d0d4aba6-8208-43a0-a733-6f6a2284b010
|
2025/06/13 22:44:55
|
2025/06/13 22:44:55
|
2025/06/13 22:44:55
|
47 ms
|
147 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
d05b3f61-2402-4b48-b9c2-c45932a10b4f
|
2025/06/13 23:21:09
|
2025/06/13 23:21:09
|
2025/06/13 23:21:10
|
13 ms
|
934 ms
|
Listing catalogs
|
CLOSED
|
|
jonathon
|
|
d03c9acc-0fe4-4e21-84e1-736635b4db5d
|
2025/06/14 06:13:09
|
2025/06/14 06:13:09
|
2025/06/14 06:13:09
|
22 ms
|
117 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
d00ae85e-1e66-4e4b-81de-74575d789d5f
|
2025/06/13 22:54:12
|
2025/06/13 22:54:12
|
2025/06/13 22:54:13
|
12 ms
|
335 ms
|
SHOW TABLES IN `global_temp`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3091, tableName#3092, isTemporary#3093]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3091, tableName#3092, isTemporary#3093]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#3091, tableName#3092, isTemporary#3093], ShowTables [namespace#3091, tableName#3092, isTemporary#3093], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#3091, tableName#3092, isTemporary#3093]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#3091, tableName#3092, isTemporary#3093]
+- ShowTables [namespace#3091, tableName#3092, isTemporary#3093], V2SessionCatalog(spark_catalog), [global_temp]
|
jonathan
|
|
cfe9ec7e-8e1d-480c-ae46-74da9249f1e5
|
2025/06/13 19:06:15
|
2025/06/13 19:06:15
|
2025/06/13 19:06:16
|
102 ms
|
506 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2321, data_type#2322, comment#2323]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2321, data_type#2322, comment#2323]
== Optimized Logical Plan ==
CommandResult [col_name#2321, data_type#2322, comment#2323], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2321, data_type#2322, comment#2323]
== Physical Plan ==
CommandResult [col_name#2321, data_type#2322, comment#2323]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2321, data_type#2322, comment#2323]
|
jonathon
|
|
cded8feb-be09-499b-8a5b-645bcfd08839
|
2025/06/15 06:43:48
|
2025/06/15 06:43:48
|
2025/06/15 06:43:48
|
23 ms
|
115 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
cc399ebb-36c2-40d4-90fa-ebfe29305ea1
|
2025/06/15 06:43:48
|
2025/06/15 06:43:48
|
2025/06/15 06:43:48
|
50 ms
|
144 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
c90cccb9-6b56-403e-aa36-8fa4abe6011e
|
2025/06/13 22:44:54
|
2025/06/13 22:44:55
|
2025/06/13 22:44:55
|
222 ms
|
317 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
|
c73f7b30-aa56-45ea-a9cf-15a6b09ffa47
|
2025/06/14 06:13:09
|
2025/06/14 06:13:09
|
2025/06/14 06:13:09
|
84 ms
|
185 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#5039, data_type#5040, comment#5041]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5039, data_type#5040, comment#5041]
== Optimized Logical Plan ==
CommandResult [col_name#5039, data_type#5040, comment#5041], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5039, data_type#5040, comment#5041]
== Physical Plan ==
CommandResult [col_name#5039, data_type#5040, comment#5041]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5039, data_type#5040, comment#5041]
|
jonathon
|
[49]
|
c5b801e3-00af-402d-acbd-beefd5e1196d
|
2025/06/15 06:43:48
|
2025/06/15 06:43:49
|
2025/06/15 06:43:49
|
221 ms
|
315 ms
|
SELECT C_43 AS C_13, C_0 AS C_12, C_8 AS C_18, C_4331 AS C_21, C_4332 AS C_17, C_4333 AS C_20, C_4 AS C_22, C_5 AS C_25, C_6 AS C_23, C_7 AS C_14, C_9 AS C_24, C_11 AS C_16, C_10 AS C_19, C_2 AS C_15 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_43, C_64656661756c745f616972706f727473.`type` AS C_0, C_64656661756c745f616972706f727473.`name` AS C_8, C_64656661756c745f616972706f727473.`lat` AS C_3, C_64656661756c745f616972706f727473.`lon` AS C_1, C_64656661756c745f616972706f727473.`elev` AS C_2, C_64656661756c745f616972706f727473.`continent` AS C_4, C_64656661756c745f616972706f727473.`country` AS C_5, C_64656661756c745f616972706f727473.`region` AS C_6, C_64656661756c745f616972706f727473.`city` AS C_7, C_64656661756c745f616972706f727473.`iata` AS C_9, C_64656661756c745f616972706f727473.`code` AS C_11, C_64656661756c745f616972706f727473.`gps` AS C_10, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_15 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_15 DESC NULLS LAST], true
+- 'Project ['C_43 AS C_13#5366, 'C_0 AS C_12#5367, 'C_8 AS C_18#5368, 'C_4331 AS C_21#5369, 'C_4332 AS C_17#5370, 'C_4333 AS C_20#5371, 'C_4 AS C_22#5372, 'C_5 AS C_25#5373, 'C_6 AS C_23#5374, 'C_7 AS C_14#5375, 'C_9 AS C_24#5376, 'C_11 AS C_16#5377, 'C_10 AS C_19#5378, 'C_2 AS C_15#5379]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_43#5350, 'C_64656661756c745f616972706f727473.type AS C_0#5351, 'C_64656661756c745f616972706f727473.name AS C_8#5352, 'C_64656661756c745f616972706f727473.lat AS C_3#5353, 'C_64656661756c745f616972706f727473.lon AS C_1#5354, 'C_64656661756c745f616972706f727473.elev AS C_2#5355, 'C_64656661756c745f616972706f727473.continent AS C_4#5356, 'C_64656661756c745f616972706f727473.country AS C_5#5357, 'C_64656661756c745f616972706f727473.region AS C_6#5358, 'C_64656661756c745f616972706f727473.city AS C_7#5359, 'C_64656661756c745f616972706f727473.iata AS C_9#5360, 'C_64656661756c745f616972706f727473.code AS C_11#5361, 'C_64656661756c745f616972706f727473.gps AS C_10#5362, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#5363, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#5364, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#5365]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_13: string, C_12: string, C_18: string, C_21: double, C_17: double, C_20: double, C_22: string, C_25: string, C_23: string, C_14: string, C_24: string, C_16: string, C_19: string, C_15: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_15#5379 DESC NULLS LAST], true
+- Project [C_43#5350 AS C_13#5366, C_0#5351 AS C_12#5367, C_8#5352 AS C_18#5368, C_4331#5363 AS C_21#5369, C_4332#5364 AS C_17#5370, C_4333#5365 AS C_20#5371, C_4#5356 AS C_22#5372, C_5#5357 AS C_25#5373, C_6#5358 AS C_23#5374, C_7#5359 AS C_14#5375, C_9#5360 AS C_24#5376, C_11#5361 AS C_16#5377, C_10#5362 AS C_19#5378, C_2#5355 AS C_15#5379]
+- SubqueryAlias C_4954424c
+- Project [id#5380 AS C_43#5350, type#5381 AS C_0#5351, name#5382 AS C_8#5352, lat#5383 AS C_3#5353, lon#5384 AS C_1#5354, elev#5385 AS C_2#5355, continent#5386 AS C_4#5356, country#5387 AS C_5#5357, region#5388 AS C_6#5358, city#5389 AS C_7#5359, iata#5390 AS C_9#5360, code#5391 AS C_11#5361, gps#5392 AS C_10#5362, (round((lat#5383 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#5363, (round((lon#5384 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#5364, (round((elev#5385 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#5365]
+- Filter (((lon#5384 <= -104.05) AND (lon#5384 >= -111.05)) AND ((lat#5383 >= 41.0) AND (lat#5383 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#5380,type#5381,name#5382,lat#5383,lon#5384,elev#5385,continent#5386,country#5387,region#5388,city#5389,iata#5390,code#5391,gps#5392] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_15#5379 DESC NULLS LAST], true
+- Project [id#5380 AS C_13#5366, type#5381 AS C_12#5367, name#5382 AS C_18#5368, (round((lat#5383 * 1000.0), 0) / 1000.0) AS C_21#5369, (round((lon#5384 * 1000.0), 0) / 1000.0) AS C_17#5370, (round((elev#5385 * 1000.0), 0) / 1000.0) AS C_20#5371, continent#5386 AS C_22#5372, country#5387 AS C_25#5373, region#5388 AS C_23#5374, city#5389 AS C_14#5375, iata#5390 AS C_24#5376, code#5391 AS C_16#5377, gps#5392 AS C_19#5378, elev#5385 AS C_15#5379]
+- Filter ((isnotnull(lon#5384) AND isnotnull(lat#5383)) AND (((lon#5384 <= -104.05) AND (lon#5384 >= -111.05)) AND ((lat#5383 >= 41.0) AND (lat#5383 <= 45.0))))
+- Relation spark_catalog.default.airports[id#5380,type#5381,name#5382,lat#5383,lon#5384,elev#5385,continent#5386,country#5387,region#5388,city#5389,iata#5390,code#5391,gps#5392] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_15#5379 DESC NULLS LAST], output=[C_13#5366,C_12#5367,C_18#5368,C_21#5369,C_17#5370,C_20#5371,C_22#5372,C_25#5373,C_23#5374,C_14#5375,C_24#5376,C_16#5377,C_19#5378,C_15#5379])
+- *(1) Project [id#5380 AS C_13#5366, type#5381 AS C_12#5367, name#5382 AS C_18#5368, (round((lat#5383 * 1000.0), 0) / 1000.0) AS C_21#5369, (round((lon#5384 * 1000.0), 0) / 1000.0) AS C_17#5370, (round((elev#5385 * 1000.0), 0) / 1000.0) AS C_20#5371, continent#5386 AS C_22#5372, country#5387 AS C_25#5373, region#5388 AS C_23#5374, city#5389 AS C_14#5375, iata#5390 AS C_24#5376, code#5391 AS C_16#5377, gps#5392 AS C_19#5378, elev#5385 AS C_15#5379]
+- *(1) Filter (((((isnotnull(lon#5384) AND isnotnull(lat#5383)) AND (lon#5384 <= -104.05)) AND (lon#5384 >= -111.05)) AND (lat#5383 >= 41.0)) AND (lat#5383 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#5380,type#5381,name#5382,lat#5383,lon#5384,elev#5385,continent#5386,country#5387,region#5388,city#5389,iata#5390,code#5391,gps#5392] Batched: true, DataFilters: [isnotnull(lon#5384), isnotnull(lat#5383), (lon#5384 <= -104.05), (lon#5384 >= -111.05), (lat#538..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
|
jonathan
|
|
c4177770-6474-4fde-940d-efc9fd04cb71
|
2025/06/13 22:39:05
|
2025/06/13 22:39:05
|
2025/06/13 22:39:05
|
34 ms
|
355 ms
|
Listing databases 'catalog : , schemaPattern : null'
|
CLOSED
|
|
jonathon
|
|
c15f7b20-7e3e-4f66-9e0b-e3e86343ca24
|
2025/06/14 01:23:35
|
2025/06/14 01:23:35
|
2025/06/14 01:23:35
|
26 ms
|
179 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
c14d354d-1dd0-45a8-9612-9f0604b08f7a
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
51 ms
|
154 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
c0483dac-dacb-4972-bc33-0b22267e9d6d
|
2025/06/14 01:46:19
|
2025/06/14 01:46:19
|
2025/06/14 01:46:19
|
81 ms
|
177 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4607, data_type#4608, comment#4609]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4607, data_type#4608, comment#4609]
== Optimized Logical Plan ==
CommandResult [col_name#4607, data_type#4608, comment#4609], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4607, data_type#4608, comment#4609]
== Physical Plan ==
CommandResult [col_name#4607, data_type#4608, comment#4609]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4607, data_type#4608, comment#4609]
|
jonathon
|
|
be589622-87e4-495f-ab8e-f1bb824cae28
|
2025/06/13 07:16:50
|
2025/06/13 07:16:51
|
2025/06/13 07:16:51
|
41 ms
|
183 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#2033, value#2034, meaning#2035, Since version#2036], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#2033, value#2034, meaning#2035, Since version#2036]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
bce964e2-7852-4f5d-a13d-f4db03b76edc
|
2025/06/13 23:34:47
|
2025/06/13 23:34:47
|
2025/06/13 23:34:48
|
65 ms
|
617 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3952, data_type#3953, comment#3954]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3952, data_type#3953, comment#3954]
== Optimized Logical Plan ==
CommandResult [col_name#3952, data_type#3953, comment#3954], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3952, data_type#3953, comment#3954]
== Physical Plan ==
CommandResult [col_name#3952, data_type#3953, comment#3954]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3952, data_type#3953, comment#3954]
|
jonathan
|
|
bcc16f8c-edd3-4141-87ec-368a4f20cc6c
|
2025/06/13 22:51:53
|
2025/06/13 22:51:53
|
2025/06/13 22:51:53
|
26 ms
|
335 ms
|
SHOW TABLES IN `onetableschema`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2981, tableName#2982, isTemporary#2983]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2981, tableName#2982, isTemporary#2983]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#2981, tableName#2982, isTemporary#2983], ShowTables [namespace#2981, tableName#2982, isTemporary#2983], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#2981, tableName#2982, isTemporary#2983]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#2981, tableName#2982, isTemporary#2983]
+- ShowTables [namespace#2981, tableName#2982, isTemporary#2983], V2SessionCatalog(spark_catalog), [onetableschema]
|
jonathan
|
|
b9dd0169-a02e-4def-a8dd-397345129b11
|
2025/06/13 23:23:32
|
2025/06/13 23:23:32
|
2025/06/13 23:23:32
|
60 ms
|
327 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3593, data_type#3594, comment#3595]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3593, data_type#3594, comment#3595]
== Optimized Logical Plan ==
CommandResult [col_name#3593, data_type#3594, comment#3595], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3593, data_type#3594, comment#3595]
== Physical Plan ==
CommandResult [col_name#3593, data_type#3594, comment#3595]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3593, data_type#3594, comment#3595]
|
jonathon
|
|
b9a3f197-6d25-4c0b-bafe-d5869488d114
|
2025/06/15 06:43:47
|
2025/06/15 06:43:48
|
2025/06/15 06:43:48
|
219 ms
|
310 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
|
b771d1e2-ba8c-47c4-ad6f-e0223ade0b2c
|
2025/06/13 22:37:48
|
2025/06/13 22:37:48
|
2025/06/13 22:37:48
|
28 ms
|
120 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
b73be3d4-36d0-4ed3-8389-50f00e5273d7
|
2025/06/13 23:29:59
|
2025/06/13 23:29:59
|
2025/06/13 23:29:59
|
88 ms
|
187 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3856, data_type#3857, comment#3858]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#3856, data_type#3857, comment#3858]
== Optimized Logical Plan ==
CommandResult [col_name#3856, data_type#3857, comment#3858], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#3856, data_type#3857, comment#3858]
== Physical Plan ==
CommandResult [col_name#3856, data_type#3857, comment#3858]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#3856, data_type#3857, comment#3858]
|
jonathon
|
|
b457c7ec-4fa6-4ab0-aed5-e03441cf70d2
|
2025/06/15 06:48:30
|
2025/06/15 06:48:30
|
2025/06/15 06:48:30
|
33 ms
|
176 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#5567, value#5568, meaning#5569, Since version#5570], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#5567, value#5568, meaning#5569, Since version#5570]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
b35330e2-2881-406e-9686-7716a3933bd4
|
2025/06/13 23:27:03
|
2025/06/13 23:27:03
|
2025/06/13 23:27:03
|
39 ms
|
320 ms
|
SHOW TABLES IN `test`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3788, tableName#3789, isTemporary#3790]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3788, tableName#3789, isTemporary#3790]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#3788, tableName#3789, isTemporary#3790], ShowTables [namespace#3788, tableName#3789, isTemporary#3790], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#3788, tableName#3789, isTemporary#3790]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#3788, tableName#3789, isTemporary#3790]
+- ShowTables [namespace#3788, tableName#3789, isTemporary#3790], V2SessionCatalog(spark_catalog), [test]
|
jonathon
|
|
b311ad42-7f00-40ab-a53b-11a9f42d88bf
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
37 ms
|
180 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4991, value#4992, meaning#4993, Since version#4994], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4991, value#4992, meaning#4993, Since version#4994]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
b2bee24c-a2fb-400a-8659-3fb5d9b8f60c
|
2025/06/14 01:46:18
|
2025/06/14 01:46:18
|
2025/06/14 01:46:18
|
45 ms
|
195 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4559, value#4560, meaning#4561, Since version#4562], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4559, value#4560, meaning#4561, Since version#4562]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
b1e0a82c-ad1c-4482-ad28-c7e18781b437
|
2025/06/13 19:06:30
|
2025/06/13 19:06:30
|
2025/06/13 19:06:30
|
97 ms
|
368 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2375, data_type#2376, comment#2377]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2375, data_type#2376, comment#2377]
== Optimized Logical Plan ==
CommandResult [col_name#2375, data_type#2376, comment#2377], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2375, data_type#2376, comment#2377]
== Physical Plan ==
CommandResult [col_name#2375, data_type#2376, comment#2377]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2375, data_type#2376, comment#2377]
|
jonathon
|
|
aebb6834-45b8-4ce9-99c7-81e49b2a4950
|
2025/06/14 06:31:58
|
2025/06/14 06:31:59
|
2025/06/14 06:31:59
|
22 ms
|
114 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
adb63f8e-9127-4633-a15c-57cbd5cdcc78
|
2025/06/14 05:46:26
|
2025/06/14 05:46:26
|
2025/06/14 05:46:26
|
32 ms
|
185 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
abca4d7a-53c4-445b-9949-636e829c05fd
|
2025/06/13 23:19:33
|
2025/06/13 23:19:33
|
2025/06/13 23:19:33
|
83 ms
|
383 ms
|
DESCRIBE TABLE `default`.`AllTypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3182, data_type#3183, comment#3184]
+- 'UnresolvedTableOrView [default, AllTypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3182, data_type#3183, comment#3184]
== Optimized Logical Plan ==
CommandResult [col_name#3182, data_type#3183, comment#3184], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3182, data_type#3183, comment#3184]
== Physical Plan ==
CommandResult [col_name#3182, data_type#3183, comment#3184]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3182, data_type#3183, comment#3184]
|
jonathon
|
|
a5329af3-41e1-4fc1-a7d7-63e39fcce790
|
2025/06/14 05:46:27
|
2025/06/14 05:46:27
|
2025/06/14 05:46:27
|
234 ms
|
393 ms
|
SELECT C_43 AS C_12, C_4 AS C_13, C_1 AS C_17, C_4331 AS C_19, C_4332 AS C_18, C_4333 AS C_16, C_6 AS C_15, C_8 AS C_14, C_9 AS C_20, C_7 AS C_23, C_5 AS C_22, C_10 AS C_21, C_11 AS C_24, C_0 AS C_25 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_43, C_64656661756c745f616972706f727473.`type` AS C_4, C_64656661756c745f616972706f727473.`name` AS C_1, C_64656661756c745f616972706f727473.`lat` AS C_2, C_64656661756c745f616972706f727473.`lon` AS C_3, C_64656661756c745f616972706f727473.`elev` AS C_0, C_64656661756c745f616972706f727473.`continent` AS C_6, C_64656661756c745f616972706f727473.`country` AS C_8, C_64656661756c745f616972706f727473.`region` AS C_9, C_64656661756c745f616972706f727473.`city` AS C_7, C_64656661756c745f616972706f727473.`iata` AS C_5, C_64656661756c745f616972706f727473.`code` AS C_10, C_64656661756c745f616972706f727473.`gps` AS C_11, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_25 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_25 DESC NULLS LAST], true
+- 'Project ['C_43 AS C_12#4934, 'C_4 AS C_13#4935, 'C_1 AS C_17#4936, 'C_4331 AS C_19#4937, 'C_4332 AS C_18#4938, 'C_4333 AS C_16#4939, 'C_6 AS C_15#4940, 'C_8 AS C_14#4941, 'C_9 AS C_20#4942, 'C_7 AS C_23#4943, 'C_5 AS C_22#4944, 'C_10 AS C_21#4945, 'C_11 AS C_24#4946, 'C_0 AS C_25#4947]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_43#4918, 'C_64656661756c745f616972706f727473.type AS C_4#4919, 'C_64656661756c745f616972706f727473.name AS C_1#4920, 'C_64656661756c745f616972706f727473.lat AS C_2#4921, 'C_64656661756c745f616972706f727473.lon AS C_3#4922, 'C_64656661756c745f616972706f727473.elev AS C_0#4923, 'C_64656661756c745f616972706f727473.continent AS C_6#4924, 'C_64656661756c745f616972706f727473.country AS C_8#4925, 'C_64656661756c745f616972706f727473.region AS C_9#4926, 'C_64656661756c745f616972706f727473.city AS C_7#4927, 'C_64656661756c745f616972706f727473.iata AS C_5#4928, 'C_64656661756c745f616972706f727473.code AS C_10#4929, 'C_64656661756c745f616972706f727473.gps AS C_11#4930, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#4931, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#4932, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#4933]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_12: string, C_13: string, C_17: string, C_19: double, C_18: double, C_16: double, C_15: string, C_14: string, C_20: string, C_23: string, C_22: string, C_21: string, C_24: string, C_25: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_25#4947 DESC NULLS LAST], true
+- Project [C_43#4918 AS C_12#4934, C_4#4919 AS C_13#4935, C_1#4920 AS C_17#4936, C_4331#4931 AS C_19#4937, C_4332#4932 AS C_18#4938, C_4333#4933 AS C_16#4939, C_6#4924 AS C_15#4940, C_8#4925 AS C_14#4941, C_9#4926 AS C_20#4942, C_7#4927 AS C_23#4943, C_5#4928 AS C_22#4944, C_10#4929 AS C_21#4945, C_11#4930 AS C_24#4946, C_0#4923 AS C_25#4947]
+- SubqueryAlias C_4954424c
+- Project [id#4948 AS C_43#4918, type#4949 AS C_4#4919, name#4950 AS C_1#4920, lat#4951 AS C_2#4921, lon#4952 AS C_3#4922, elev#4953 AS C_0#4923, continent#4954 AS C_6#4924, country#4955 AS C_8#4925, region#4956 AS C_9#4926, city#4957 AS C_7#4927, iata#4958 AS C_5#4928, code#4959 AS C_10#4929, gps#4960 AS C_11#4930, (round((lat#4951 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#4931, (round((lon#4952 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#4932, (round((elev#4953 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#4933]
+- Filter (((lon#4952 <= -104.05) AND (lon#4952 >= -111.05)) AND ((lat#4951 >= 41.0) AND (lat#4951 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#4948,type#4949,name#4950,lat#4951,lon#4952,elev#4953,continent#4954,country#4955,region#4956,city#4957,iata#4958,code#4959,gps#4960] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_25#4947 DESC NULLS LAST], true
+- Project [id#4948 AS C_12#4934, type#4949 AS C_13#4935, name#4950 AS C_17#4936, (round((lat#4951 * 1000.0), 0) / 1000.0) AS C_19#4937, (round((lon#4952 * 1000.0), 0) / 1000.0) AS C_18#4938, (round((elev#4953 * 1000.0), 0) / 1000.0) AS C_16#4939, continent#4954 AS C_15#4940, country#4955 AS C_14#4941, region#4956 AS C_20#4942, city#4957 AS C_23#4943, iata#4958 AS C_22#4944, code#4959 AS C_21#4945, gps#4960 AS C_24#4946, elev#4953 AS C_25#4947]
+- Filter ((isnotnull(lon#4952) AND isnotnull(lat#4951)) AND (((lon#4952 <= -104.05) AND (lon#4952 >= -111.05)) AND ((lat#4951 >= 41.0) AND (lat#4951 <= 45.0))))
+- Relation spark_catalog.default.airports[id#4948,type#4949,name#4950,lat#4951,lon#4952,elev#4953,continent#4954,country#4955,region#4956,city#4957,iata#4958,code#4959,gps#4960] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_25#4947 DESC NULLS LAST], output=[C_12#4934,C_13#4935,C_17#4936,C_19#4937,C_18#4938,C_16#4939,C_15#4940,C_14#4941,C_20#4942,C_23#4943,C_22#4944,C_21#4945,C_24#4946,C_25#4947])
+- *(1) Project [id#4948 AS C_12#4934, type#4949 AS C_13#4935, name#4950 AS C_17#4936, (round((lat#4951 * 1000.0), 0) / 1000.0) AS C_19#4937, (round((lon#4952 * 1000.0), 0) / 1000.0) AS C_18#4938, (round((elev#4953 * 1000.0), 0) / 1000.0) AS C_16#4939, continent#4954 AS C_15#4940, country#4955 AS C_14#4941, region#4956 AS C_20#4942, city#4957 AS C_23#4943, iata#4958 AS C_22#4944, code#4959 AS C_21#4945, gps#4960 AS C_24#4946, elev#4953 AS C_25#4947]
+- *(1) Filter (((((isnotnull(lon#4952) AND isnotnull(lat#4951)) AND (lon#4952 <= -104.05)) AND (lon#4952 >= -111.05)) AND (lat#4951 >= 41.0)) AND (lat#4951 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#4948,type#4949,name#4950,lat#4951,lon#4952,elev#4953,continent#4954,country#4955,region#4956,city#4957,iata#4958,code#4959,gps#4960] Batched: true, DataFilters: [isnotnull(lon#4952), isnotnull(lat#4951), (lon#4952 <= -104.05), (lon#4952 >= -111.05), (lat#495..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
|
jonathan
|
|
a50d9a38-f2c1-4512-8252-ddf7ec1d380a
|
2025/06/13 23:35:50
|
2025/06/13 23:35:50
|
2025/06/13 23:35:50
|
85 ms
|
158 ms
|
DESCRIBE default.alltypes
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4190, data_type#4191, comment#4192]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#4190, data_type#4191, comment#4192]
== Optimized Logical Plan ==
CommandResult [col_name#4190, data_type#4191, comment#4192], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#4190, data_type#4191, comment#4192]
== Physical Plan ==
CommandResult [col_name#4190, data_type#4191, comment#4192]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#4190, data_type#4191, comment#4192]
|
jonathon
|
|
a3f8b679-2dcb-4c49-9921-e22bc5f758a3
|
2025/06/13 07:55:31
|
2025/06/13 07:55:31
|
2025/06/13 07:55:32
|
94 ms
|
250 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2202, data_type#2203, comment#2204]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2202, data_type#2203, comment#2204]
== Optimized Logical Plan ==
CommandResult [col_name#2202, data_type#2203, comment#2204], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2202, data_type#2203, comment#2204]
== Physical Plan ==
CommandResult [col_name#2202, data_type#2203, comment#2204]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2202, data_type#2203, comment#2204]
|
jonathon
|
[51]
|
a3ecd11e-5087-4d17-a30d-f68b8e8f4d8f
|
2025/06/15 06:48:31
|
2025/06/15 06:48:31
|
2025/06/15 06:48:32
|
159 ms
|
258 ms
|
SELECT C_0 AS C_12, C_1 AS C_17, C_2 AS C_14, C_4331 AS C_25, C_4332 AS C_18, C_4333 AS C_21, C_4 AS C_22, C_6 AS C_19, C_8 AS C_13, C_9 AS C_20, C_10 AS C_24, C_43 AS C_16, C_11 AS C_15, C_7 AS C_23 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_0, C_64656661756c745f616972706f727473.`type` AS C_1, C_64656661756c745f616972706f727473.`name` AS C_2, C_64656661756c745f616972706f727473.`lat` AS C_5, C_64656661756c745f616972706f727473.`lon` AS C_3, C_64656661756c745f616972706f727473.`elev` AS C_7, C_64656661756c745f616972706f727473.`continent` AS C_4, C_64656661756c745f616972706f727473.`country` AS C_6, C_64656661756c745f616972706f727473.`region` AS C_8, C_64656661756c745f616972706f727473.`city` AS C_9, C_64656661756c745f616972706f727473.`iata` AS C_10, C_64656661756c745f616972706f727473.`code` AS C_43, C_64656661756c745f616972706f727473.`gps` AS C_11, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_23 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_23 DESC NULLS LAST], true
+- 'Project ['C_0 AS C_12#5654, 'C_1 AS C_17#5655, 'C_2 AS C_14#5656, 'C_4331 AS C_25#5657, 'C_4332 AS C_18#5658, 'C_4333 AS C_21#5659, 'C_4 AS C_22#5660, 'C_6 AS C_19#5661, 'C_8 AS C_13#5662, 'C_9 AS C_20#5663, 'C_10 AS C_24#5664, 'C_43 AS C_16#5665, 'C_11 AS C_15#5666, 'C_7 AS C_23#5667]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_0#5638, 'C_64656661756c745f616972706f727473.type AS C_1#5639, 'C_64656661756c745f616972706f727473.name AS C_2#5640, 'C_64656661756c745f616972706f727473.lat AS C_5#5641, 'C_64656661756c745f616972706f727473.lon AS C_3#5642, 'C_64656661756c745f616972706f727473.elev AS C_7#5643, 'C_64656661756c745f616972706f727473.continent AS C_4#5644, 'C_64656661756c745f616972706f727473.country AS C_6#5645, 'C_64656661756c745f616972706f727473.region AS C_8#5646, 'C_64656661756c745f616972706f727473.city AS C_9#5647, 'C_64656661756c745f616972706f727473.iata AS C_10#5648, 'C_64656661756c745f616972706f727473.code AS C_43#5649, 'C_64656661756c745f616972706f727473.gps AS C_11#5650, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#5651, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#5652, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#5653]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_12: string, C_17: string, C_14: string, C_25: double, C_18: double, C_21: double, C_22: string, C_19: string, C_13: string, C_20: string, C_24: string, C_16: string, C_15: string, C_23: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_23#5667 DESC NULLS LAST], true
+- Project [C_0#5638 AS C_12#5654, C_1#5639 AS C_17#5655, C_2#5640 AS C_14#5656, C_4331#5651 AS C_25#5657, C_4332#5652 AS C_18#5658, C_4333#5653 AS C_21#5659, C_4#5644 AS C_22#5660, C_6#5645 AS C_19#5661, C_8#5646 AS C_13#5662, C_9#5647 AS C_20#5663, C_10#5648 AS C_24#5664, C_43#5649 AS C_16#5665, C_11#5650 AS C_15#5666, C_7#5643 AS C_23#5667]
+- SubqueryAlias C_4954424c
+- Project [id#5668 AS C_0#5638, type#5669 AS C_1#5639, name#5670 AS C_2#5640, lat#5671 AS C_5#5641, lon#5672 AS C_3#5642, elev#5673 AS C_7#5643, continent#5674 AS C_4#5644, country#5675 AS C_6#5645, region#5676 AS C_8#5646, city#5677 AS C_9#5647, iata#5678 AS C_10#5648, code#5679 AS C_43#5649, gps#5680 AS C_11#5650, (round((lat#5671 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#5651, (round((lon#5672 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#5652, (round((elev#5673 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#5653]
+- Filter (((lon#5672 <= -104.05) AND (lon#5672 >= -111.05)) AND ((lat#5671 >= 41.0) AND (lat#5671 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#5668,type#5669,name#5670,lat#5671,lon#5672,elev#5673,continent#5674,country#5675,region#5676,city#5677,iata#5678,code#5679,gps#5680] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_23#5667 DESC NULLS LAST], true
+- Project [id#5668 AS C_12#5654, type#5669 AS C_17#5655, name#5670 AS C_14#5656, (round((lat#5671 * 1000.0), 0) / 1000.0) AS C_25#5657, (round((lon#5672 * 1000.0), 0) / 1000.0) AS C_18#5658, (round((elev#5673 * 1000.0), 0) / 1000.0) AS C_21#5659, continent#5674 AS C_22#5660, country#5675 AS C_19#5661, region#5676 AS C_13#5662, city#5677 AS C_20#5663, iata#5678 AS C_24#5664, code#5679 AS C_16#5665, gps#5680 AS C_15#5666, elev#5673 AS C_23#5667]
+- Filter ((isnotnull(lon#5672) AND isnotnull(lat#5671)) AND (((lon#5672 <= -104.05) AND (lon#5672 >= -111.05)) AND ((lat#5671 >= 41.0) AND (lat#5671 <= 45.0))))
+- Relation spark_catalog.default.airports[id#5668,type#5669,name#5670,lat#5671,lon#5672,elev#5673,continent#5674,country#5675,region#5676,city#5677,iata#5678,code#5679,gps#5680] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_23#5667 DESC NULLS LAST], output=[C_12#5654,C_17#5655,C_14#5656,C_25#5657,C_18#5658,C_21#5659,C_22#5660,C_19#5661,C_13#5662,C_20#5663,C_24#5664,C_16#5665,C_15#5666,C_23#5667])
+- *(1) Project [id#5668 AS C_12#5654, type#5669 AS C_17#5655, name#5670 AS C_14#5656, (round((lat#5671 * 1000.0), 0) / 1000.0) AS C_25#5657, (round((lon#5672 * 1000.0), 0) / 1000.0) AS C_18#5658, (round((elev#5673 * 1000.0), 0) / 1000.0) AS C_21#5659, continent#5674 AS C_22#5660, country#5675 AS C_19#5661, region#5676 AS C_13#5662, city#5677 AS C_20#5663, iata#5678 AS C_24#5664, code#5679 AS C_16#5665, gps#5680 AS C_15#5666, elev#5673 AS C_23#5667]
+- *(1) Filter (((((isnotnull(lon#5672) AND isnotnull(lat#5671)) AND (lon#5672 <= -104.05)) AND (lon#5672 >= -111.05)) AND (lat#5671 >= 41.0)) AND (lat#5671 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#5668,type#5669,name#5670,lat#5671,lon#5672,elev#5673,continent#5674,country#5675,region#5676,city#5677,iata#5678,code#5679,gps#5680] Batched: true, DataFilters: [isnotnull(lon#5672), isnotnull(lat#5671), (lon#5672 <= -104.05), (lon#5672 >= -111.05), (lat#567..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
|
jonathon
|
|
a388ecc4-fd27-4ce8-b100-f7efb4dc03f5
|
2025/06/13 23:38:35
|
2025/06/13 23:38:36
|
2025/06/13 23:38:36
|
39 ms
|
240 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4217, value#4218, meaning#4219, Since version#4220], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. 
The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. 
Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. 
Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4217, value#4218, meaning#4219, Since version#4220]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
a2e231af-a393-4655-8251-8a70076828f1
|
2025/06/13 23:38:36
|
2025/06/13 23:38:36
|
2025/06/13 23:38:36
|
197 ms
|
303 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathan
|
|
a25e8327-fe20-4539-9cbe-e24a2d8c1f1b
|
2025/06/13 23:21:12
|
2025/06/13 23:21:12
|
2025/06/13 23:21:13
|
11 ms
|
319 ms
|
SHOW TABLES IN `global_temp`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3450, tableName#3451, isTemporary#3452]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3450, tableName#3451, isTemporary#3452]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#3450, tableName#3451, isTemporary#3452], ShowTables [namespace#3450, tableName#3451, isTemporary#3452], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#3450, tableName#3451, isTemporary#3452]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#3450, tableName#3451, isTemporary#3452]
+- ShowTables [namespace#3450, tableName#3451, isTemporary#3452], V2SessionCatalog(spark_catalog), [global_temp]
|
jonathon
|
|
a1b8f8cb-013e-4133-a96e-d018a40ad262
|
2025/06/13 07:16:51
|
2025/06/13 07:16:52
|
2025/06/13 07:16:52
|
95 ms
|
192 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2081, data_type#2082, comment#2083]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2081, data_type#2082, comment#2083]
== Optimized Logical Plan ==
CommandResult [col_name#2081, data_type#2082, comment#2083], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2081, data_type#2082, comment#2083]
== Physical Plan ==
CommandResult [col_name#2081, data_type#2082, comment#2083]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2081, data_type#2082, comment#2083]
|
jonathan
|
|
a052b843-5569-402b-89ab-e5d7cdbba6db
|
2025/06/13 23:21:32
|
2025/06/13 23:21:32
|
2025/06/13 23:21:32
|
94 ms
|
657 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3460, data_type#3461, comment#3462]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3460, data_type#3461, comment#3462]
== Optimized Logical Plan ==
CommandResult [col_name#3460, data_type#3461, comment#3462], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3460, data_type#3461, comment#3462]
== Physical Plan ==
CommandResult [col_name#3460, data_type#3461, comment#3462]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3460, data_type#3461, comment#3462]
|
jonathon
|
|
9fc768ea-28aa-40a0-a236-dbae9669cc6c
|
2025/06/13 23:20:56
|
2025/06/13 23:20:56
|
2025/06/13 23:20:56
|
22 ms
|
117 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
9ec2aebc-0785-4c8a-a0fb-18983e537287
|
2025/06/14 05:46:27
|
2025/06/14 05:46:27
|
2025/06/14 05:46:27
|
84 ms
|
238 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4895, data_type#4896, comment#4897]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4895, data_type#4896, comment#4897]
== Optimized Logical Plan ==
CommandResult [col_name#4895, data_type#4896, comment#4897], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4895, data_type#4896, comment#4897]
== Physical Plan ==
CommandResult [col_name#4895, data_type#4896, comment#4897]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4895, data_type#4896, comment#4897]
|
jonathan
|
|
9dc4101e-0cb8-45a6-b9d0-d89d90d2788b
|
2025/06/13 23:34:50
|
2025/06/13 23:34:50
|
2025/06/13 23:34:50
|
26 ms
|
357 ms
|
SHOW TABLES IN `default`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#4078, tableName#4079, isTemporary#4080]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4078, tableName#4079, isTemporary#4080]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#4078, tableName#4079, isTemporary#4080], ShowTables [namespace#4078, tableName#4079, isTemporary#4080], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#4078, tableName#4079, isTemporary#4080]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#4078, tableName#4079, isTemporary#4080]
+- ShowTables [namespace#4078, tableName#4079, isTemporary#4080], V2SessionCatalog(spark_catalog), [default]
|
jonathon
|
|
9bb688da-63cc-4516-91e2-55d08768cd7b
|
2025/06/13 07:55:32
|
2025/06/13 07:55:32
|
2025/06/13 07:55:32
|
26 ms
|
182 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
99bce643-8f66-49d0-b4e2-aed90b373daf
|
2025/06/13 19:06:16
|
2025/06/13 19:06:16
|
2025/06/13 19:06:16
|
77 ms
|
369 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2348, data_type#2349, comment#2350]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2348, data_type#2349, comment#2350]
== Optimized Logical Plan ==
CommandResult [col_name#2348, data_type#2349, comment#2350], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2348, data_type#2349, comment#2350]
== Physical Plan ==
CommandResult [col_name#2348, data_type#2349, comment#2350]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2348, data_type#2349, comment#2350]
|
jonathan
|
|
98a0c092-b2d5-4d49-be3a-4740a60763fb
|
2025/06/13 22:54:10
|
2025/06/13 22:54:10
|
2025/06/13 22:54:10
|
24 ms
|
350 ms
|
Listing databases 'catalog : , schemaPattern : null'
|
CLOSED
|
|
jonathon
|
|
98618df6-635a-4bab-91c9-a30788598b2a
|
2025/06/13 06:57:07
|
2025/06/13 06:57:07
|
2025/06/13 06:57:07
|
96 ms
|
204 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#1937, data_type#1938, comment#1939]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#1937, data_type#1938, comment#1939]
== Optimized Logical Plan ==
CommandResult [col_name#1937, data_type#1938, comment#1939], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#1937, data_type#1938, comment#1939]
== Physical Plan ==
CommandResult [col_name#1937, data_type#1938, comment#1939]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#1937, data_type#1938, comment#1939]
|
jonathan
|
|
9589ebd7-ed14-4a9d-a173-a591f1db0105
|
2025/06/13 23:27:00
|
2025/06/13 23:27:00
|
2025/06/13 23:27:00
|
62 ms
|
333 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3674, data_type#3675, comment#3676]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3674, data_type#3675, comment#3676]
== Optimized Logical Plan ==
CommandResult [col_name#3674, data_type#3675, comment#3676], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3674, data_type#3675, comment#3676]
== Physical Plan ==
CommandResult [col_name#3674, data_type#3675, comment#3676]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3674, data_type#3675, comment#3676]
|
jonathan
|
|
94b17ac6-9bb4-4db3-8a74-8665ab43bdde
|
2025/06/13 23:34:48
|
2025/06/13 23:34:48
|
2025/06/13 23:34:48
|
55 ms
|
325 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3979, data_type#3980, comment#3981]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3979, data_type#3980, comment#3981]
== Optimized Logical Plan ==
CommandResult [col_name#3979, data_type#3980, comment#3981], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3979, data_type#3980, comment#3981]
== Physical Plan ==
CommandResult [col_name#3979, data_type#3980, comment#3981]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3979, data_type#3980, comment#3981]
|
jonathan
|
|
94009925-9452-4054-af2b-357089a04203
|
2025/06/13 23:19:33
|
2025/06/13 23:19:33
|
2025/06/13 23:19:33
|
64 ms
|
333 ms
|
DESCRIBE TABLE `default`.`AllTypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3209, data_type#3210, comment#3211]
+- 'UnresolvedTableOrView [default, AllTypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3209, data_type#3210, comment#3211]
== Optimized Logical Plan ==
CommandResult [col_name#3209, data_type#3210, comment#3211], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3209, data_type#3210, comment#3211]
== Physical Plan ==
CommandResult [col_name#3209, data_type#3210, comment#3211]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3209, data_type#3210, comment#3211]
|
jonathon
|
|
91365327-0f45-4e80-92a2-6e8dc7131317
|
2025/06/15 06:45:38
|
2025/06/15 06:45:39
|
2025/06/15 06:45:39
|
89 ms
|
243 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#5471, data_type#5472, comment#5473]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5471, data_type#5472, comment#5473]
== Optimized Logical Plan ==
CommandResult [col_name#5471, data_type#5472, comment#5473], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5471, data_type#5472, comment#5473]
== Physical Plan ==
CommandResult [col_name#5471, data_type#5472, comment#5473]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#5471, data_type#5472, comment#5473]
|
jonathan
|
[41]
|
8dfd850b-e621-4e50-a84a-47507293ee16
|
2025/06/13 23:34:48
|
2025/06/13 23:34:48
|
2025/06/13 23:34:49
|
110 ms
|
507 ms
|
select `STRING`,
`DOUBLE`,
`INTEGER`,
`BIGINT`,
`FLOAT`,
`DECIMAL`,
`NUMBER`,
`BOOLEAN`,
`DATE`,
`TIMESTAMP`,
`DATETIME`,
`BINARY`,
`ARRAY`,
`MAP`,
`STRUCT`,
`VARCHAR`,
`CHAR`
from `default`.`alltypes`
limit 1000
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 1000
+- 'LocalLimit 1000
+- 'Project ['STRING, 'DOUBLE, 'INTEGER, 'BIGINT, 'FLOAT, 'DECIMAL, 'NUMBER, 'BOOLEAN, 'DATE, 'TIMESTAMP, 'DATETIME, 'BINARY, 'ARRAY, 'MAP, 'STRUCT, 'VARCHAR, 'CHAR]
+- 'UnresolvedRelation [default, alltypes], [], false
== Analyzed Logical Plan ==
STRING: string, DOUBLE: double, INTEGER: int, BIGINT: bigint, FLOAT: float, DECIMAL: decimal(10,2), NUMBER: decimal(10,2), BOOLEAN: boolean, DATE: date, TIMESTAMP: timestamp, DATETIME: timestamp, BINARY: binary, ARRAY: array<int>, MAP: map<string,string>, STRUCT: struct<field1:string,field2:int>, VARCHAR: string, CHAR: string
GlobalLimit 1000
+- LocalLimit 1000
+- Project [STRING#4006, DOUBLE#4007, INTEGER#4008, BIGINT#4009L, FLOAT#4010, DECIMAL#4011, NUMBER#4012, BOOLEAN#4013, DATE#4014, TIMESTAMP#4015, DATETIME#4016, BINARY#4017, ARRAY#4018, MAP#4019, STRUCT#4020, VARCHAR#4021, CHAR#4022]
+- SubqueryAlias spark_catalog.default.alltypes
+- HiveTableRelation [`spark_catalog`.`default`.`alltypes`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [STRING#4006, DOUBLE#4007, INTEGER#4008, BIGINT#4009L, FLOAT#4010, DECIMAL#4011, NUMBER#4012, BOO..., Partition Cols: []]
== Optimized Logical Plan ==
GlobalLimit 1000
+- LocalLimit 1000
+- HiveTableRelation [`spark_catalog`.`default`.`alltypes`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [STRING#4006, DOUBLE#4007, INTEGER#4008, BIGINT#4009L, FLOAT#4010, DECIMAL#4011, NUMBER#4012, BOO..., Partition Cols: []]
== Physical Plan ==
CollectLimit 1000
+- Scan hive spark_catalog.default.alltypes [STRING#4006, DOUBLE#4007, INTEGER#4008, BIGINT#4009L, FLOAT#4010, DECIMAL#4011, NUMBER#4012, BOOLEAN#4013, DATE#4014, TIMESTAMP#4015, DATETIME#4016, BINARY#4017, ARRAY#4018, MAP#4019, STRUCT#4020, VARCHAR#4021, CHAR#4022], HiveTableRelation [`spark_catalog`.`default`.`alltypes`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [STRING#4006, DOUBLE#4007, INTEGER#4008, BIGINT#4009L, FLOAT#4010, DECIMAL#4011, NUMBER#4012, BOO..., Partition Cols: []]
|
jonathon
|
|
8d0bb0e4-de5d-4cf3-a066-b2d94ca5d1e0
|
2025/06/14 06:31:58
|
2025/06/14 06:31:58
|
2025/06/14 06:31:58
|
217 ms
|
310 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathon
|
|
8c1d5a44-065a-46f9-89e3-c8f7e342178f
|
2025/06/13 07:55:32
|
2025/06/13 07:55:32
|
2025/06/13 07:55:32
|
96 ms
|
252 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2225, data_type#2226, comment#2227]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2225, data_type#2226, comment#2227]
== Optimized Logical Plan ==
CommandResult [col_name#2225, data_type#2226, comment#2227], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2225, data_type#2226, comment#2227]
== Physical Plan ==
CommandResult [col_name#2225, data_type#2226, comment#2227]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#2225, data_type#2226, comment#2227]
|
jonathon
|
|
8bbb35ce-97f0-4a70-b383-3d91b7af96a3
|
2025/06/14 01:46:19
|
2025/06/14 01:46:19
|
2025/06/14 01:46:19
|
94 ms
|
191 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4584, data_type#4585, comment#4586]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4584, data_type#4585, comment#4586]
== Optimized Logical Plan ==
CommandResult [col_name#4584, data_type#4585, comment#4586], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4584, data_type#4585, comment#4586]
== Physical Plan ==
CommandResult [col_name#4584, data_type#4585, comment#4586]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4584, data_type#4585, comment#4586]
|
jonathan
|
|
89b27579-c655-44ef-bf92-6c4bd6c1d8d5
|
2025/06/13 19:06:30
|
2025/06/13 19:06:30
|
2025/06/13 19:06:30
|
86 ms
|
355 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#2402, data_type#2403, comment#2404]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2402, data_type#2403, comment#2404]
== Optimized Logical Plan ==
CommandResult [col_name#2402, data_type#2403, comment#2404], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2402, data_type#2403, comment#2404]
== Physical Plan ==
CommandResult [col_name#2402, data_type#2403, comment#2404]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#2402, data_type#2403, comment#2404]
|
jonathan
|
[39]
|
8999cf5e-c88b-4980-aa38-13e2c37d95c1
|
2025/06/13 23:21:33
|
2025/06/13 23:21:33
|
2025/06/13 23:21:33
|
130 ms
|
539 ms
|
select `STRING`,
`DOUBLE`,
`INTEGER`,
`BIGINT`,
`FLOAT`,
`DECIMAL`,
`NUMBER`,
`BOOLEAN`,
`DATE`,
`TIMESTAMP`,
`DATETIME`,
`BINARY`,
`ARRAY`,
`MAP`,
`STRUCT`,
`VARCHAR`,
`CHAR`
from `default`.`alltypes`
limit 1000
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 1000
+- 'LocalLimit 1000
+- 'Project ['STRING, 'DOUBLE, 'INTEGER, 'BIGINT, 'FLOAT, 'DECIMAL, 'NUMBER, 'BOOLEAN, 'DATE, 'TIMESTAMP, 'DATETIME, 'BINARY, 'ARRAY, 'MAP, 'STRUCT, 'VARCHAR, 'CHAR]
+- 'UnresolvedRelation [default, alltypes], [], false
== Analyzed Logical Plan ==
STRING: string, DOUBLE: double, INTEGER: int, BIGINT: bigint, FLOAT: float, DECIMAL: decimal(10,2), NUMBER: decimal(10,2), BOOLEAN: boolean, DATE: date, TIMESTAMP: timestamp, DATETIME: timestamp, BINARY: binary, ARRAY: array<int>, MAP: map<string,string>, STRUCT: struct<field1:string,field2:int>, VARCHAR: string, CHAR: string
GlobalLimit 1000
+- LocalLimit 1000
+- Project [STRING#3514, DOUBLE#3515, INTEGER#3516, BIGINT#3517L, FLOAT#3518, DECIMAL#3519, NUMBER#3520, BOOLEAN#3521, DATE#3522, TIMESTAMP#3523, DATETIME#3524, BINARY#3525, ARRAY#3526, MAP#3527, STRUCT#3528, VARCHAR#3529, CHAR#3530]
+- SubqueryAlias spark_catalog.default.alltypes
+- HiveTableRelation [`spark_catalog`.`default`.`alltypes`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [STRING#3514, DOUBLE#3515, INTEGER#3516, BIGINT#3517L, FLOAT#3518, DECIMAL#3519, NUMBER#3520, BOO..., Partition Cols: []]
== Optimized Logical Plan ==
GlobalLimit 1000
+- LocalLimit 1000
+- HiveTableRelation [`spark_catalog`.`default`.`alltypes`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [STRING#3514, DOUBLE#3515, INTEGER#3516, BIGINT#3517L, FLOAT#3518, DECIMAL#3519, NUMBER#3520, BOO..., Partition Cols: []]
== Physical Plan ==
CollectLimit 1000
+- Scan hive spark_catalog.default.alltypes [STRING#3514, DOUBLE#3515, INTEGER#3516, BIGINT#3517L, FLOAT#3518, DECIMAL#3519, NUMBER#3520, BOOLEAN#3521, DATE#3522, TIMESTAMP#3523, DATETIME#3524, BINARY#3525, ARRAY#3526, MAP#3527, STRUCT#3528, VARCHAR#3529, CHAR#3530], HiveTableRelation [`spark_catalog`.`default`.`alltypes`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [STRING#3514, DOUBLE#3515, INTEGER#3516, BIGINT#3517L, FLOAT#3518, DECIMAL#3519, NUMBER#3520, BOO..., Partition Cols: []]
|
jonathon
|
[35]
|
8941a51a-eff4-4b55-b556-60fcfdad821f
|
2025/06/13 22:18:18
|
2025/06/13 22:18:19
|
2025/06/13 22:18:19
|
267 ms
|
366 ms
|
SELECT C_3 AS C_18, C_0 AS C_12, C_4 AS C_19, C_4331 AS C_14, C_4332 AS C_17, C_4333 AS C_22, C_7 AS C_23, C_8 AS C_15, C_9 AS C_20, C_10 AS C_16, C_43 AS C_25, C_2 AS C_13, C_11 AS C_24, C_1 AS C_21 FROM (SELECT C_64656661756c745f616972706f727473.`id` AS C_3, C_64656661756c745f616972706f727473.`type` AS C_0, C_64656661756c745f616972706f727473.`name` AS C_4, C_64656661756c745f616972706f727473.`lat` AS C_5, C_64656661756c745f616972706f727473.`lon` AS C_6, C_64656661756c745f616972706f727473.`elev` AS C_1, C_64656661756c745f616972706f727473.`continent` AS C_7, C_64656661756c745f616972706f727473.`country` AS C_8, C_64656661756c745f616972706f727473.`region` AS C_9, C_64656661756c745f616972706f727473.`city` AS C_10, C_64656661756c745f616972706f727473.`iata` AS C_43, C_64656661756c745f616972706f727473.`code` AS C_2, C_64656661756c745f616972706f727473.`gps` AS C_11, (round((C_64656661756c745f616972706f727473.`lat` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4331, (round((C_64656661756c745f616972706f727473.`lon` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4332, (round((C_64656661756c745f616972706f727473.`elev` * power(1.000000000000000E+001, 3)), 0) / power(1.000000000000000E+001, 3)) AS C_4333 FROM `default`.`airports` C_64656661756c745f616972706f727473 WHERE ((C_64656661756c745f616972706f727473.`lon` <= (- 1.040500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lon` >= (- 1.110500000000000E+002)) AND (C_64656661756c745f616972706f727473.`lat` >= 4.100000000000000E+001) AND (C_64656661756c745f616972706f727473.`lat` <= 4.500000000000000E+001)) ) C_4954424c ORDER BY C_21 DESC LIMIT 5
|
CLOSED
|
== Parsed Logical Plan ==
'GlobalLimit 5
+- 'LocalLimit 5
+- 'Sort ['C_21 DESC NULLS LAST], true
+- 'Project ['C_3 AS C_18#2516, 'C_0 AS C_12#2517, 'C_4 AS C_19#2518, 'C_4331 AS C_14#2519, 'C_4332 AS C_17#2520, 'C_4333 AS C_22#2521, 'C_7 AS C_23#2522, 'C_8 AS C_15#2523, 'C_9 AS C_20#2524, 'C_10 AS C_16#2525, 'C_43 AS C_25#2526, 'C_2 AS C_13#2527, 'C_11 AS C_24#2528, 'C_1 AS C_21#2529]
+- 'SubqueryAlias C_4954424c
+- 'Project ['C_64656661756c745f616972706f727473.id AS C_3#2500, 'C_64656661756c745f616972706f727473.type AS C_0#2501, 'C_64656661756c745f616972706f727473.name AS C_4#2502, 'C_64656661756c745f616972706f727473.lat AS C_5#2503, 'C_64656661756c745f616972706f727473.lon AS C_6#2504, 'C_64656661756c745f616972706f727473.elev AS C_1#2505, 'C_64656661756c745f616972706f727473.continent AS C_7#2506, 'C_64656661756c745f616972706f727473.country AS C_8#2507, 'C_64656661756c745f616972706f727473.region AS C_9#2508, 'C_64656661756c745f616972706f727473.city AS C_10#2509, 'C_64656661756c745f616972706f727473.iata AS C_43#2510, 'C_64656661756c745f616972706f727473.code AS C_2#2511, 'C_64656661756c745f616972706f727473.gps AS C_11#2512, ('round(('C_64656661756c745f616972706f727473.lat * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4331#2513, ('round(('C_64656661756c745f616972706f727473.lon * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4332#2514, ('round(('C_64656661756c745f616972706f727473.elev * 'power(10.0, 3)), 0) / 'power(10.0, 3)) AS C_4333#2515]
+- 'Filter ((('C_64656661756c745f616972706f727473.lon <= -104.05) AND ('C_64656661756c745f616972706f727473.lon >= -111.05)) AND (('C_64656661756c745f616972706f727473.lat >= 41.0) AND ('C_64656661756c745f616972706f727473.lat <= 45.0)))
+- 'SubqueryAlias C_64656661756c745f616972706f727473
+- 'UnresolvedRelation [default, airports], [], false
== Analyzed Logical Plan ==
C_18: string, C_12: string, C_19: string, C_14: double, C_17: double, C_22: double, C_23: string, C_15: string, C_20: string, C_16: string, C_25: string, C_13: string, C_24: string, C_21: double
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_21#2529 DESC NULLS LAST], true
+- Project [C_3#2500 AS C_18#2516, C_0#2501 AS C_12#2517, C_4#2502 AS C_19#2518, C_4331#2513 AS C_14#2519, C_4332#2514 AS C_17#2520, C_4333#2515 AS C_22#2521, C_7#2506 AS C_23#2522, C_8#2507 AS C_15#2523, C_9#2508 AS C_20#2524, C_10#2509 AS C_16#2525, C_43#2510 AS C_25#2526, C_2#2511 AS C_13#2527, C_11#2512 AS C_24#2528, C_1#2505 AS C_21#2529]
+- SubqueryAlias C_4954424c
+- Project [id#2530 AS C_3#2500, type#2531 AS C_0#2501, name#2532 AS C_4#2502, lat#2533 AS C_5#2503, lon#2534 AS C_6#2504, elev#2535 AS C_1#2505, continent#2536 AS C_7#2506, country#2537 AS C_8#2507, region#2538 AS C_9#2508, city#2539 AS C_10#2509, iata#2540 AS C_43#2510, code#2541 AS C_2#2511, gps#2542 AS C_11#2512, (round((lat#2533 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4331#2513, (round((lon#2534 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4332#2514, (round((elev#2535 * POWER(10.0, cast(3 as double))), 0) / POWER(10.0, cast(3 as double))) AS C_4333#2515]
+- Filter (((lon#2534 <= -104.05) AND (lon#2534 >= -111.05)) AND ((lat#2533 >= 41.0) AND (lat#2533 <= 45.0)))
+- SubqueryAlias C_64656661756c745f616972706f727473
+- SubqueryAlias spark_catalog.default.airports
+- Relation spark_catalog.default.airports[id#2530,type#2531,name#2532,lat#2533,lon#2534,elev#2535,continent#2536,country#2537,region#2538,city#2539,iata#2540,code#2541,gps#2542] parquet
== Optimized Logical Plan ==
GlobalLimit 5
+- LocalLimit 5
+- Sort [C_21#2529 DESC NULLS LAST], true
+- Project [id#2530 AS C_18#2516, type#2531 AS C_12#2517, name#2532 AS C_19#2518, (round((lat#2533 * 1000.0), 0) / 1000.0) AS C_14#2519, (round((lon#2534 * 1000.0), 0) / 1000.0) AS C_17#2520, (round((elev#2535 * 1000.0), 0) / 1000.0) AS C_22#2521, continent#2536 AS C_23#2522, country#2537 AS C_15#2523, region#2538 AS C_20#2524, city#2539 AS C_16#2525, iata#2540 AS C_25#2526, code#2541 AS C_13#2527, gps#2542 AS C_24#2528, elev#2535 AS C_21#2529]
+- Filter ((isnotnull(lon#2534) AND isnotnull(lat#2533)) AND (((lon#2534 <= -104.05) AND (lon#2534 >= -111.05)) AND ((lat#2533 >= 41.0) AND (lat#2533 <= 45.0))))
+- Relation spark_catalog.default.airports[id#2530,type#2531,name#2532,lat#2533,lon#2534,elev#2535,continent#2536,country#2537,region#2538,city#2539,iata#2540,code#2541,gps#2542] parquet
== Physical Plan ==
TakeOrderedAndProject(limit=5, orderBy=[C_21#2529 DESC NULLS LAST], output=[C_18#2516,C_12#2517,C_19#2518,C_14#2519,C_17#2520,C_22#2521,C_23#2522,C_15#2523,C_20#2524,C_16#2525,C_25#2526,C_13#2527,C_24#2528,C_21#2529])
+- *(1) Project [id#2530 AS C_18#2516, type#2531 AS C_12#2517, name#2532 AS C_19#2518, (round((lat#2533 * 1000.0), 0) / 1000.0) AS C_14#2519, (round((lon#2534 * 1000.0), 0) / 1000.0) AS C_17#2520, (round((elev#2535 * 1000.0), 0) / 1000.0) AS C_22#2521, continent#2536 AS C_23#2522, country#2537 AS C_15#2523, region#2538 AS C_20#2524, city#2539 AS C_16#2525, iata#2540 AS C_25#2526, code#2541 AS C_13#2527, gps#2542 AS C_24#2528, elev#2535 AS C_21#2529]
+- *(1) Filter (((((isnotnull(lon#2534) AND isnotnull(lat#2533)) AND (lon#2534 <= -104.05)) AND (lon#2534 >= -111.05)) AND (lat#2533 >= 41.0)) AND (lat#2533 <= 45.0))
+- *(1) ColumnarToRow
+- FileScan parquet spark_catalog.default.airports[id#2530,type#2531,name#2532,lat#2533,lon#2534,elev#2535,continent#2536,country#2537,region#2538,city#2539,iata#2540,code#2541,gps#2542] Batched: true, DataFilters: [isnotnull(lon#2534), isnotnull(lat#2533), (lon#2534 <= -104.05), (lon#2534 >= -111.05), (lat#253..., Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/acdcadmin/spark-warehouse/airports], PartitionFilters: [], PushedFilters: [IsNotNull(lon), IsNotNull(lat), LessThanOrEqual(lon,-104.05), GreaterThanOrEqual(lon,-111.05), G..., ReadSchema: struct<id:string,type:string,name:string,lat:double,lon:double,elev:double,continent:string,count...
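The machine-generated aliases in the query above are not opaque: the long ones are hex-encoded source identifiers (e.g. the table alias encodes `default_airports`, and the derived-table alias `C_4954424c` encodes `ITBL`). A minimal sketch of the decoding, assuming the issuing BI tool simply hex-encodes ASCII names after the `C_` prefix; `decode_alias` is an illustrative helper, not part of any tool's API. Short aliases like `C_3` or `C_18` are plain counters and do not decode this way:

```python
# Sketch: strip the 'C_' prefix and decode the hex payload as ASCII.
# Only the long generated aliases carry an encoded identifier.
def decode_alias(alias: str) -> str:
    return bytes.fromhex(alias.removeprefix("C_")).decode("ascii")

print(decode_alias("C_64656661756c745f616972706f727473"))  # default_airports
print(decode_alias("C_4954424c"))                          # ITBL
```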
|
jonathon
|
|
8475f456-16b1-4470-9ca7-b4841d9b2d3e
|
2025/06/13 23:38:36
|
2025/06/13 23:38:36
|
2025/06/13 23:38:36
|
80 ms
|
177 ms
|
DESCRIBE default.airports
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#4242, data_type#4243, comment#4244]
+- 'UnresolvedTableOrView [default, airports], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4242, data_type#4243, comment#4244]
== Optimized Logical Plan ==
CommandResult [col_name#4242, data_type#4243, comment#4244], Execute DescribeTableCommand, [[id,string,null], [type,string,null], [name,string,null], [lat,double,null], [lon,double,null], [elev,double,null], [continent,string,null], [country,string,null], [region,string,null], [city,string,null], [iata,string,null], [code,string,null], [gps,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4242, data_type#4243, comment#4244]
== Physical Plan ==
CommandResult [col_name#4242, data_type#4243, comment#4244]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`airports`, false, [col_name#4242, data_type#4243, comment#4244]
|
jonathan
|
|
844a111b-04dd-49a5-ba46-598dd022910a
|
2025/06/13 23:27:01
|
2025/06/13 23:27:01
|
2025/06/13 23:27:02
|
15 ms
|
713 ms
|
Listing catalogs
|
CLOSED
|
|
jonathon
|
|
8418ef72-3034-46eb-9919-57a3a94cb1a3
|
2025/06/13 07:16:51
|
2025/06/13 07:16:51
|
2025/06/13 07:16:51
|
251 ms
|
346 ms
|
Listing tables 'catalog : null, schemaPattern : %, tableTypes : null, tableName : %'
|
CLOSED
|
|
jonathan
|
|
82ece942-c4e6-406e-9366-aed2ae67c729
|
2025/06/13 22:39:04
|
2025/06/13 22:39:04
|
2025/06/13 22:39:05
|
12 ms
|
770 ms
|
Listing catalogs
|
CLOSED
|
|