User | JobID | GroupID | Start Time | Finish Time | Close Time | Execution Time | Duration | Statement | State | Detail
jonathan | | 44261df9-dd24-46c0-8629-c0e5c44b8ebf | 2025/06/13 22:54:09 | 2025/06/13 22:54:09 | 2025/06/13 22:54:10 | 1 ms | 753 ms | Listing catalogs | CLOSED |
jonathan | | 79d4a8cf-f37e-4a5d-a268-9892c179688f | 2025/06/13 23:34:51 | 2025/06/13 23:34:51 | 2025/06/13 23:34:52 | 10 ms | 319 ms | SHOW TABLES IN `global_temp` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#4128, tableName#4129, isTemporary#4130]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4128, tableName#4129, isTemporary#4130]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#4128, tableName#4129, isTemporary#4130], ShowTables [namespace#4128, tableName#4129, isTemporary#4130], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#4128, tableName#4129, isTemporary#4130]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#4128, tableName#4129, isTemporary#4130]
+- ShowTables [namespace#4128, tableName#4129, isTemporary#4130], V2SessionCatalog(spark_catalog), [global_temp]
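
The plan dump above shows the four stages Spark reports for every statement: parsed (a leading apostrophe, as in 'ShowTables, marks a still-unresolved node), analyzed, optimized, and physical. A minimal sketch of reproducing the same four sections outside the Thrift server, assuming nothing more than a local SparkSession:

    # Sketch: print the same four plan stages for a SHOW TABLES statement.
    # Assumes a plain local SparkSession; nothing here is tied to the server above.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").getOrCreate()

    # explain(extended=True) prints the Parsed, Analyzed, Optimized and
    # Physical plans, the same sections shown in this table's Detail cells.
    spark.sql("SHOW TABLES IN global_temp").explain(extended=True)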

jonathan | | 0bf6ef0b-29be-47bc-ac42-32a3e198fd40 | 2025/06/13 22:51:51 | 2025/06/13 22:51:51 | 2025/06/13 22:51:51 | 11 ms | 777 ms | Listing catalogs | CLOSED |
jonathan | | 206183d3-e6ac-420f-8a9e-1eb86dc07025 | 2025/06/13 23:34:49 | 2025/06/13 23:34:49 | 2025/06/13 23:34:49 | 11 ms | 663 ms | Listing catalogs | CLOSED |
jonathan | | 420aad98-a026-4b03-91ed-a2e3e50b509a | 2025/06/13 23:27:03 | 2025/06/13 23:27:03 | 2025/06/13 23:27:04 | 11 ms | 346 ms | SHOW TABLES IN `global_temp` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3798, tableName#3799, isTemporary#3800]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3798, tableName#3799, isTemporary#3800]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#3798, tableName#3799, isTemporary#3800], ShowTables [namespace#3798, tableName#3799, isTemporary#3800], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#3798, tableName#3799, isTemporary#3800]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#3798, tableName#3799, isTemporary#3800]
+- ShowTables [namespace#3798, tableName#3799, isTemporary#3800], V2SessionCatalog(spark_catalog), [global_temp]

jonathan | | a25e8327-fe20-4539-9cbe-e24a2d8c1f1b | 2025/06/13 23:21:12 | 2025/06/13 23:21:12 | 2025/06/13 23:21:13 | 11 ms | 319 ms | SHOW TABLES IN `global_temp` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3450, tableName#3451, isTemporary#3452]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3450, tableName#3451, isTemporary#3452]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#3450, tableName#3451, isTemporary#3452], ShowTables [namespace#3450, tableName#3451, isTemporary#3452], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#3450, tableName#3451, isTemporary#3452]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#3450, tableName#3451, isTemporary#3452]
+- ShowTables [namespace#3450, tableName#3451, isTemporary#3452], V2SessionCatalog(spark_catalog), [global_temp]

jonathan | | f62ad77d-f169-48df-bce1-bdbabab8b106 | 2025/06/13 22:51:53 | 2025/06/13 22:51:53 | 2025/06/13 22:51:54 | 11 ms | 321 ms | SHOW TABLES IN `global_temp` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3011, tableName#3012, isTemporary#3013]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3011, tableName#3012, isTemporary#3013]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#3011, tableName#3012, isTemporary#3013], ShowTables [namespace#3011, tableName#3012, isTemporary#3013], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#3011, tableName#3012, isTemporary#3013]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#3011, tableName#3012, isTemporary#3013]
+- ShowTables [namespace#3011, tableName#3012, isTemporary#3013], V2SessionCatalog(spark_catalog), [global_temp]

jonathan | | 7e70771a-b47a-45c0-be4f-ef4ae70a2a9d | 2025/06/13 22:39:07 | 2025/06/13 22:39:07 | 2025/06/13 22:39:08 | 12 ms | 322 ms | SHOW TABLES IN `global_temp` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#2787, tableName#2788, isTemporary#2789]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2787, tableName#2788, isTemporary#2789]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#2787, tableName#2788, isTemporary#2789], ShowTables [namespace#2787, tableName#2788, isTemporary#2789], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#2787, tableName#2788, isTemporary#2789]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#2787, tableName#2788, isTemporary#2789]
+- ShowTables [namespace#2787, tableName#2788, isTemporary#2789], V2SessionCatalog(spark_catalog), [global_temp]

jonathan | | 82ece942-c4e6-406e-9366-aed2ae67c729 | 2025/06/13 22:39:04 | 2025/06/13 22:39:04 | 2025/06/13 22:39:05 | 12 ms | 770 ms | Listing catalogs | CLOSED |
jonathan | | d00ae85e-1e66-4e4b-81de-74575d789d5f | 2025/06/13 22:54:12 | 2025/06/13 22:54:12 | 2025/06/13 22:54:13 | 12 ms | 335 ms | SHOW TABLES IN `global_temp` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3091, tableName#3092, isTemporary#3093]
+- 'UnresolvedNamespace [global_temp]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3091, tableName#3092, isTemporary#3093]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Optimized Logical Plan ==
CommandResult [namespace#3091, tableName#3092, isTemporary#3093], ShowTables [namespace#3091, tableName#3092, isTemporary#3093], V2SessionCatalog(spark_catalog), [global_temp]
+- ShowTables [namespace#3091, tableName#3092, isTemporary#3093]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [global_temp]
== Physical Plan ==
CommandResult <empty>, [namespace#3091, tableName#3092, isTemporary#3093]
+- ShowTables [namespace#3091, tableName#3092, isTemporary#3093], V2SessionCatalog(spark_catalog), [global_temp]
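
global_temp is the database Spark reserves for global temporary views (the spark.sql.globalTempDatabase setting), and the CommandResult <empty> in these plans suggests no temporary views were registered when the statements ran. A small sketch, reusing the `spark` session from the earlier sketch and an illustrative view name, of registering one so the same statement returns a row:

    # Sketch: register a global temporary view so that
    # SHOW TABLES IN global_temp comes back non-empty.
    # The view name and data are illustrative.
    spark.range(5).createOrReplaceGlobalTempView("demo_view")
    spark.sql("SHOW TABLES IN global_temp").show()  # one row: global_temp / demo_view / true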

jonathan | | d05b3f61-2402-4b48-b9c2-c45932a10b4f | 2025/06/13 23:21:09 | 2025/06/13 23:21:09 | 2025/06/13 23:21:10 | 13 ms | 934 ms | Listing catalogs | CLOSED |
jonathan | | 844a111b-04dd-49a5-ba46-598dd022910a | 2025/06/13 23:27:01 | 2025/06/13 23:27:01 | 2025/06/13 23:27:02 | 15 ms | 713 ms | Listing catalogs | CLOSED |
jonathan | | e779eaf6-19a6-4e51-87a1-9cdc92370694 | 2025/06/13 23:34:51 | 2025/06/13 23:34:51 | 2025/06/13 23:34:51 | 18 ms | 336 ms | SHOW TABLES IN `test` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#4118, tableName#4119, isTemporary#4120]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4118, tableName#4119, isTemporary#4120]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#4118, tableName#4119, isTemporary#4120], ShowTables [namespace#4118, tableName#4119, isTemporary#4120], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#4118, tableName#4119, isTemporary#4120]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#4118, tableName#4119, isTemporary#4120]
+- ShowTables [namespace#4118, tableName#4119, isTemporary#4120], V2SessionCatalog(spark_catalog), [test]

jonathan | | 5f9e9c75-faf6-4f50-9159-5ef4655a51ee | 2025/06/13 22:39:07 | 2025/06/13 22:39:07 | 2025/06/13 22:39:07 | 19 ms | 331 ms | SHOW TABLES IN `test` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#2777, tableName#2778, isTemporary#2779]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2777, tableName#2778, isTemporary#2779]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#2777, tableName#2778, isTemporary#2779], ShowTables [namespace#2777, tableName#2778, isTemporary#2779], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#2777, tableName#2778, isTemporary#2779]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#2777, tableName#2778, isTemporary#2779]
+- ShowTables [namespace#2777, tableName#2778, isTemporary#2779], V2SessionCatalog(spark_catalog), [test]

jonathan | | 49ebf4e0-34db-4e16-8552-40f05c9b3765 | 2025/06/13 22:51:53 | 2025/06/13 22:51:53 | 2025/06/13 22:51:53 | 20 ms | 338 ms | SHOW TABLES IN `test` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3001, tableName#3002, isTemporary#3003]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3001, tableName#3002, isTemporary#3003]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#3001, tableName#3002, isTemporary#3003], ShowTables [namespace#3001, tableName#3002, isTemporary#3003], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#3001, tableName#3002, isTemporary#3003]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#3001, tableName#3002, isTemporary#3003]
+- ShowTables [namespace#3001, tableName#3002, isTemporary#3003], V2SessionCatalog(spark_catalog), [test]

jonathan | | 82681c0c-59dc-4ddb-a8bf-459d962d6b81 | 2025/06/13 23:21:12 | 2025/06/13 23:21:12 | 2025/06/13 23:21:12 | 20 ms | 327 ms | SHOW TABLES IN `test` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3440, tableName#3441, isTemporary#3442]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3440, tableName#3441, isTemporary#3442]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#3440, tableName#3441, isTemporary#3442], ShowTables [namespace#3440, tableName#3441, isTemporary#3442], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#3440, tableName#3441, isTemporary#3442]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#3440, tableName#3441, isTemporary#3442]
+- ShowTables [namespace#3440, tableName#3441, isTemporary#3442], V2SessionCatalog(spark_catalog), [test]

jonathon | | 9fc768ea-28aa-40a0-a236-dbae9669cc6c | 2025/06/13 23:20:56 | 2025/06/13 23:20:56 | 2025/06/13 23:20:56 | 22 ms | 117 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | aebb6834-45b8-4ce9-99c7-81e49b2a4950 | 2025/06/14 06:31:58 | 2025/06/14 06:31:59 | 2025/06/14 06:31:59 | 22 ms | 114 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | d03c9acc-0fe4-4e21-84e1-736635b4db5d | 2025/06/14 06:13:09 | 2025/06/14 06:13:09 | 2025/06/14 06:13:09 | 22 ms | 117 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 44528be3-10bd-467f-b2eb-e296c381fef1 | 2025/06/13 23:38:37 | 2025/06/13 23:38:37 | 2025/06/13 23:38:37 | 23 ms | 121 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 7db97e51-8703-4fc1-8f61-2a343cb12304 | 2025/06/13 22:44:55 | 2025/06/13 22:44:55 | 2025/06/13 22:44:56 | 23 ms | 132 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 8114f5b7-6ee0-40f1-942f-9ea912c309c5 | 2025/06/15 06:48:31 | 2025/06/15 06:48:31 | 2025/06/15 06:48:31 | 23 ms | 116 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | cded8feb-be09-499b-8a5b-645bcfd08839 | 2025/06/15 06:43:48 | 2025/06/15 06:43:48 | 2025/06/15 06:43:48 | 23 ms | 115 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | f4914c47-dc63-4c48-b54d-9a23d0814768 | 2025/06/14 01:46:19 | 2025/06/14 01:46:19 | 2025/06/14 01:46:19 | 23 ms | 119 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 557d3a44-cbef-455b-99fc-f26ee1546387 | 2025/06/13 22:37:47 | 2025/06/13 22:37:47 | 2025/06/13 22:37:47 | 24 ms | 116 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 6504b11a-5997-41dd-9b7a-85372e54da21 | 2025/06/13 23:38:36 | 2025/06/13 23:38:36 | 2025/06/13 23:38:36 | 24 ms | 120 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | 98a0c092-b2d5-4d49-be3a-4740a60763fb | 2025/06/13 22:54:10 | 2025/06/13 22:54:10 | 2025/06/13 22:54:10 | 24 ms | 350 ms | Listing databases 'catalog : , schemaPattern : null' | CLOSED |
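
The Listing catalogs, Listing databases, and Listing columns rows are not SQL text: they describe JDBC/ODBC metadata operations (GetCatalogs, GetSchemas, GetColumns) that a client such as a BI tool issues through java.sql.DatabaseMetaData, and the quoted catalog/schemaPattern/tablePattern/columnName values are the arguments it passed, null meaning no filter. A rough session-side sketch of the same lookups via Spark's own catalog API (an analogue, not the actual Thrift operations), again reusing the `spark` session from the first sketch:

    # Sketch: approximate equivalents of the metadata operations logged above.
    spark.catalog.listDatabases()                     # ~ Listing databases
    spark.catalog.listColumns("airports", "default")  # ~ Listing columns
    spark.catalog.listCatalogs()                      # ~ Listing catalogs (Spark 3.4+)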

jonathon | | e0086b25-30c5-4faa-8f94-69f3645f7be2 | 2025/06/14 05:46:27 | 2025/06/14 05:46:27 | 2025/06/14 05:46:27 | 24 ms | 177 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 0f303db8-cb06-43c4-8d23-de2570a9098d | 2025/06/13 23:29:59 | 2025/06/13 23:29:59 | 2025/06/13 23:29:59 | 25 ms | 137 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | 12459dfe-ccf0-4f29-ae6a-39888705b926 | 2025/06/13 23:27:02 | 2025/06/13 23:27:02 | 2025/06/13 23:27:02 | 25 ms | 342 ms | SHOW TABLES IN `default` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3748, tableName#3749, isTemporary#3750]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3748, tableName#3749, isTemporary#3750]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#3748, tableName#3749, isTemporary#3750], ShowTables [namespace#3748, tableName#3749, isTemporary#3750], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#3748, tableName#3749, isTemporary#3750]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#3748, tableName#3749, isTemporary#3750]
+- ShowTables [namespace#3748, tableName#3749, isTemporary#3750], V2SessionCatalog(spark_catalog), [default]
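
Unlike the empty results above, this CommandResult carries rows, shown as Spark's internal UnsafeRow dump: each bracketed element is one 8-byte word of a row, printed as a little-endian long on a typical host, so ASCII payloads read byte-reversed. A quick illustrative sketch decoding two of the words from the first row:

    # Sketch: decode UnsafeRow hex words from the plan dump above.
    # Words print as little-endian longs, so reverse the bytes first.
    def decode_word(word: str) -> str:
        return bytes.fromhex(word)[::-1].decode("ascii")

    print(decode_word("746c7561666564"))    # -> 'default'  (namespace column)
    print(decode_word("7374726f70726961"))  # -> 'airports' (tableName column)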

jonathon | | 2e20f6db-4145-4b48-b397-bbea3e2aa67d | 2025/06/15 06:45:39 | 2025/06/15 06:45:39 | 2025/06/15 06:45:39 | 26 ms | 179 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 9bb688da-63cc-4516-91e2-55d08768cd7b | 2025/06/13 07:55:32 | 2025/06/13 07:55:32 | 2025/06/13 07:55:32 | 26 ms | 182 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | 9dc4101e-0cb8-45a6-b9d0-d89d90d2788b | 2025/06/13 23:34:50 | 2025/06/13 23:34:50 | 2025/06/13 23:34:50 | 26 ms | 357 ms | SHOW TABLES IN `default` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#4078, tableName#4079, isTemporary#4080]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4078, tableName#4079, isTemporary#4080]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#4078, tableName#4079, isTemporary#4080], ShowTables [namespace#4078, tableName#4079, isTemporary#4080], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#4078, tableName#4079, isTemporary#4080]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#4078, tableName#4079, isTemporary#4080]
+- ShowTables [namespace#4078, tableName#4079, isTemporary#4080], V2SessionCatalog(spark_catalog), [default]

jonathan | | bcc16f8c-edd3-4141-87ec-368a4f20cc6c | 2025/06/13 22:51:53 | 2025/06/13 22:51:53 | 2025/06/13 22:51:53 | 26 ms | 335 ms | SHOW TABLES IN `onetableschema` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#2981, tableName#2982, isTemporary#2983]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2981, tableName#2982, isTemporary#2983]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#2981, tableName#2982, isTemporary#2983], ShowTables [namespace#2981, tableName#2982, isTemporary#2983], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#2981, tableName#2982, isTemporary#2983]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#2981, tableName#2982, isTemporary#2983]
+- ShowTables [namespace#2981, tableName#2982, isTemporary#2983], V2SessionCatalog(spark_catalog), [onetableschema]

jonathon | | c15f7b20-7e3e-4f66-9e0b-e3e86343ca24 | 2025/06/14 01:23:35 | 2025/06/14 01:23:35 | 2025/06/14 01:23:35 | 26 ms | 179 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | d3607e54-df5f-406c-b493-bbdc49b92c86 | 2025/06/13 23:27:03 | 2025/06/13 23:27:03 | 2025/06/13 23:27:03 | 26 ms | 337 ms | SHOW TABLES IN `onetableschema` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3768, tableName#3769, isTemporary#3770]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3768, tableName#3769, isTemporary#3770]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#3768, tableName#3769, isTemporary#3770], ShowTables [namespace#3768, tableName#3769, isTemporary#3770], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#3768, tableName#3769, isTemporary#3770]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#3768, tableName#3769, isTemporary#3770]
+- ShowTables [namespace#3768, tableName#3769, isTemporary#3770], V2SessionCatalog(spark_catalog), [onetableschema]

jonathon | | d8e08b2c-1486-4408-b18e-49868f58bf30 | 2025/06/13 23:20:55 | 2025/06/13 23:20:55 | 2025/06/13 23:20:55 | 26 ms | 121 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | fc40a089-9a06-46e1-a564-f544f85f247b | 2025/06/14 01:47:47 | 2025/06/14 01:47:47 | 2025/06/14 01:47:47 | 26 ms | 119 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | fe2eeb0c-fe37-4f9f-849d-846445773af0 | 2025/06/13 22:54:11 | 2025/06/13 22:54:11 | 2025/06/13 22:54:12 | 26 ms | 345 ms | SHOW TABLES IN `onetableschema` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3061, tableName#3062, isTemporary#3063]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3061, tableName#3062, isTemporary#3063]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#3061, tableName#3062, isTemporary#3063], ShowTables [namespace#3061, tableName#3062, isTemporary#3063], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#3061, tableName#3062, isTemporary#3063]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#3061, tableName#3062, isTemporary#3063]
+- ShowTables [namespace#3061, tableName#3062, isTemporary#3063], V2SessionCatalog(spark_catalog), [onetableschema]

jonathan | | 2952ca04-a82d-4b6f-bba3-517aa8866d70 | 2025/06/13 22:54:12 | 2025/06/13 22:54:12 | 2025/06/13 22:54:12 | 27 ms | 361 ms | SHOW TABLES IN `test` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3081, tableName#3082, isTemporary#3083]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3081, tableName#3082, isTemporary#3083]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#3081, tableName#3082, isTemporary#3083], ShowTables [namespace#3081, tableName#3082, isTemporary#3083], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#3081, tableName#3082, isTemporary#3083]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#3081, tableName#3082, isTemporary#3083]
+- ShowTables [namespace#3081, tableName#3082, isTemporary#3083], V2SessionCatalog(spark_catalog), [test]

jonathon | | 680ed7d0-342a-4620-b3a1-2f57b0058141 | 2025/06/13 22:18:18 | 2025/06/13 22:18:18 | 2025/06/13 22:18:18 | 27 ms | 122 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | fd73230f-0d28-4938-91c3-3beae7d8300b | 2025/06/13 06:57:07 | 2025/06/13 06:57:07 | 2025/06/13 06:57:07 | 27 ms | 122 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | 14fe4125-ce7b-47ec-8eae-1f27d268514f | 2025/06/13 23:27:01 | 2025/06/13 23:27:01 | 2025/06/13 23:27:02 | 28 ms | 364 ms | Listing databases 'catalog : , schemaPattern : null' | CLOSED |
jonathon | | b771d1e2-ba8c-47c4-ad6f-e0223ade0b2c | 2025/06/13 22:37:48 | 2025/06/13 22:37:48 | 2025/06/13 22:37:48 | 28 ms | 120 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | ebbdceac-2358-42f5-91b4-13bf7f038e68 | 2025/06/13 23:21:11 | 2025/06/13 23:21:11 | 2025/06/13 23:21:12 | 28 ms | 336 ms | SHOW TABLES IN `onetableschema` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3420, tableName#3421, isTemporary#3422]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3420, tableName#3421, isTemporary#3422]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#3420, tableName#3421, isTemporary#3422], ShowTables [namespace#3420, tableName#3421, isTemporary#3422], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#3420, tableName#3421, isTemporary#3422]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#3420, tableName#3421, isTemporary#3422]
+- ShowTables [namespace#3420, tableName#3421, isTemporary#3422], V2SessionCatalog(spark_catalog), [onetableschema]

jonathon | | 22a79b78-383b-4630-9d21-3f3edc29d332 | 2025/06/14 01:47:47 | 2025/06/14 01:47:48 | 2025/06/14 01:47:48 | 29 ms | 124 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | 2641dfe3-8486-4965-b35e-af9a5acad4e5 | 2025/06/13 07:16:52 | 2025/06/13 07:16:52 | 2025/06/13 07:16:52 | 29 ms | 125 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | 32526ce1-c7c7-4fb6-9e54-86de31256f92 | 2025/06/13 23:34:49 | 2025/06/13 23:34:49 | 2025/06/13 23:34:49 | 29 ms | 337 ms | Listing databases 'catalog : , schemaPattern : null' | CLOSED |
jonathan | | 8081e0a0-fa1f-4bd2-a749-eebb493b1c08 | 2025/06/13 22:51:51 | 2025/06/13 22:51:51 | 2025/06/13 22:51:51 | 29 ms | 357 ms | Listing databases 'catalog : , schemaPattern : null' | CLOSED |
jonathan | | 69bf0b73-65ae-4ec0-b457-1ad9a9ab7c3f | 2025/06/13 23:27:02 | 2025/06/13 23:27:02 | 2025/06/13 23:27:02 | 30 ms | 363 ms | SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#3728, tableName#3729, isTemporary#3730]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3728, tableName#3729, isTemporary#3730]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#3728, tableName#3729, isTemporary#3730], ShowTables [namespace#3728, tableName#3729, isTemporary#3730], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#3728, tableName#3729, isTemporary#3730]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#3728, tableName#3729, isTemporary#3730]
+- ShowTables [namespace#3728, tableName#3729, isTemporary#3730], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]

jonathon | | 63f33d59-8ed5-4bfc-a086-5532e1e184fa | 2025/06/13 07:16:51 | 2025/06/13 07:16:51 | 2025/06/13 07:16:51 | 32 ms | 126 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathon | | adb63f8e-9127-4633-a15c-57cbd5cdcc78 | 2025/06/14 05:46:26 | 2025/06/14 05:46:26 | 2025/06/14 05:46:26 | 32 ms | 185 ms | Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null' | CLOSED |
jonathan | | 1b050a71-1141-4469-a1b5-a1b8be6eaf70 | 2025/06/13 23:34:51 | 2025/06/13 23:34:51 | 2025/06/13 23:34:51 | 33 ms | 343 ms | SHOW TABLES IN `onetableschema` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#4098, tableName#4099, isTemporary#4100]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4098, tableName#4099, isTemporary#4100]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#4098, tableName#4099, isTemporary#4100], ShowTables [namespace#4098, tableName#4099, isTemporary#4100], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#4098, tableName#4099, isTemporary#4100]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#4098, tableName#4099, isTemporary#4100]
+- ShowTables [namespace#4098, tableName#4099, isTemporary#4100], V2SessionCatalog(spark_catalog), [onetableschema]

jonathon | | 73745f82-25b8-47d8-967b-3efa1b283367 | 2025/06/15 06:45:37 | 2025/06/15 06:45:37 | 2025/06/15 06:45:37 | 33 ms | 262 ms | set -v | CLOSED | (plan below)
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#5423, value#5424, meaning#5425, Since version#5426], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#5423, value#5424, meaning#5425, Since version#5426]
+- Execute SetCommand
+- SetCommand (-v,None)
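
A set -v statement returns every documented SQLConf entry as a row with key, value, meaning, and "Since version" columns, which is why the CommandResult above carries a couple of hundred fields; JDBC clients often issue it on connect to discover server settings. A small sketch, reusing the `spark` session from the first sketch, of running it and filtering rather than scrolling the full dump:

    # Sketch: SET -v yields key/value/meaning/"Since version" columns.
    confs = spark.sql("SET -v")
    confs.filter("key LIKE 'spark.sql.adaptive.%'").show(truncate=False)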

jonathon | | b457c7ec-4fa6-4ab0-aed5-e03441cf70d2 | 2025/06/15 06:48:30 | 2025/06/15 06:48:30 | 2025/06/15 06:48:30 | 33 ms | 176 ms | set -v | CLOSED | (plan below)
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#5567, value#5568, meaning#5569, Since version#5570], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#5567, value#5568, meaning#5569, Since version#5570]
+- Execute SetCommand
+- SetCommand (-v,None)

jonathan | | c4177770-6474-4fde-940d-efc9fd04cb71 | 2025/06/13 22:39:05 | 2025/06/13 22:39:05 | 2025/06/13 22:39:05 | 34 ms | 355 ms | Listing databases 'catalog : , schemaPattern : null' | CLOSED |
jonathan | | fef01b9d-1d9a-4022-a02c-24c2343faa8c | 2025/06/13 22:39:06 | 2025/06/13 22:39:06 | 2025/06/13 22:39:07 | 34 ms | 342 ms | SHOW TABLES IN `onetableschema` | CLOSED | (plan below)
== Parsed Logical Plan ==
'ShowTables [namespace#2757, tableName#2758, isTemporary#2759]
+- 'UnresolvedNamespace [onetableschema]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2757, tableName#2758, isTemporary#2759]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Optimized Logical Plan ==
CommandResult [namespace#2757, tableName#2758, isTemporary#2759], ShowTables [namespace#2757, tableName#2758, isTemporary#2759], V2SessionCatalog(spark_catalog), [onetableschema], [[0,200000000e,300000000c,0,656c626174656e6f,616d65686373,73657079746c6c61,74736574]]
+- ShowTables [namespace#2757, tableName#2758, isTemporary#2759]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [onetableschema]
== Physical Plan ==
CommandResult [namespace#2757, tableName#2758, isTemporary#2759]
+- ShowTables [namespace#2757, tableName#2758, isTemporary#2759], V2SessionCatalog(spark_catalog), [onetableschema]

jonathon | | 2013b900-8b91-46bc-8f52-ea83e6bc1e23 | 2025/06/14 06:31:57 | 2025/06/14 06:31:57 | 2025/06/14 06:31:58 | 35 ms | 177 ms | set -v | CLOSED | (plan below)
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#5135, value#5136, meaning#5137, Since version#5138], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#5135, value#5136, meaning#5137, Since version#5138]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
ed3cdc17-162d-4b0b-8d62-e4943ebd2ef3
|
2025/06/14 01:47:46
|
2025/06/14 01:47:46
|
2025/06/14 01:47:47
|
35 ms
|
179 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4703, value#4704, meaning#4705, Since version#4706], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4703, value#4704, meaning#4705, Since version#4706]
+- Execute SetCommand
+- SetCommand (-v,None)
|
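The repeated "set -v" entries in this log record clients dumping the server's full SQL configuration. As a rough sketch of how such a request is typically issued over JDBC (the endpoint URL, user name, and filter below are placeholders for illustration, not values taken from this log; the Hive JDBC driver must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DumpSqlConfs {
    public static void main(String[] args) throws Exception {
        // Placeholder Thrift server endpoint; adjust host, port, and credentials.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement stmt = conn.createStatement();
             // SET -v yields one row per config with the columns seen in the
             // analyzed plans above: key, value, meaning, Since version.
             ResultSet rs = stmt.executeQuery("SET -v")) {
            while (rs.next()) {
                String key = rs.getString("key");
                // Example filter: only the adaptive-execution settings.
                if (key.startsWith("spark.sql.adaptive.")) {
                    System.out.println(key + " = " + rs.getString("value"));
                }
            }
        }
    }
}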
jonathon
|
|
de4e9f6b-1dde-46e8-a9f8-94f35fbf6cd1
|
2025/06/13 22:44:54
|
2025/06/13 22:44:54
|
2025/06/13 22:44:54
|
36 ms
|
209 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#2797, value#2798, meaning#2799, Since version#2800], Execute SetCommand, [... same configuration listing as in the first set -v plan above ...]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#2797, value#2798, meaning#2799, Since version#2800]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
52fc6c77-7c71-4a15-bcf7-b3b231790177
|
2025/06/13 23:35:50
|
2025/06/13 23:35:50
|
2025/06/13 23:35:50
|
37 ms
|
119 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : alltypes, columnName : null'
|
CLOSED
|
|
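The "Listing columns" operation above corresponds to a JDBC metadata call rather than a SQL statement. A minimal sketch that would produce the same logged parameters (catalog : null, schemaPattern : default, tablePattern : alltypes, columnName : null); the connection details are placeholders:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListColumns {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://localhost:10000/default"; // placeholder endpoint
        try (Connection conn = DriverManager.getConnection(url, "user", "")) {
            DatabaseMetaData md = conn.getMetaData();
            // catalog=null, schemaPattern="default",
            // tableNamePattern="alltypes", columnNamePattern=null,
            // mirroring the parameters in the log entry above.
            try (ResultSet cols = md.getColumns(null, "default", "alltypes", null)) {
                while (cols.next()) {
                    System.out.println(cols.getString("COLUMN_NAME")
                            + " : " + cols.getString("TYPE_NAME"));
                }
            }
        }
    }
}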
jonathon
|
|
b311ad42-7f00-40ab-a53b-11a9f42d88bf
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
37 ms
|
180 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4991, value#4992, meaning#4993, Since version#4994], Execute SetCommand, [... same configuration listing as in the first set -v plan above ...]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4991, value#4992, meaning#4993, Since version#4994]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
215ae1d6-e907-4ea3-a95a-762cd54e7fb6
|
2025/06/13 22:51:52
|
2025/06/13 22:51:52
|
2025/06/13 22:51:52
|
38 ms
|
365 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2941, tableName#2942, isTemporary#2943]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2941, tableName#2942, isTemporary#2943]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#2941, tableName#2942, isTemporary#2943], ShowTables [namespace#2941, tableName#2942, isTemporary#2943], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#2941, tableName#2942, isTemporary#2943]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#2941, tableName#2942, isTemporary#2943]
+- ShowTables [namespace#2941, tableName#2942, isTemporary#2943], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
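The SHOW TABLES statements in this log return the three columns shown in their analyzed plans (namespace, tableName, isTemporary). A minimal client-side sketch for the statement above, again with a placeholder endpoint:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowTables {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://localhost:10000/default"; // placeholder endpoint
        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`")) {
            while (rs.next()) {
                // Column names match the analyzed schema above.
                System.out.println(rs.getString("namespace") + "."
                        + rs.getString("tableName")
                        + (rs.getBoolean("isTemporary") ? " (temporary)" : ""));
            }
        }
    }
}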
jonathan
|
|
500a6e02-2c16-4a5f-aa79-c12bf659d6f8
|
2025/06/13 23:21:10
|
2025/06/13 23:21:10
|
2025/06/13 23:21:10
|
38 ms
|
387 ms
|
Listing databases 'catalog : , schemaPattern : null'
|
CLOSED
|
|
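"Listing databases" likewise maps to a JDBC metadata call; the logged parameters (catalog : , schemaPattern : null) correspond to an empty catalog string and a null schema pattern. A sketch under the same placeholder connection:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListDatabases {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://localhost:10000/default"; // placeholder endpoint
        try (Connection conn = DriverManager.getConnection(url, "user", "")) {
            DatabaseMetaData md = conn.getMetaData();
            // catalog="" (empty string), schemaPattern=null, as in the log entry.
            try (ResultSet schemas = md.getSchemas("", null)) {
                while (schemas.next()) {
                    System.out.println(schemas.getString("TABLE_SCHEM"));
                }
            }
        }
    }
}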
jonathon
|
|
fddd6370-e99c-4ab9-81e3-a34c12f9ed88
|
2025/06/14 01:23:33
|
2025/06/14 01:23:33
|
2025/06/14 01:23:33
|
38 ms
|
271 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4415, value#4416, meaning#4417, Since version#4418], Execute SetCommand, [... same configuration listing as in the first set -v plan above ...]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4415, value#4416, meaning#4417, Since version#4418]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
8085a422-355a-48ff-88af-508c1d5bdb35
|
2025/06/14 05:46:25
|
2025/06/14 05:46:25
|
2025/06/14 05:46:26
|
39 ms
|
272 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4847, value#4848, meaning#4849, Since version#4850], Execute SetCommand, [... same configuration listing as in the first set -v plan above ...]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4847, value#4848, meaning#4849, Since version#4850]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
a388ecc4-fd27-4ce8-b100-f7efb4dc03f5
|
2025/06/13 23:38:35
|
2025/06/13 23:38:36
|
2025/06/13 23:38:36
|
39 ms
|
240 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4217, value#4218, meaning#4219, Since version#4220], Execute SetCommand, [... same configuration listing as in the first set -v plan above ...]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4217, value#4218, meaning#4219, Since version#4220]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
b35330e2-2881-406e-9686-7716a3933bd4
|
2025/06/13 23:27:03
|
2025/06/13 23:27:03
|
2025/06/13 23:27:03
|
39 ms
|
320 ms
|
SHOW TABLES IN `test`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3788, tableName#3789, isTemporary#3790]
+- 'UnresolvedNamespace [test]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3788, tableName#3789, isTemporary#3790]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Optimized Logical Plan ==
CommandResult [namespace#3788, tableName#3789, isTemporary#3790], ShowTables [namespace#3788, tableName#3789, isTemporary#3790], V2SessionCatalog(spark_catalog), [test]
+- ShowTables [namespace#3788, tableName#3789, isTemporary#3790]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [test]
== Physical Plan ==
CommandResult <empty>, [namespace#3788, tableName#3789, isTemporary#3790]
+- ShowTables [namespace#3788, tableName#3789, isTemporary#3790], V2SessionCatalog(spark_catalog), [test]
|
jonathon
|
|
0b3af494-d430-4b8d-8d5c-c65a2a33f5fa
|
2025/06/13 07:55:30
|
2025/06/13 07:55:30
|
2025/06/13 07:55:30
|
40 ms
|
272 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#2177, value#2178, meaning#2179, Since version#2180], Execute SetCommand, [... same configuration listing as in the first set -v plan above ...]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#2177, value#2178, meaning#2179, Since version#2180]
+- Execute SetCommand
+- SetCommand (-v,None)
|
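The record above shows how the server reports an eagerly executed command: the Parsed and Analyzed plans are the bare SetCommand, while the Optimized and Physical plans wrap it in a CommandResult node that already carries the command's output rows (the key/value/meaning/since-version listing). A minimal sketch of reproducing this from any Spark SQL session, using only statements that appear in this log plus a single-key lookup:

    -- Dump every configuration with its meaning and the version it was added in,
    -- exactly what the `set -v` statements recorded here return.
    SET -v;

    -- Check a single entry from that listing, e.g. whether AQE is on.
    SET spark.sql.adaptive.enabled;
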
jonathon
|
|
339b74a3-7492-4035-b9f6-bfff4a4916de
|
2025/06/13 23:20:55
|
2025/06/13 23:20:55
|
2025/06/13 23:20:55
|
40 ms
|
184 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#3236, value#3237, meaning#3238, Since version#3239], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#3236, value#3237, meaning#3238, Since version#3239]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathan
|
|
d2b58a5b-b61f-4d30-92ef-9837d1d25622
|
2025/06/13 22:39:05
|
2025/06/13 22:39:06
|
2025/06/13 22:39:06
|
40 ms
|
379 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2717, tableName#2718, isTemporary#2719]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2717, tableName#2718, isTemporary#2719]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#2717, tableName#2718, isTemporary#2719], ShowTables [namespace#2717, tableName#2718, isTemporary#2719], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#2717, tableName#2718, isTemporary#2719]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#2717, tableName#2718, isTemporary#2719]
+- ShowTables [namespace#2717, tableName#2718, isTemporary#2719], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
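The SHOW TABLES record above illustrates namespace resolution: the parser emits an UnresolvedNamespace, which the analyzer resolves against V2SessionCatalog(spark_catalog) before the command runs. A short sketch of reproducing both the statement and its plan stages; the EXPLAIN form is an assumption about how a reader would inspect the Parsed/Analyzed/Optimized/Physical stages the server logs:

    -- List tables in a namespace, as in the `SHOW TABLES IN ...` records here.
    SHOW TABLES IN global_temp;

    -- Print all four plan stages, including the ResolvedNamespace node.
    EXPLAIN EXTENDED SHOW TABLES IN global_temp;
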
jonathon
|
|
be589622-87e4-495f-ab8e-f1bb824cae28
|
2025/06/13 07:16:50
|
2025/06/13 07:16:51
|
2025/06/13 07:16:51
|
41 ms
|
183 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#2033, value#2034, meaning#2035, Since version#2036], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#2033, value#2034, meaning#2035, Since version#2036]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
089371d8-0d89-489e-bad8-9288df1edeee
|
2025/06/13 22:18:17
|
2025/06/13 22:18:17
|
2025/06/13 22:18:17
|
42 ms
|
184 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#2429, value#2430, meaning#2431, Since version#2432], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#2429, value#2430, meaning#2431, Since version#2432]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
107e708d-28e8-47cf-bdff-8031a1fe0208
|
2025/06/15 06:43:47
|
2025/06/15 06:43:47
|
2025/06/15 06:43:47
|
42 ms
|
180 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#5279, value#5280, meaning#5281, Since version#5282], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#5279, value#5280, meaning#5281, Since version#5282]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
d5105876-2692-43cf-a8ec-b8e810803663
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
42 ms
|
353 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#3808, value#3809, meaning#3810, Since version#3811], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#3808, value#3809, meaning#3810, Since version#3811]
+- Execute SetCommand
+- SetCommand (-v,None)
|
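The adaptive-execution entries in the `set -v` output above come with tuning advice: the description of spark.sql.adaptive.coalescePartitions.parallelismFirst recommends setting it to false so coalescing respects the configured target size (default 64MB). A session-level sketch along those lines; the values are illustrative, not taken from this log:

    SET spark.sql.adaptive.enabled=true;
    -- Recommended by the config's own description: respect the target size.
    SET spark.sql.adaptive.coalescePartitions.parallelismFirst=false;
    SET spark.sql.adaptive.advisoryPartitionSizeInBytes=64MB;
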
jonathan
|
|
53514d4b-1396-435a-949f-58e0bbcf0f24
|
2025/06/13 23:35:49
|
2025/06/13 23:35:49
|
2025/06/13 23:35:49
|
44 ms
|
156 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4138, value#4139, meaning#4140, Since version#4141], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4138, value#4139, meaning#4140, Since version#4141]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
31082431-b5fd-4dcf-aa9f-4ac9f4f0b874
|
2025/06/15 06:48:31
|
2025/06/15 06:48:31
|
2025/06/15 06:48:31
|
45 ms
|
138 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
3688010e-d716-46d7-b78f-abecb7972de1
|
2025/06/15 06:45:38
|
2025/06/15 06:45:38
|
2025/06/15 06:45:38
|
45 ms
|
197 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
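The two `Listing columns ...` operations above are JDBC metadata calls (DatabaseMetaData.getColumns with schemaPattern `default` and tablePattern `airports`) rather than SQL statements, which is why no plan is recorded for them. A rough SQL equivalent for the same lookup; the statement itself is an assumption about what the client was after:

    -- Roughly what getColumns(null, "default", "airports", null) returns.
    DESCRIBE TABLE default.airports;
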
jonathon
|
|
43a87bef-72a9-4d8b-94b3-ae38d5701de3
|
2025/06/13 22:37:47
|
2025/06/13 22:37:47
|
2025/06/13 22:37:47
|
45 ms
|
182 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#2573, value#2574, meaning#2575, Since version#2576], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#2573, value#2574, meaning#2575, Since version#2576]
+- Execute SetCommand
+- SetCommand (-v,None)
|
jonathon
|
|
b2bee24c-a2fb-400a-8659-3fb5d9b8f60c
|
2025/06/14 01:46:18
|
2025/06/14 01:46:18
|
2025/06/14 01:46:18
|
45 ms
|
195 ms
|
set -v
|
CLOSED
|
== Parsed Logical Plan ==
SetCommand (-v,None)
== Analyzed Logical Plan ==
key: string, value: string, meaning: string, Since version: string
SetCommand (-v,None)
== Optimized Logical Plan ==
CommandResult [key#4559, value#4560, meaning#4561, Since version#4562], Execute SetCommand, [[spark.sql.adaptive.advisoryPartitionSizeInBytes,<value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize>,The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partition.,3.0.0], [spark.sql.adaptive.autoBroadcastJoinThreshold,<undefined>,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. The default value is same with spark.sql.autoBroadcastJoinThreshold. Note that, this config is used only in adaptive framework.,3.2.0], [spark.sql.adaptive.coalescePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.,3.0.0], [spark.sql.adaptive.coalescePartitions.initialPartitionNum,<undefined>,The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.,3.0.0], [spark.sql.adaptive.coalescePartitions.minPartitionSize,1MB,The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.,3.2.0], [spark.sql.adaptive.coalescePartitions.parallelismFirst,true,When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculate the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.,3.2.0], [spark.sql.adaptive.customCostEvaluatorClass,<undefined>,The custom cost evaluator class to be used for adaptive execution. If not being set, Spark will use its own SimpleCostEvaluator by default.,3.2.0], [spark.sql.adaptive.enabled,true,When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.,1.6.0], [spark.sql.adaptive.forceOptimizeSkewedJoin,false,When true, force enable OptimizeSkewedJoin even if it introduces extra shuffle.,3.3.0], [spark.sql.adaptive.localShuffleReader.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.,3.0.0], [spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold,0b,Configures the maximum size in bytes per partition that can be allowed to build local hash map. 
If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all the partition size are not larger than this config, join selection prefer to use shuffled hash join instead of sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.,3.2.0], [spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.,3.2.0], [spark.sql.adaptive.optimizer.excludedRules,<undefined>,Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by comma. The optimizer will log the rules that have indeed been excluded.,3.1.0], [spark.sql.adaptive.rebalancePartitionsSmallPartitionFactor,0.2,A partition will be merged during splitting if its size is small than this factor multiply spark.sql.adaptive.advisoryPartitionSizeInBytes.,3.3.0], [spark.sql.adaptive.skewJoin.enabled,true,When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.,3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionFactor,5.0,A partition is considered as skewed if its size is larger than this factor multiplying the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes',3.0.0], [spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes,256MB,A partition is considered as skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplying the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.,3.0.0], [spark.sql.allowNamedFunctionArguments,true,If true, Spark will turn on support for named parameters for all functions that has it implemented.,3.5.0], [spark.sql.ansi.doubleQuotedIdentifiers,false,When true and 'spark.sql.ansi.enabled' is true, Spark SQL reads literals enclosed in double quoted (") as identifiers. When false they are read as string literals.,3.4.0], [spark.sql.ansi.enabled,false,When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid.For full details of this dialect, you can find them in the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may be not from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style,3.0.0], [spark.sql.ansi.enforceReservedKeywords,false,When true and 'spark.sql.ansi.enabled' is true, the Spark SQL parser enforces the ANSI reserved keywords and forbids SQL queries that use reserved keywords as alias names and/or identifiers for table, view, function, etc.,3.3.0], [spark.sql.ansi.relationPrecedence,false,When true and 'spark.sql.ansi.enabled' is true, JOIN takes precedence over comma when combining relation. For example, `t1, t2 JOIN t3` should result to `t1 X (t2 X t3)`. 
If the config is false, the result is `(t1 X t2) X t3`.,3.4.0], [spark.sql.autoBroadcastJoinThreshold,10MB,Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run, and file-based data source tables where the statistics are computed directly on the files of data.,1.1.0], [spark.sql.avro.compression.codec,snappy,Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.,2.4.0], ... 183 more fields]
+- SetCommand (-v,None)
== Physical Plan ==
CommandResult [key#4559, value#4560, meaning#4561, Since version#4562]
+- Execute SetCommand
+- SetCommand (-v,None)
|
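The `set -v` records above show SetCommand executed eagerly, with its output wrapped in a CommandResult node. A minimal PySpark sketch of reproducing the same listing from a client (assuming a local session; the app name is illustrative):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("set-v-demo").getOrCreate()

    # `SET -v` yields one row per SQL config: key, value, meaning,
    # "Since version" -- the four CommandResult columns in the plans above.
    configs = spark.sql("SET -v")
    configs.filter("key = 'spark.sql.adaptive.enabled'").show(truncate=False)

    # A single config can also be read directly:
    print(spark.conf.get("spark.sql.adaptive.enabled"))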
jonathon
|
|
d0d4aba6-8208-43a0-a733-6f6a2284b010
|
2025/06/13 22:44:55
|
2025/06/13 22:44:55
|
2025/06/13 22:44:55
|
47 ms
|
147 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
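The "Listing columns" entries are not SQL text: they are JDBC/ODBC GetColumns metadata requests handled by the server, which is why no plan is attached. A rough client-side equivalent of the request logged above (schemaPattern default, tablePattern airports) via the PySpark Catalog API -- a sketch, not the Thrift RPC itself:

    # Assumes the `spark` session from the earlier sketch and that a
    # `default.airports` table exists, as the log implies.
    for col in spark.catalog.listColumns("airports", dbName="default"):
        print(col.name, col.dataType, col.nullable)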
jonathan
|
|
42443fae-e69b-4568-86b8-a04c28fdddd2
|
2025/06/13 22:39:06
|
2025/06/13 22:39:06
|
2025/06/13 22:39:06
|
48 ms
|
325 ms
|
SHOW TABLES IN `default`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2737, tableName#2738, isTemporary#2739]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2737, tableName#2738, isTemporary#2739]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#2737, tableName#2738, isTemporary#2739], ShowTables [namespace#2737, tableName#2738, isTemporary#2739], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#2737, tableName#2738, isTemporary#2739]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#2737, tableName#2738, isTemporary#2739]
+- ShowTables [namespace#2737, tableName#2738, isTemporary#2739], V2SessionCatalog(spark_catalog), [default]
|
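Every record above reports the same four stages: Parsed, Analyzed, Optimized, Physical. The same breakdown can be requested directly from a client; a sketch, assuming the `spark` session from the earlier example:

    # Programmatic form: prints all four plan sections for the statement.
    spark.sql("SHOW TABLES IN `default`").explain(extended=True)

    # Pure-SQL form of the same request.
    spark.sql("EXPLAIN EXTENDED SHOW TABLES IN `default`").show(truncate=False)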
jonathan
|
|
639315b9-5fb3-48fb-a136-577613f544f8
|
2025/06/13 22:54:11
|
2025/06/13 22:54:11
|
2025/06/13 22:54:11
|
48 ms
|
318 ms
|
SHOW TABLES IN `default`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3041, tableName#3042, isTemporary#3043]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3041, tableName#3042, isTemporary#3043]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#3041, tableName#3042, isTemporary#3043], ShowTables [namespace#3041, tableName#3042, isTemporary#3043], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#3041, tableName#3042, isTemporary#3043]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#3041, tableName#3042, isTemporary#3043]
+- ShowTables [namespace#3041, tableName#3042, isTemporary#3043], V2SessionCatalog(spark_catalog), [default]
|
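The bracketed hex rows inside CommandResult are UnsafeRow debug output: after the fixed-size header words, each token is an 8-byte word printed as a hex integer, so UTF-8 string payloads appear byte-reversed. A sketch that recovers the readable values from the payload words above (assumption: the helper is only fed string-payload words, which holds for these rows but not for UnsafeRows in general):

    def decode_words(words):
        # Reverse each little-endian word, then join the fragments.
        return b"".join(bytes.fromhex(w)[::-1] for w in words).decode("utf-8")

    print(decode_words(["746c7561666564"]))    # -> default (the namespace)
    print(decode_words(["7374726f70726961"]))  # -> airports (a table name)
    print(decode_words(["74676e696b726f77", "6874697773657079", "7265626d756e"]))
    # -> workingtypeswithnumber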
jonathon
|
|
e68ea7bc-3ad4-4647-854d-1d6c077dde95
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
2025/06/14 06:13:08
|
48 ms
|
142 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
23d0bb45-2016-4751-8b44-535843d1a17f
|
2025/06/13 23:21:11
|
2025/06/13 23:21:11
|
2025/06/13 23:21:11
|
49 ms
|
317 ms
|
SHOW TABLES IN `default`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3400, tableName#3401, isTemporary#3402]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3400, tableName#3401, isTemporary#3402]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#3400, tableName#3401, isTemporary#3402], ShowTables [namespace#3400, tableName#3401, isTemporary#3402], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#3400, tableName#3401, isTemporary#3402]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#3400, tableName#3401, isTemporary#3402]
+- ShowTables [namespace#3400, tableName#3401, isTemporary#3402], V2SessionCatalog(spark_catalog), [default]
|
jonathon
|
|
24b341de-4aa7-4439-9d38-e87f336daa63
|
2025/06/13 07:55:31
|
2025/06/13 07:55:31
|
2025/06/13 07:55:31
|
49 ms
|
203 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
35f04b2d-808f-4f23-b9f6-508233a04899
|
2025/06/13 22:54:10
|
2025/06/13 22:54:11
|
2025/06/13 22:54:11
|
50 ms
|
375 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3021, tableName#3022, isTemporary#3023]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3021, tableName#3022, isTemporary#3023]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#3021, tableName#3022, isTemporary#3023], ShowTables [namespace#3021, tableName#3022, isTemporary#3023], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#3021, tableName#3022, isTemporary#3023]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#3021, tableName#3022, isTemporary#3023]
+- ShowTables [namespace#3021, tableName#3022, isTemporary#3023], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
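Decoding the single row in this result with the same hypothetical decode_words helper from the earlier sketch shows the namespace and its one table:

    ns = ["6635373661623363", "3036363436626631",
          "3531303961346162", "6534323935336235"]
    tbl = ["69746e656469796d", "72656966"]
    print(decode_words(ns))   # -> c3ba675f1fb64660ba4a90155b35924e
    print(decode_words(tbl))  # -> myidentifier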
jonathon
|
|
cc399ebb-36c2-40d4-90fa-ebfe29305ea1
|
2025/06/15 06:43:48
|
2025/06/15 06:43:48
|
2025/06/15 06:43:48
|
50 ms
|
144 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
c14d354d-1dd0-45a8-9612-9f0604b08f7a
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
2025/06/13 23:29:58
|
51 ms
|
154 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
f9358d8e-5868-443d-af86-f9f24842f4a8
|
2025/06/13 22:51:52
|
2025/06/13 22:51:52
|
2025/06/13 22:51:52
|
51 ms
|
329 ms
|
SHOW TABLES IN `default`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#2961, tableName#2962, isTemporary#2963]
+- 'UnresolvedNamespace [default]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#2961, tableName#2962, isTemporary#2963]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Optimized Logical Plan ==
CommandResult [namespace#2961, tableName#2962, isTemporary#2963], ShowTables [namespace#2961, tableName#2962, isTemporary#2963], V2SessionCatalog(spark_catalog), [default], [[0,2000000007,2800000008,0,746c7561666564,7374726f70726961], [0,2000000007,2800000008,0,746c7561666564,73657079746c6c61], [0,2000000007,2800000009,0,746c7561666564,73657079746c6c61,32], [0,2000000007,280000000d,0,746c7561666564,73657079746c6c61,6369736162], [0,2000000007,280000000e,0,746c7561666564,73657079746c6c61,326369736162], [0,2000000007,2800000009,0,746c7561666564,7079747961727261,65], [0,2000000007,280000000a,0,746c7561666564,7974746e69676962,6570], [0,2000000007,280000000a,0,746c7561666564,79747972616e6962,6570], [0,2000000007,2800000008,0,746c7561666564,6570797465746164], [0,2000000007,280000000b,0,746c7561666564,746c616d69636564,657079], [0,2000000007,2800000009,0,746c7561666564,70797474616f6c66,65], [0,2000000007,2800000008,0,746c7561666564,736570797470616d], [0,2000000007,280000000b,0,746c7561666564,646978617463796e,617461], [0,2000000007,280000000f,0,746c7561666564,746978617463796e,61746164706972], [0,2000000007,2800000010,0,746c7561666564,7365745f656d6f73,32656c6261745f74], [0,2000000007,280000000a,0,746c7561666564,7974746375727473,6570], [0,2000000007,280000000e,0,746c7561666564,656e6f7a69786174,70756b6f6f6c], [0,2000000007,280000000c,0,746c7561666564,74676e696b726f77,73657079], [0,2000000007,2800000016,0,746c7561666564,74676e696b726f77,6874697773657079,7265626d756e]]
+- ShowTables [namespace#2961, tableName#2962, isTemporary#2963]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [default]
== Physical Plan ==
CommandResult [namespace#2961, tableName#2962, isTemporary#2963]
+- ShowTables [namespace#2961, tableName#2962, isTemporary#2963], V2SessionCatalog(spark_catalog), [default]
|
jonathon
|
|
3ab9b884-54e8-4b6f-9c4b-ab186038af66
|
2025/06/14 01:23:34
|
2025/06/14 01:23:34
|
2025/06/14 01:23:34
|
53 ms
|
206 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathon
|
|
6e627648-02c6-4e67-a474-ba16f6865bfb
|
2025/06/14 06:31:58
|
2025/06/14 06:31:58
|
2025/06/14 06:31:58
|
53 ms
|
147 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
dc6650c6-d345-46fa-808d-9b59408d5af6
|
2025/06/13 23:34:50
|
2025/06/13 23:34:50
|
2025/06/13 23:34:50
|
53 ms
|
328 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#4058, tableName#4059, isTemporary#4060]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#4058, tableName#4059, isTemporary#4060]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#4058, tableName#4059, isTemporary#4060], ShowTables [namespace#4058, tableName#4059, isTemporary#4060], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#4058, tableName#4059, isTemporary#4060]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#4058, tableName#4059, isTemporary#4060]
+- ShowTables [namespace#4058, tableName#4059, isTemporary#4060], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
jonathan
|
|
fea01ec8-cffd-4fd3-8768-91a727bf47fe
|
2025/06/13 23:21:10
|
2025/06/13 23:21:11
|
2025/06/13 23:21:11
|
53 ms
|
397 ms
|
SHOW TABLES IN `c3ba675f1fb64660ba4a90155b35924e`
|
CLOSED
|
== Parsed Logical Plan ==
'ShowTables [namespace#3380, tableName#3381, isTemporary#3382]
+- 'UnresolvedNamespace [c3ba675f1fb64660ba4a90155b35924e]
== Analyzed Logical Plan ==
namespace: string, tableName: string, isTemporary: boolean
ShowTables [namespace#3380, tableName#3381, isTemporary#3382]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Optimized Logical Plan ==
CommandResult [namespace#3380, tableName#3381, isTemporary#3382], ShowTables [namespace#3380, tableName#3381, isTemporary#3382], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e], [[0,2000000020,400000000c,0,6635373661623363,3036363436626631,3531303961346162,6534323935336235,69746e656469796d,72656966]]
+- ShowTables [namespace#3380, tableName#3381, isTemporary#3382]
+- ResolvedNamespace V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
== Physical Plan ==
CommandResult [namespace#3380, tableName#3381, isTemporary#3382]
+- ShowTables [namespace#3380, tableName#3381, isTemporary#3382], V2SessionCatalog(spark_catalog), [c3ba675f1fb64660ba4a90155b35924e]
|
jonathan
|
|
de84919a-5ae5-4156-acc5-4af4137969e9
|
2025/06/13 23:24:48
|
2025/06/13 23:24:48
|
2025/06/13 23:24:48
|
54 ms
|
343 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3647, data_type#3648, comment#3649]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3647, data_type#3648, comment#3649]
== Optimized Logical Plan ==
CommandResult [col_name#3647, data_type#3648, comment#3649], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3647, data_type#3648, comment#3649]
== Physical Plan ==
CommandResult [col_name#3647, data_type#3648, comment#3649]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3647, data_type#3648, comment#3649]
|
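The DESCRIBE plans above show DescribeTableCommand executed eagerly, with the column/type/comment rows embedded in CommandResult. Two client-side sketches of the same lookup (assuming the `spark` session and the `default.alltypes` table from the log):

    # SQL form, mirroring the statement in the record above.
    spark.sql("DESCRIBE TABLE `default`.`alltypes`").show(truncate=False)

    # Schema view of the same columns via the DataFrame API.
    spark.table("default.alltypes").printSchema()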
jonathan
|
|
2fe876c5-3a65-4db2-bf8a-7b713edf57cc
|
2025/06/13 23:27:00
|
2025/06/13 23:27:00
|
2025/06/13 23:27:00
|
55 ms
|
322 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3701, data_type#3702, comment#3703]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3701, data_type#3702, comment#3703]
== Optimized Logical Plan ==
CommandResult [col_name#3701, data_type#3702, comment#3703], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3701, data_type#3702, comment#3703]
== Physical Plan ==
CommandResult [col_name#3701, data_type#3702, comment#3703]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3701, data_type#3702, comment#3703]
|
jonathon
|
|
4c448d36-25b1-4d61-b0d9-616170b9ed52
|
2025/06/13 22:18:18
|
2025/06/13 22:18:18
|
2025/06/13 22:18:18
|
55 ms
|
150 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
94b17ac6-9bb4-4db3-8a74-8665ab43bdde
|
2025/06/13 23:34:48
|
2025/06/13 23:34:48
|
2025/06/13 23:34:48
|
55 ms
|
325 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3979, data_type#3980, comment#3981]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3979, data_type#3980, comment#3981]
== Optimized Logical Plan ==
CommandResult [col_name#3979, data_type#3980, comment#3981], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3979, data_type#3980, comment#3981]
== Physical Plan ==
CommandResult [col_name#3979, data_type#3980, comment#3981]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3979, data_type#3980, comment#3981]
|
jonathon
|
|
2d9802cf-8cba-4a98-97b3-b0a3f26b46b3
|
2025/06/14 01:46:18
|
2025/06/14 01:46:18
|
2025/06/14 01:46:19
|
59 ms
|
451 ms
|
Listing columns 'catalog : null, schemaPattern : default, tablePattern : airports, columnName : null'
|
CLOSED
|
|
jonathan
|
|
b9dd0169-a02e-4def-a8dd-397345129b11
|
2025/06/13 23:23:32
|
2025/06/13 23:23:32
|
2025/06/13 23:23:32
|
60 ms
|
327 ms
|
DESCRIBE TABLE `default`.`alltypes`
|
CLOSED
|
== Parsed Logical Plan ==
'DescribeRelation false, [col_name#3593, data_type#3594, comment#3595]
+- 'UnresolvedTableOrView [default, alltypes], DESCRIBE TABLE, true
== Analyzed Logical Plan ==
col_name: string, data_type: string, comment: string
DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3593, data_type#3594, comment#3595]
== Optimized Logical Plan ==
CommandResult [col_name#3593, data_type#3594, comment#3595], Execute DescribeTableCommand, [[STRING,string,null], [DOUBLE,double,null], [INTEGER,int,null], [BIGINT,bigint,null], [FLOAT,float,null], [DECIMAL,decimal(10,2),null], [NUMBER,decimal(10,2),null], [BOOLEAN,boolean,null], [DATE,date,null], [TIMESTAMP,timestamp,null], [DATETIME,timestamp,null], [BINARY,binary,null], [ARRAY,array<int>,null], [MAP,map<string,string>,null], [STRUCT,struct<field1:string,field2:int>,null], [VARCHAR,string,null], [CHAR,string,null]]
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3593, data_type#3594, comment#3595]
== Physical Plan ==
CommandResult [col_name#3593, data_type#3594, comment#3595]
+- Execute DescribeTableCommand
+- DescribeTableCommand `spark_catalog`.`default`.`alltypes`, false, [col_name#3593, data_type#3594, comment#3595]
|