I have a partitioned table that I want to add some data to. I want to use dynamic partitioning, but I get this error:
org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:168)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:263)

I have already set
hive.exec.dynamic.partition.mode = nonstrict
to nonstrict and restarted Hive, but when I re-run the spark-shell job I still get the error. Should I be setting it somewhere else, in the Hive config?
Here is the command:
df2.write.mode("append").partitionBy("p_date", "p_store_id").saveAsTable("TLD.ticket_payment_testinsert")

df2 is a dataframe with a bunch of csv data read into it.
I tried setting it in my spark-shell command:
spark-shell --master yarn-client --packages com.databricks:spark-csv_2.11:1.4.0 --num-executors 4 --executor-cores 5 --executor-memory 8G --queue hadoop-capq --conf "hive.exec.dynamic.partition.mode=nonstrict"
but I get this warning:
Warning: Ignoring non-spark config property: hive.exec.dynamic.partition.mode=nonstrict
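If you do want to keep the setting on the spark-shell command line, one possible workaround is to forward it as a Hadoop/Hive property. This is a sketch under the assumption that you are on a Spark version (2.x or later) that copies properties prefixed with `spark.hadoop.` into the Hadoop configuration instead of dropping them; it is not something from the question itself:

```shell
# Sketch, assuming Spark 2.x+: a "spark.hadoop." prefix forwards the
# property to the Hadoop/Hive configuration rather than having it
# rejected as a "non-spark" property.
spark-shell \
  --master yarn-client \
  --packages com.databricks:spark-csv_2.11:1.4.0 \
  --conf spark.hadoop.hive.exec.dynamic.partition=true \
  --conf spark.hadoop.hive.exec.dynamic.partition.mode=nonstrict
```

Whether this takes effect depends on your Spark version; setting the property on the Hive context after startup, as in the answers below, is the more reliable route.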
Try this:

hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
I ran into a similar problem and solved it with the method above. Thanks @peyman!
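The same two settings can also be applied as Hive SET statements if you prefer to stay in SQL. This is a minimal sketch assuming the Spark 1.x hiveContext available in spark-shell, as in the question (on Spark 2.x the same calls work on a Hive-enabled SparkSession via spark.sql):

```scala
// Same fix as the setConf calls above, expressed as Hive SET statements.
// "hiveContext" and "df2" are assumed to already exist in the spark-shell
// session, as in the question.
hiveContext.sql("SET hive.exec.dynamic.partition=true")
hiveContext.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

// The original write should then succeed with no static partition column:
df2.write.mode("append")
  .partitionBy("p_date", "p_store_id")
  .saveAsTable("TLD.ticket_payment_testinsert")
```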
class JavaSparkSessionSingletonUtil {
    private static transient SparkSession instance = null;

    public static SparkSession getInstance(String appName) {
        SparkSession.clearDefaultSession();
        if (instance == null) {
            instance = SparkSession.builder()
                    .appName(appName)
                    .config("hive.exec.dynamic.partition", "true")
                    .config("hive.exec.dynamic.partition.mode", "nonstrict")
                    // .config("spark.sql.warehouse.dir", new File("spark-warehouse").getAbsolutePath())
                    // .config("spark.driver.allowMultipleContexts", "true")
                    .enableHiveSupport()
                    .getOrCreate();
        }
        return instance;
    }
}