
GET_COLUMNS fails with Unexpected character ('t' (code 116)): was expecting comma to separate Object entries - how to fix?

yzaehringer
New Contributor

I just run cursor.columns() via the Python client and I get back an "org.apache.hive.service.cli.HiveSQLException" response. There is also a long stack trace; I'll just paste the last bit, since it is probably the illuminating part:
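For context, a minimal sketch of how such a call is typically made with the databricks-sql-connector Python client; the hostname, HTTP path, and token below are placeholders, not values from this post:

    from databricks import sql

    # Placeholder connection parameters; replace with your own workspace values.
    connection = sql.connect(
        server_hostname="<workspace-host>.cloud.databricks.com",
        http_path="/sql/1.0/warehouses/<warehouse-id>",
        access_token="<personal-access-token>",
    )

    with connection.cursor() as cursor:
        # Issues a Thrift GetColumns (TGetColumnsReq) request; this is the call
        # that comes back with the HiveSQLException described above.
        cursor.columns()
        print(cursor.fetchall())

    connection.close()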

org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$:hiveOperatingError:HiveThriftServerErrors.scala:66
org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors:hiveOperatingError:HiveThriftServerErrors.scala:60
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation$$anonfun$onError$1:applyOrElse:SparkAsyncOperation.scala:196
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation$$anonfun$onError$1:applyOrElse:SparkAsyncOperation.scala:181
scala.runtime.AbstractPartialFunction:apply:AbstractPartialFunction.scala:38
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:$anonfun$wrappedExecute$1:SparkAsyncOperation.scala:169
scala.runtime.java8.JFunction0$mcV$sp:apply:JFunction0$mcV$sp.java:23
com.databricks.unity.EmptyHandle$:runWith:UCSHandle.scala:103
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:org$apache$spark$sql$hive$thriftserver$SparkAsyncOperation$$wrappedExecute:SparkAsyncOperation.scala:144
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:runInternal:SparkAsyncOperation.scala:79
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:runInternal$:SparkAsyncOperation.scala:44
org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation:runInternal:SparkGetColumnsOperation.scala:54
org.apache.hive.service.cli.operation.Operation:run:Operation.java:383
org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation:org$apache$spark$sql$hive$thriftserver$SparkOperation$$super$run:SparkGetColumnsOperation.scala:54
org.apache.spark.sql.hive.thriftserver.SparkOperation:run:SparkOperation.scala:113
org.apache.spark.sql.hive.thriftserver.SparkOperation:run$:SparkOperation.scala:111
org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation:run:SparkGetColumnsOperation.scala:54
org.apache.hive.service.cli.session.HiveSessionImpl:getColumns:HiveSessionImpl.java:704
org.apache.hive.service.cli.CLIService:getColumns:CLIService.java:411
org.apache.hive.service.cli.thrift.OSSTCLIServiceIface:GetColumns:ThriftCLIService.java:1159
com.databricks.sql.hive.thriftserver.thrift.DelegatingThriftHandler:GetColumns:DelegatingThriftHandler.scala:81

The request looks like this:

TGetColumnsReq(sessionHandle=TSessionHandle(sessionId=THandleIdentifier(…), serverProtocolVersion=None), catalogName=None, schemaName=None, tableName=None, columnName=None, getDirectResults=TSparkGetDirectResults(maxRows=100000, maxBytes=10485760), runAsync=False, operationId=None, sessionConf=None)
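Note that every filter in the failing request (catalogName, schemaName, tableName, columnName) is None, so the server has to return metadata for every column it can see. As a diagnostic step only (an assumption, not a confirmed fix from this thread), the same call can be narrowed to a single schema or table to see whether a specific object triggers the failure:

    # Hypothetical catalog/schema/table names, purely for illustration.
    cursor.columns(
        catalog_name="main",
        schema_name="my_schema",
        table_name="my_table",
    )
    print(cursor.fetchall())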

To summarize:

  • Databricks receives the Thrift request
  • Databricks forwards it to the Hive Thrift layer
  • The Hive layer fails with a SQL error

Has anyone run into this before? What is the solution here?

1 REPLY 1

Aviral-Bhardwaj
Esteemed Contributor III

This could be a package issue or a runtime issue. Try changing them and see if that helps.
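As a hedged illustration of the "package or runtime" suggestion above (the reply does not name any specific versions), one way to check which client package version is installed before changing it:

    # Print the installed connector version (illustrative; no particular
    # version is named in this thread).
    from importlib.metadata import version
    print(version("databricks-sql-connector"))

    # Upgrading, if needed: pip install --upgrade databricks-sql-connector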

