I am new to Databricks.
I want to create an external table in Databricks with the following format:
CREATE EXTERNAL TABLE Salesforce.Account
(
Id string,
IsDeleted long,
Name string,
Type string,
RecordTypeId string,
ParentId string,
ShippingStreet string,
ShippingCity string,
ShippingState string,
ShippingPostalCode string,
ShippingCountry string,
ShippingStateCode string,
ShippingCountryCode string,
Phone string,
Fax string,
AccountNumber string,
Sic string,
Industry string,
AnnualRevenue float,
NumberOfEmployees float,
Ownership string,
Description string,
Rating string,
CurrencyIsoCode string,
OwnerId string,
CreatedDate long,
CreatedById string,
IsPartner long,
AccountSource string,
SicDesc string,
IsGlobalKeyAccount__c long,
Rating__c string,
AccountNumberAuto__c string,
AccountStatus__c string,
BUID__c string,
CompanyName__c string,
CreditLimit__c float,
CreditOnHold__c long,
CustomerClassification__c string,
DUNSNumber__c string,
DepartmentLabel__c string,
DepartmentName__c string,
DepartmentType__c string,
DiscountGroup__c string,
DoNotAllowBulkEmails__c long,
Email__c string,
EnglishCompanyName__c string,
Interest__c string,
Language__c string,
LastCheckedBy__c string,
LastCheckedOn__c float,
MarketOrganization__c string,
OtherPhone__c string,
PaymentTerms__c string,
Price_Book__c string,
RecordType__c string,
RelatedToGlobalKeyAccount__c long,
RequestDeletion__c long,
RequestEdit__c long,
Segment__c string,
ShippingCountry__c string,
Subsegment1__c string,
Subsegment2__c string,
TermsOfDelivery__c string,
Status__c string,
SynchronizeBillingAddress__c long,
Target_Account__c long,
TravelZone__c string,
DynamicsAutoNumber__c string,
Goal__c string,
OriginOfData__c string,
CustomDUNS__c string,
TAP_Description__c string
)
STORED AS PARQUET
LOCATION 'abfss://Storagename@containername.dfs.core.windows.net/Bronze/Salesforce/Account/*'
Error in SQL statement: AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.contracts.exceptions.KeyProviderException Failure to initialize configuration)
com.databricks.backend.common.rpc.SparkDriverExceptions$SQLExecutionException: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.contracts.exceptions.KeyProviderException Failure to initialize configuration)
at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$2(HiveExternalCatalog.scala:163)
at org.apache.spark.sql.hive.HiveExternalCatalog.maybeSynchronized(HiveExternalCatalog.scala:115)
at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$withClient$1(HiveExternalCatalog.scala:153)
at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:377)
at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:363)
at com.databricks.spark.util.SparkDatabricksProgressReporter$.withStatusCode(ProgressReporter.scala:34)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:152)
at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:335)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createTable(ExternalCatalogWithListener.scala:102)
at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.createTable(SessionCatalog.scala:875)
at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.createTableInternal(ManagedCatalogSessionCatalog.scala:728)
at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.createTable(ManagedCatalogSessionCatalog.scala:689)
at com.databricks.sql.DatabricksSessionCatalog.createTable(DatabricksSessionCatalog.scala:205)
at org.apache.spark.sql.execution.command.CreateTableCommand.run(tables.scala:186)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:78)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:89)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:202)
Hi @Shanmugavel Chandrakasu, I have the same issue. Could you please share the Spark configuration settings you used at the cluster level?
Here is the guide on how to connect to the storage account: https://learn.microsoft.com/en-us/azure/databricks/external-data/azure-storage.
Setting it at the cluster level: https://docs.databricks.com/clusters/configure.html#spark-configuration.
"spark.hadoop.fs.azure.account.key.<storage-account-name>.dfs.core.windows.net": "<storage-account-key>"
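To make this concrete, here is a minimal notebook-level sketch of the same idea, assuming access-key authentication; the secret scope name "kv-scope" and key name "storage-key" are placeholders, not values from this thread. Because CREATE EXTERNAL TABLE also makes the Hive metastore client validate the LOCATION, the cluster-level spark.hadoop.* setting above is usually what actually resolves the KeyProviderException; the session-level call below is mainly useful for verifying access from a notebook first.

# Minimal PySpark sketch for a Databricks notebook; `spark` and `dbutils` are
# provided by the notebook runtime. All angle-bracket names and the secret
# scope/key ("kv-scope", "storage-key") are placeholder assumptions.
storage_account = "<storage-account-name>"

# Read the account key from a Databricks secret scope instead of hard-coding it.
account_key = dbutils.secrets.get(scope="kv-scope", key="storage-key")

# Session-level equivalent of the cluster setting above (without the
# "spark.hadoop." prefix, which is only needed in the cluster Spark config).
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)

# Sanity check: list the Bronze path before retrying CREATE EXTERNAL TABLE.
files = dbutils.fs.ls(
    f"abfss://<container-name>@{storage_account}.dfs.core.windows.net/Bronze/Salesforce/Account/"
)
for f in files:
    print(f.path)

One thing worth double-checking against the LOCATION in the original statement: the abfss URI format puts the container first, abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path>.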