I want to save my Delta tables in my Databricks workspace.
When I call saveAsTable, I get this error message in Azure Databricks: AnalysisException: Database 'bf' not found.
There is no database named 'bf' among my databases.
Here is my full code:
    import os
    from pyspark.sql.functions import lit
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   FloatType, IntegerType)

    # List, rename, and automatically save every file as a Delta Lake table.
    # The data is written to a storage mount point (/mnt) outside the DBFS root.
    path_30min = '/dbfs/mnt/finance/FirstRate30min'
    filename_lists_30min = os.listdir(path_30min)
    df_30min_ = {}
    delta_30min = {}

    for filename_30min in os.listdir(path_30min):
        # Split the file name to get the ticker symbol
        rawname_30min = filename_30min.split("_")[0]
        name_30min = rawname_30min.split("-")[0]

        # Create the column header names
        temp_30min = StructType([
            StructField(name_30min + "_dateTime", StringType(), True),
            StructField(name_30min + "_adjOpen", FloatType(), True),
            StructField(name_30min + "_adjHigh", FloatType(), True),
            StructField(name_30min + "_adjLow", FloatType(), True),
            StructField(name_30min + "_adjClose", FloatType(), True),
            StructField(name_30min + "_adjVolume", IntegerType(), True),
        ])

        # Read each CSV file into a DataFrame with the schema above
        temp_df_30min = (spark.read.format("csv")
                         .option("header", "false")
                         .schema(temp_30min)
                         .load("/mnt/finance/FirstRate30min/" + filename_30min)
                         .withColumn("stock", lit(name_30min)))

        # Keep each DataFrame under its ticker name
        df_30min_[name_30min] = temp_df_30min

        # Name each table
        table_name_30min = name_30min + "_30min_delta"

        # Create a Delta Lake table for each DataFrame
        (df_30min_[name_30min].write.format("delta")
         .mode("overwrite")
         .option("overwriteSchema", "True")
         .saveAsTable(table_name_30min))
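The table-name derivation in the loop above can be traced in isolation as plain Python (the filename below is a hypothetical example, since the actual file listing is not shown):

```python
# Hypothetical example filename from the mounted directory
filename_30min = "AAPL_30min.csv"

# Same splitting logic as in the loop above
rawname_30min = filename_30min.split("_")[0]    # "AAPL"
name_30min = rawname_30min.split("-")[0]        # "AAPL"
table_name_30min = name_30min + "_30min_delta"  # "AAPL_30min_delta"

print(table_name_30min)  # AAPL_30min_delta
```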
I tried to debug, and only this step fails:

    (df_30min_[name_30min].write.format("delta")
     .mode("overwrite")
     .option("overwriteSchema", "True")
     .saveAsTable(table_name_30min))
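One thing I suspect (an assumption on my part, since I have not confirmed the file names): if a ticker symbol contains a dot, e.g. BF.B, the derived table name becomes "BF.B_30min_delta", and Spark parses the part before the dot as a database qualifier, which would explain an error about a database named 'bf'. A minimal sketch of a sanitizer (sanitize_table_name is a hypothetical helper, not part of my code):

```python
import re

def sanitize_table_name(raw: str) -> str:
    """Hypothetical helper: replace characters that are not valid in an
    unquoted Spark table identifier (e.g. '.', which Spark would parse
    as a database qualifier) with underscores."""
    return re.sub(r"[^A-Za-z0-9_]", "_", raw)

# A ticker containing a dot would otherwise produce "BF.B_30min_delta"
print(sanitize_table_name("BF.B") + "_30min_delta")  # BF_B_30min_delta
```

I would then build the table name as sanitize_table_name(name_30min) + "_30min_delta" before calling saveAsTable.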