raise NotImplementedError("sparkContext() is not implemented.")
NotImplementedError: sparkContext() is not implemented.
Here is the content from the error log:
Fri Jun 9 05:57:38 2023 Connection to spark from PID 1377
Fri Jun 9 05:57:38 2023 Initialized gateway on port 44015
Fri Jun 9 05:57:38 2023 Connected to spark.
Tried to attach usage logger `pyspark.databricks.pandas.usage_logger`, but an exception was raised: <property object at 0x7fdd5dc448b0> is not a callable object
Fri Jun 9 06:04:15 2023 Connection to spark from PID 1629
Fri Jun 9 06:04:15 2023 Initialized gateway on port 37083
Fri Jun 9 06:04:16 2023 Connected to spark.
Tried to attach usage logger `pyspark.databricks.pandas.usage_logger`, but an exception was raised: <property object at 0x7f6e23727770> is not a callable object
Fri Jun 9 06:13:51 2023 Connection to spark from PID 1924
Fri Jun 9 06:13:51 2023 Initialized gateway on port 33139
Fri Jun 9 06:13:51 2023 Connected to spark.
Hi,
I passed the Databricks Certified Associate Developer for Apache Spark 3.0 exam with 85% on 10 Jun 2023. I received an email mentioning the badge and credentials, but no certificate was attached. I have also raised a ticket: #00334153.
Please send me the certificate by email.
My piece of code:
print("First approach: ", df["Purchase Address"][0])
print("Second approach: ", df.loc[0, "Purchase Address"])
These two lines return the same value. I find the first version more comfortable to use. Does pandas recommend a particular way to access the content?
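For reading a single value both forms work, but the pandas documentation recommends the `.loc`/`.at` accessors: chained indexing like `df["col"][0]` performs two separate lookups and, when used for *assignment*, can silently write to a temporary copy (the `SettingWithCopyWarning` case). A small sketch with made-up sample data:

```python
import pandas as pd

# Sample data (illustrative address, not from the original post)
df = pd.DataFrame({"Purchase Address": ["917 1st St, Dallas, TX 75001"]})

# Chained indexing: first selects the column, then indexes the Series.
# Fine for reading, but risky for assignment (may operate on a copy).
first = df["Purchase Address"][0]

# Label-based single lookup; the form the pandas docs recommend.
second = df.loc[0, "Purchase Address"]

# For one scalar value, .at is the fastest label-based accessor.
third = df.at[0, "Purchase Address"]

print(first == second == third)  # -> True
```

In short: prefer `df.loc[row, col]` (or `df.at[row, col]` for scalars), and reserve `df[col]` for selecting whole columns.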
Can anyone help me write this script using PySpark in Databricks? I have to use Azure cloud services for this.
It keeps running for almost 30 minutes and still shows 'Running command'.
I have restarted the cluster many times and tried changing the cluster runtime as well.
Please note that I'm using an Azure free subscription plan.
# Get the latitude and longitude
latitude = 37.7716736
longitude = -122.4485852
# Get the resolution
resolution = 7
# Get the H3 hex ID
h3_hex_id = grid_longlatascellid(lit(latitude), lit(longitude), lit(resolution)).hex
# Print the H3 hex ID
print(h3_hex_id)
Column<'grid_longlatascellid(CAST(37.7716736 AS DOUBLE), CAST(-122.4485852 AS DOUBLE), 7)[hex]'>
How do I see the actual hex id in the code above?
According to the docs, the `h3 hex id` returned by `grid_longlatascellid` looks different from what is returned by the `h3.geo_to_h3` method.
h3.geo_to_h3(float(latitude), float(longitude), 7)
'872830829ffffff'
df = spark.createDataFrame([{'lon': 30., 'lat': 10.}])
df.select(grid_longlatascellid('lon', 'lat', lit(10))).show(1, False)
+----------------------------------+
|grid_longlatascellid(lon, lat, 10)|
+----------------------------------+
|623385352048508927                |
+----------------------------------+
How do I obtain the `h3 hex id` using the Databricks Mosaic library? I have the following imports and configuration:
import h3
from mosaic import enable_mosaic
enable_mosaic(spark, dbutils)
from mosaic import *
spark.conf.set("spark.databricks.labs.mosaic.index.system", "H3")
Or is there any other workaround to achieve this?
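One observation that may help: the H3 string form is just the hexadecimal rendering of the same 64-bit H3 index (that is how the H3 spec defines it), and the Mosaic output above is that index as a plain integer. If that assumption holds for your configuration, converting is ordinary base-16 formatting, no Mosaic call needed:

```python
# Mosaic's output above shows the H3 cell ID as a 64-bit integer.
# Per the H3 spec, the "hex id" string (what h3.geo_to_h3 returns)
# is simply that integer printed in lowercase hexadecimal.
cell_int = 623385352048508927  # value from the grid_longlatascellid output above

cell_hex = format(cell_int, "x")
print(cell_hex)  # -> 8a6b5acc3087fff  (a resolution-10 cell, matching lit(10))

# Round-trip back to the integer form
assert int(cell_hex, 16) == cell_int
```

On a Spark DataFrame column, the same conversion should be expressible with built-in SQL functions, e.g. `lower(conv(cast(cell_id as string), 10, 16))`, where `cell_id` is a hypothetical column name holding the Mosaic integer; check the Mosaic docs in case your version also ships a dedicated long-to-string grid function.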
@Hubert Dudek ,@Werner Stinckens , @Aviral Bhardwaj , @Omkar G , @Taha Hussain , @Adam Pavlacka , @Ananth Arunachalam , @Vidula Khanna , @Jose Alfonso , @Kaniz Fatma
So we have a dev environment in Databricks and want to migrate it to prod.
I need to go over every single table, schema, notebook, and artifact in Databricks and make sure nothing is hard-coded, for example, and that nothing would compromise the prod environment.
Do you have any checklist or resource to help in this regard? Maybe a checklist of best practices and what to look for. I want to prepare a diagnosis of the current status of the project.
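One item from such a checklist (the "nothing is hard-coded" check) is easy to script once the notebooks are exported as source files. A minimal sketch, assuming the export lives in a local `exported_notebooks/` directory; the regex patterns are illustrative guesses and should be extended for your workspace:

```python
import re
from pathlib import Path

# Patterns that often indicate hard-coded, environment-specific values.
# These are illustrative examples; extend them for your own workspace.
PATTERNS = {
    "workspace URL": re.compile(r"adb-\d+\.\d+\.azuredatabricks\.net"),
    "access token": re.compile(r"dapi[0-9a-f]{32}"),
    "dev path": re.compile(r"/mnt/dev[^\s\"']*"),
}

def scan(source_dir):
    """Yield (file, line_no, label, line) for each suspicious match."""
    for path in Path(source_dir).rglob("*.py"):
        for no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pat in PATTERNS.items():
                if pat.search(line):
                    yield (str(path), no, label, line.strip())

if __name__ == "__main__":
    # "exported_notebooks" is an assumed directory name; point this at
    # wherever you exported the workspace source.
    for hit in scan("exported_notebooks"):
        print(hit)
```

A scan like this only catches textual patterns; it complements, not replaces, reviewing cluster configs, secret scopes, and job definitions by hand.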
thank you all!
thanks and regards
NB