Tutorial: Work with PySpark DataFrames on Databricks

This article shows you how to load and transform data using the Apache Spark Python (PySpark) DataFrame API in Databricks.

See also the Apache Spark PySpark API reference.

What is a DataFrame?

A DataFrame is a two-dimensional labeled data structure with columns of potentially different types. You can think of a DataFrame like a spreadsheet, a SQL table, or a dictionary of series objects. Apache Spark DataFrames provide a rich set of functions (select columns, filter, join, aggregate) that allow you to solve common data analysis problems efficiently.

Apache Spark DataFrames are an abstraction built on top of Resilient Distributed Datasets (RDDs). Spark DataFrames and Spark SQL use a unified planning and optimization engine, allowing you to get nearly identical performance across all supported languages on Databricks (Python, SQL, Scala, and R).

Create a DataFrame with Python

Most Apache Spark queries return a DataFrame. This includes reading from a table, loading data from files, and operations that transform data.

You can also create a Spark DataFrame from a list or a pandas DataFrame, such as in the following example:

import pandas as pd

data = [[1, "Elia"], [2, "Teo"], [3, "Fang"]]
pdf = pd.DataFrame(data, columns=["id", "name"])

df1 = spark.createDataFrame(pdf)
df2 = spark.createDataFrame(data, schema="id LONG, name STRING")

Read a table into a DataFrame

Databricks uses Delta Lake for all tables by default. You can easily load tables to DataFrames, such as in the following example:

spark.read.table("..")
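The ".." above is a placeholder for the table to load. For example, assuming a hypothetical table named main.default.people exists in your workspace (substitute your own catalog, schema, and table names), the call might look like this:

# Hypothetical three-level table name; replace with your own catalog.schema.table.
people_df = spark.read.table("main.default.people")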

Load data into a DataFrame from files

You can load data from many supported file formats. The following example uses a dataset available in the /databricks-datasets directory, which is accessible from most workspaces. See Sample datasets.

df = (spark.read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/databricks-datasets/samples/population-vs-price/data_geo.csv")
)

Assign transformation steps to a DataFrame

The results of most Spark transformations return a DataFrame. You can assign these results back to a DataFrame variable, similar to how you might use CTEs, temp views, or DataFrames in other systems.
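For example, using the df1 DataFrame created earlier, each intermediate result is just another DataFrame variable that you can build on (a minimal sketch; the variable names are illustrative):

# Each transformation returns a new DataFrame; nothing is computed until an action runs.
step1_df = df1.filter("id > 1")           # keep a subset of rows
step2_df = step1_df.select("id", "name")  # keep a subset of columns
final_df = step2_df.orderBy("name")       # build further on the previous step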

Combine DataFrames with join and union

DataFrames use standard SQL semantics for join operations. A join returns the combined results of two DataFrames based on the provided matching conditions and join type. The following example is an inner join, which is the default:

joined_df = df1.join(df2, how="inner", on="id")

You can add the rows of one DataFrame to another using the union operation, as in the following example:

unioned_df = df1.union(df2)
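Note that union() combines rows by column position, so both DataFrames should share the same schema. When the schemas match by name but not by order, unionByName() is a common alternative (a minimal sketch):

# unionByName matches columns by name rather than by position.
unioned_by_name_df = df1.unionByName(df2)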

Filter rows in a DataFrame

You can filter rows in a DataFrame using .filter() or .where(). There is no difference in performance or syntax, as seen in the following example:

filtered_df = df.filter("id > 1")
filtered_df = df.where("id > 1")

Use filtering to select a subset of rows to return or modify in a DataFrame.
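The condition can also be written as a column expression instead of a SQL string, which is useful when building filters programmatically (a minimal sketch using pyspark.sql.functions.col):

from pyspark.sql.functions import col

# Equivalent to df.filter("id > 1"), expressed with a column object.
filtered_df = df.filter(col("id") > 1)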

Select columns from a DataFrame

You can select columns by passing one or more column names to .select(), as in the following example:

select_df = df.select("id", "name")

You can combine select and filter queries to limit rows and columns returned.

subset_df = df.filter("id > 1").select("name")

View the DataFrame

To view this data in a tabular format, you can use the Databricks display() command, as in the following example:

display(df)
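Outside of a notebook, or when you only need a quick text preview, the standard PySpark methods also work (a minimal sketch):

# Print the schema and the first few rows as plain text.
df.printSchema()
df.show(5)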

Save a DataFrame to a table

Databricks uses Delta Lake for all tables by default. You can save the contents of a DataFrame to a table using the following syntax:

df.write.saveAsTable("")
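The empty string above is a placeholder for the table name. For example, assuming a hypothetical table named my_schema.my_table (substitute your own), and using mode("overwrite") to replace any existing contents:

# Hypothetical table name; mode("overwrite") replaces the table if it already exists.
df.write.mode("overwrite").saveAsTable("my_schema.my_table")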

Write a DataFrame to a collection of files

Most Spark applications work on large datasets in a distributed fashion, so Spark writes out a directory of files rather than a single file. Many data systems are configured to read these directories of files. Databricks recommends using tables over file paths for most applications.

The following example saves a directory of JSON files:

df.write.format("json").save("/tmp/json_data")
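To read these files back into a DataFrame, point a read at the same directory (mirroring the earlier CSV example; /tmp/json_data is just the path used above):

# Spark treats the directory of part files as a single dataset.
json_df = spark.read.format("json").load("/tmp/json_data")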

Run SQL queries in PySpark

Spark DataFrames provide a number of options to combine SQL with Python.

The selectExpr() method allows you to specify each column as a SQL expression, such as in the following example:

display(df.selectExpr("id", "upper(name) as big_name"))

You can import the expr() function from pyspark.sql.functions to use SQL syntax anywhere a column would be specified, as in the following example:

from pyspark.sql.functions import expr

display(df.select("id", expr("lower(name) as little_name")))

You can also usespark.sql()to run arbitrary SQL queries in the Python kernel, as in the following example:

query_df = spark.sql("SELECT * FROM ")

Because logic is executed in the Python kernel and all SQL queries are passed as strings, you can use Python formatting to parameterize SQL queries, as in the following example:

table_name = "my_table"

query_df = spark.sql(f"SELECT * FROM {table_name}")