Reading multiple CSV files in Spark and combining them into a single DataFrame

I am using the following code to read multiple CSV files, convert each one to a pandas DataFrame, and concatenate them into a single pandas DataFrame, which I finally convert back into a Spark DataFrame. I want to skip the pandas conversion step and end up with a Spark DataFrame directly.

File Paths



import pandas as pd
from pyspark.sql.utils import AnalysisException

pandas_dfs = []

for month in range(1, 3):
    for day in range(1, 31):
        for hour in range(0, 24):
            file_location = "//xxxxxx/abc/year=2021/month=" + str(month) + "/dayofmonth=" + str(day) + "/hour=" + str(hour) + "/*.csv"
            try:
                spark_df ="csv").option("header", "true").load(file_location)
                pandas_dfs.append(spark_df.toPandas())
            except AnalysisException:
                continue  # partition path does not exist, skip it

final_pandas_df = pd.concat(pandas_dfs)
df = spark.createDataFrame(final_pandas_df)


You can load all the files in a single call and filter on the partition columns. Spark discovers the `year`, `month`, `dayofmonth`, and `hour` columns from the directory names, and the filter is pushed down so only the matching partitions are read:

df = (
    .format("csv")
    .option("header", "true")
    .load("abfss://xxxxxx/abc/")
    .filter("year = 2021 AND month BETWEEN 1 AND 2 AND dayofmonth BETWEEN 1 AND 30 AND hour BETWEEN 0 AND 23")
)
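Alternatively, if you only want to touch the specific partitions, `.load()` also accepts a list of paths, so you can build all the globs up front and issue one read instead of one per hour. A sketch, with the base path and partition layout assumed from the question:

```python
# Build every partition glob in one pass (base path is the question's
# placeholder; adjust the ranges to the partitions you actually need).
base = "abfss://xxxxxx/abc/year=2021"
paths = [
    f"{base}/month={m}/dayofmonth={d}/hour={h}/*.csv"
    for m in range(1, 3)       # months 1-2
    for d in range(1, 31)      # days 1-30
    for h in range(0, 24)      # hours 0-23
]

# A single call over the whole list returns one Spark DataFrame:
# df ="csv").option("header", "true").load(paths)
```

Unlike the per-file loop, this never materializes the data in pandas, so it works for datasets larger than driver memory.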