How to pivot pyspark dataframe

My question is related to this one.

I have a PySpark DataFrame which is How many times come each ID to my site by day.

testdf = (
    sc.parallelize([
        ('A', 1, 3, 4), ('B', 3, 4, 6), ('C', 2, 6,8),
        ('E', 3, 2,5),
    ]).toDF(["id", "data1", "date2", "date3"])
)


+---+-----+-----+-----+
| id|data1|date2|date3|
+---+-----+-----+-----+
|  A|    1|    3|    4|
|  B|    3|    4|    6|
|  C|    2|    6|    8|
|  E|    3|    2|    5|
+---+-----+-----+-----+

I want to convert this Dataframe as following

+---+-----+-----+
| id|data |count|
+---+-----+-----+
|  A|date1|    1|
+---+-----+-----+
|  A|date2|    3|
+---+-----+-----+
|  A|date3|    4|
+---+-----+-----+
|  B|date1|    3|
+---+-----+-----+
|  B|date2|    4|
+---+-----+-----+
|  B|date3|    6|
+---+-----+-----+
|  C|date1|    2|
+---+-----+-----+
|  C|date2|    6|
+---+-----+-----+
|  C|date3|    8|
+---+-----+-----+
|  D|date1|    3|
+---+-----+-----+
|  D|date2|    2|
+---+-----+-----+
|  D|date3|    5|
+---+-----+-----+

I tried this using explode function like this. But It didn’t work.

expression = ""
cnt=0
for column in testdf.columns:
    if column != id:
        cnt +=1
        expression += f"{column},"
expression = f"explode(array({expression[:-1]}))"

testdf2 = testdf.selectExpr(site_key, expression)

Please let me know, How to convert it.

Thanks.

Answer

Try this

testdf = (
    sc.parallelize([
        ('A', 1, 3, 4), ('B', 3, 4, 6), ('C', 2, 6,8),
        ('E', 3, 2,5),
    ]).toDF(["id", "data1", "date2", "date3"])
)
testdf.show()

testdf.selectExpr("id", "stack(3, 'data1', data1, 'date2', date2, 'date3', date3) as (data, count)").show()