Update column values in a group based on one row in that group

I have a dataframe from source data that resembles the following:

In [1]: df = pd.DataFrame({'test_group': [1, 1, 1, 2, 2, 2, 3, 3, 3],
           'test_type': [np.nan, 'memory', np.nan, np.nan, 'visual',
                         np.nan, np.nan, 'auditory', np.nan]})
Out[1]:
   test_group test_type
0           1       NaN
1           1    memory
2           1       NaN
3           2       NaN
4           2    visual
5           2       NaN
6           3       NaN
7           3  auditory
8           3       NaN

test_group identifies the group of rows that make up one test. I need to replace the NaNs in column test_type in each test_group with that group's single non-NaN value, e.g. memory, visual, etc.

I’ve tried a variety of approaches, including isolating the “real” value in test_type, such as

In [4]: df.groupby('test_group')['test_type'].unique()
Out[4]:
test_group
1      [nan, memory]
2      [nan, visual]
3    [nan, auditory]

Easy enough, I can index into each row and pluck out the value I want. This seems to head in the right direction:

In [6]: df.groupby('test_group')['test_type'].unique().apply(lambda x: x[1])
Out[6]:
test_group
1      memory
2      visual
3    auditory
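
As an aside, indexing with x[1] assumes the NaN happens to come first in each group: unique() preserves order of appearance. A sketch of an order-independent pluck, using the same frame:

import numpy as np
import pandas as pd

df = pd.DataFrame({'test_group': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'test_type': [np.nan, 'memory', np.nan, np.nan, 'visual',
                                 np.nan, np.nan, 'auditory', np.nan]})

# unique() returns values in order of appearance, so x[1] would break if a
# group's non-NaN value came first; masking out NaN does not care about order.
vals = df.groupby('test_group')['test_type'].unique().apply(
    lambda x: x[~pd.isna(x)][0])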

I tried this among many other things but it doesn’t quite work (note: apply and transform give the same result):

In [15]: grp = df.groupby('test_group')
In [16]: df['test_type'] = grp['test_type'].unique().transform(lambda x: x[1])

In [17]: df
Out[17]:
   test_group test_type
0           1       NaN
1           1    memory
2           1    visual
3           2  auditory
4           2       NaN
5           2       NaN
6           3       NaN
7           3       NaN
8           3       NaN

I’m sure if I looped it I’d be done with things, but loops are too slow as the data set is millions of records per file.
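
For what it’s worth, the attempt above extracts the right per-group values but puts them in the wrong rows: grp['test_type'].unique().transform(lambda x: x[1]) yields a Series indexed by test_group (1, 2, 3), and assigning it to df aligns on the frame's row index, so only rows 1–3 get filled. One loop-free sketch is to map that per-group Series back through the grouping column:

import numpy as np
import pandas as pd

df = pd.DataFrame({'test_group': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'test_type': [np.nan, 'memory', np.nan, np.nan, 'visual',
                                 np.nan, np.nan, 'auditory', np.nan]})

# per_group is indexed by test_group, not by row position, so a direct
# assignment misaligns; Series.map broadcasts it via the group labels.
per_group = df.groupby('test_group')['test_type'].unique().apply(lambda x: x[1])
df['test_type'] = df['test_group'].map(per_group)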

Answer

Under the assumption that there’s a unique non-NaN value per group, the following should satisfy your request. (Note the trailing .bfill() runs over the whole Series rather than per group, which is fine as long as no group is entirely NaN.)

>>> df['test_type'] = df.groupby('test_group')['test_type'].ffill().bfill() 
>>> df
   test_group test_type
0           1    memory
1           1    memory
2           1    memory
3           2    visual
4           2    visual
5           2    visual
6           3  auditory
7           3  auditory
8           3  auditory
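
Under the same assumption, transform('first') should be an equivalent one-liner: it takes each group's first non-NaN value and broadcasts it back to the original rows.

import numpy as np
import pandas as pd

df = pd.DataFrame({'test_group': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'test_type': [np.nan, 'memory', np.nan, np.nan, 'visual',
                                 np.nan, np.nan, 'auditory', np.nan]})

# 'first' skips NaN, so each group's lone real value is broadcast to
# every row of that group.
df['test_type'] = df.groupby('test_group')['test_type'].transform('first')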

edit:

The original answer used

df.groupby('test_group')['test_type'].fillna(method='ffill').fillna(method='bfill') 

but according to schwim's timings, ffill/bfill is significantly faster (for some reason).