I’m trying to do a simple merge between two dataframes. These come from two different SQL tables, where the joining keys are strings:
>>> df1.col1.dtype
dtype('O')
>>> df2.col2.dtype
dtype('O')
I try to merge them using this:
>>> merge_res = pd.merge(df1, df2, left_on='col1', right_on='col2')
The result of the inner join is empty, which at first suggested that there might not be any entries in the intersection:
>>> merge_res.shape
(0, 19)
But when I try to match a single element, I see this really odd behavior.
# Pick a random element in the second dataframe
>>> df2.iloc[5, :].col2
'95498208100000'

# Manually look for it in the first dataframe
>>> df1[df1.col1 == '95498208100000']
0 rows × 19 columns
# Empty, which makes sense given the merge result above

# Now look for the same value as an integer
>>> df1[df1.col1 == 95498208100000]
1 rows × 19 columns
# FINDS THE ELEMENT!?!
So, the columns are defined with the ‘object’ dtype. Searching for the value as a string doesn’t yield any results, but searching for it as an integer does return a result, and I think this is the reason why the merge above doesn’t work.
Any ideas what’s going on?
It’s almost as though pandas converts df1.col1 to an integer just because it can, even though it should be treated as a string while matching.
(I tried to replicate this using sample dataframes, but for small examples, I don’t see this behavior. Any suggestions on how I can find a more descriptive example would be appreciated as well.)
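For what it’s worth, the behavior can be reproduced by hand-constructing a key column that mixes int and str values (hypothetical data; pandas stores such a column under the catch-all object dtype):

```python
import pandas as pd

# Hypothetical data: col1 mixes an int and a str, so the
# column dtype falls back to 'object'.
df1 = pd.DataFrame({'col1': [95498208100000, 'other'], 'a': [1, 2]})
df2 = pd.DataFrame({'col2': ['95498208100000'], 'b': [3]})

print(df1.col1.dtype)                              # object
print(df1[df1.col1 == '95498208100000'].shape[0])  # 0 -- str probe misses
print(df1[df1.col1 == 95498208100000].shape[0])    # 1 -- int probe hits
print(pd.merge(df1, df2, left_on='col1', right_on='col2').shape[0])  # 0
```

The int key never equals the str key, so both the boolean filter and the merge come up empty, exactly as in the question.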
The issue was that the object dtype is misleading. I thought it meant that all items were strings, but apparently, while reading the file, pandas was converting some elements to ints and leaving the rest as strings.
The solution was to make sure that every field is a string:
>>> df1.col1 = df1.col1.astype(str)
>>> df2.col2 = df2.col2.astype(str)
Then the merge works as expected.
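Continuing the hypothetical example from above, casting both key columns with astype(str) makes the merge find the match:

```python
import pandas as pd

df1 = pd.DataFrame({'col1': [95498208100000, 'other'], 'a': [1, 2]})
df2 = pd.DataFrame({'col2': ['95498208100000'], 'b': [3]})

# Before the cast the int key never matches the str key.
assert pd.merge(df1, df2, left_on='col1', right_on='col2').empty

# Force every key to str so comparisons are apples-to-apples.
df1.col1 = df1.col1.astype(str)
df2.col2 = df2.col2.astype(str)

merged = pd.merge(df1, df2, left_on='col1', right_on='col2')
print(merged.shape[0])  # 1
```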
(I wish there was a way of specifying the dtype up front, when the data is first read in.)
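For CSV input at least, such a knob exists: read_csv accepts a dtype argument that pins columns to str before any type guessing happens (a sketch with inline data; for SQL reads, casting after loading as above remains the fallback):

```python
import io
import pandas as pd

csv = "col1,a\n95498208100000,1\nother,2\n"

# dtype=str tells the parser to keep every column as strings,
# so no element is silently promoted to int.
df1 = pd.read_csv(io.StringIO(csv), dtype=str)
print(df1.col1.map(type).value_counts())  # everything is <class 'str'>
```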