How to handle an array of `4`s returned by redis-py `pipeline.execute()`?

I’m using redis-py to process bulk insertions into a Redis store.

I wrote the following very simple function:

import redis

def push_metadata_to_redis(list_of_nested_dictionaries):

    redis_client = redis.Redis(host='localhost', port=6379, db=0)
    redis_pipeline = redis_client.pipeline(transaction=False)

    for dictionary in list_of_nested_dictionaries:
        for k, inner_dict in dictionary.items():
            redis_pipeline.hset(k, mapping=inner_dict)

        result = redis_pipeline.execute(raise_on_error=True)
        print(result)

Basically it:

  1. takes as input a list of several thousand dictionaries
  2. for each of those dictionaries, pushes each key/value item into Redis (the values are themselves dictionaries, which is why I’m using hset)

Each dictionary contains ~10k elements, so redis_pipeline.execute(raise_on_error=True) runs once every ~10k hset calls.

I noticed that after a few minutes the result value stepped from arrays of 0s to arrays of 4s, and this worries me.

On the one hand I expect any error to be raised as an exception (raise_on_error=True), but on the other I can’t find any reference to this behaviour in the documentation and I don’t understand what it means.

So my questions are:

  1. Does result being an array of 4s mean that something went wrong in the redis_pipeline.execute(raise_on_error=True) operation?
  2. If so, how can I work out what went wrong?
  3. If not, what does it mean instead?

Thanks in advance.

Answer

When using the HSET command, the return value is the number of *new* fields that were added, not the number of fields written:

# check if key exists
127.0.0.1:6379> EXISTS key1
(integer) 0
# add a hash with 4 k/v pairs
127.0.0.1:6379> HSET key1 a 1 b 1 c 1 d 1
(integer) 4
# Set same fields for an existing hash
127.0.0.1:6379> HSET key1 a 1 b 1 c 1 d 1
(integer) 0
# Add an additional k/v pair
127.0.0.1:6379> HSET key1 a 1 b 1 c 1 d 1 e 1
(integer) 1
127.0.0.1:6379> HSET key1 f 1
(integer) 1


So nothing went wrong: a 4 means four new fields were created for that key, while a 0 means the hash already existed in the cache with all of the given fields, so no new fields were added.
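The same counting behaviour can be sketched with a small pure-Python model of HSET's return semantics (hset_model and the in-memory store dict are illustrative stand-ins, not the redis-py API):

```python
def hset_model(store, key, mapping):
    """Mimic Redis HSET on a dict of dicts.

    Returns the number of *newly created* fields, which is exactly
    what HSET (and therefore each entry in pipeline.execute()'s
    result list) reports.
    """
    hash_ = store.setdefault(key, {})
    # Count only fields that don't exist yet; overwrites count as 0.
    new_fields = sum(1 for field in mapping if field not in hash_)
    hash_.update(mapping)
    return new_fields

store = {}
print(hset_model(store, "key1", {"a": 1, "b": 1, "c": 1, "d": 1}))  # 4: all fields new
print(hset_model(store, "key1", {"a": 1, "b": 1, "c": 1, "d": 1}))  # 0: nothing new
print(hset_model(store, "key1", {"e": 1}))                          # 1: one new field
```

This mirrors your situation: the first time a key is written you get a 4 (four new fields), and on repeated writes of the same data you get 0s.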