Send parallel (proxy) requests and take the fastest result

I’m trying to optimize requests that go through an external proxy (rotator). Sometimes the response is fast, sometimes very slow. The idea is to send several identical requests for the same URL in parallel, take the fastest response, return the data, and end the function without waiting for the slower response(s).

There are a lot of tutorials online and SO questions about parallel requests in Python, but all of them deal with parallel requests for different URLs rather than duplicating the same request. In addition, the code in those examples waits until all requests are finished. I want to kill the parallel request logic (preferably in a clean way) as soon as the fastest response arrives.

My app runs on Python Flask with Gunicorn + Eventlet. I tried Eventlet green pools and Python concurrent futures; an Eventlet GreenPool seems like the better match, since the code will run in Gunicorn + Eventlet workers and in Celery workers that also use Eventlet.

I’m currently using Luminati Proxy Manager (LPM) to retry failed requests. An older version seemed to support parallel requests out of the box, but the current versions no longer support this feature. So I’m either trying to solve it with code in my Python app, or to add another service/tool (like LPM) that takes care of the parallel requests and picks the fastest one.

The proxy service Luminati.io provides a ‘high performance parallel requests’ code example (based on an Eventlet GreenPool). See ‘Original example’ below.

I edited the code to remove the proxy and logins, to make it more reproducible and to avoid unpredictable proxy response timings. I’m not getting any support from Luminati, so I’m trying to figure it out on SO. For this test I’m using a simulated slow 5-second response and a fast response from httpstat.us:

['http://httpstat.us/200?sleep=5000','http://httpstat.us/200']

In the edited code I added print statements with timings to see which response comes back first. There are two problems with this code. Sometimes I can see the fast response coming back first and it prints the response data (‘OK’), with the slow response following 5 seconds later. However, often it seems like the code waits until both responses are back (both timings are exactly the same).

The other problem is that, while I can print and see the data of the ‘fast’ response immediately, the logic still waits until all responses are finished. I would like to return the data and end the function as soon as the first response comes back. In my edited code you can see some (commented-out) lines where I unsuccessfully tried to kill the process (this just restarts the Eventlet process).
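
To make the goal concrete, this is roughly the behaviour I’m after, sketched with plain Eventlet primitives and without the proxy (fetch just stands in for the real proxied request, and I’m not sure kill() is the clean way to drop the slower green threads):

import eventlet
from eventlet.green.urllib import request
from eventlet.queue import Queue

def fetch(url):
    # stand-in for the real proxied request
    return request.urlopen(url).read()

def fastest(urls):
    queue = Queue()
    # fire one request per URL and push each result into the queue
    threads = [eventlet.spawn(lambda u=u: queue.put(fetch(u))) for u in urls]
    body = queue.get()          # first response that comes back wins
    for gt in threads:
        gt.kill()               # stop waiting on the slower requests (no-op if already finished)
    return body

print(fastest(['http://httpstat.us/200?sleep=5000', 'http://httpstat.us/200']))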

Original example

import eventlet
from eventlet.green.urllib import request
import random
import socket

super_proxy = socket.gethostbyname('zproxy.lum-superproxy.io')

class SingleSessionRetriever:

    url = "http://%s-session-%s:%[email protected]"+super_proxy+":%d"
    port = 22225

    def __init__(self, username, password, requests_limit, failures_limit):
        self._username = username
        self._password = password
        self._requests_limit = requests_limit
        self._failures_limit = failures_limit
        self._reset_session()

    def _reset_session(self):
        session_id = random.random()
        proxy = SingleSessionRetriever.url % (self._username, session_id, self._password,
                                              SingleSessionRetriever.port)
        proxy_handler = request.ProxyHandler({'http': proxy, 'https': proxy})
        self._opener = request.build_opener(proxy_handler)
        self._requests = 0
        self._failures = 0

    def retrieve(self, url, timeout):
        while True:
            if self._requests == self._requests_limit:
                self._reset_session()
            self._requests += 1
            try:
                timer = eventlet.Timeout(timeout)
                result = self._opener.open(url).read()
                timer.cancel()
                return result
            except:
                timer.cancel()
                self._failures += 1
                if self._failures == self._failures_limit:
                    self._reset_session()


class MultiSessionRetriever:

    def __init__(self, username, password, session_requests_limit, session_failures_limit):
        self._username = username
        self._password = password
        self._sessions_stack = []
        self._session_requests_limit = session_requests_limit
        self._session_failures_limit = session_failures_limit

    def retrieve(self, urls, timeout, parallel_sessions_limit, callback):
        pool = eventlet.GreenPool(parallel_sessions_limit)
        for url, body in pool.imap(lambda url: self._retrieve_single(url, timeout), urls):
            callback(url, body)

    def _retrieve_single(self, url, timeout):
        if self._sessions_stack:
            session = self._sessions_stack.pop()
        else:
            session = SingleSessionRetriever(self._username, self._password,
                                             self._session_requests_limit, self._session_failures_limit)
        body = session.retrieve(url, timeout)
        self._sessions_stack.append(session)
        return url, body

def output(url, body):
    print(body)

n_total_req = 100
req_timeout = 10
n_parallel_exit_nodes = 10
switch_ip_every_n_req = 10
max_failures = 2

MultiSessionRetriever('lum-customer-c_ba028d72-zone-static', 'akssw3iy6h3y', switch_ip_every_n_req, max_failures).retrieve(
    ["http://lumtest.com/myip.json"] * n_total_req, req_timeout, n_parallel_exit_nodes, output)

Edited code (without logins and proxies)

def high_perf_parallel_requests(search_url):

    try:
        import datetime
        import eventlet
        from eventlet.green.urllib import request

        results2 = []
        results1 = []

        class SingleSessionRetriever:


            def __init__(self, username, password, requests_limit, failures_limit):
                self._username = username
                self._password = password
                self._requests_limit = requests_limit
                self._failures_limit = failures_limit
                self._reset_session()

            def _reset_session(self):
                
                self._requests = 0
                self._failures = 0

            def retrieve(self, url, timeout):

                print("n SingleSessionRetriever.retrieve init")
                print(url)
                print(datetime.datetime.now())

                while True:

                    if self._requests == self._requests_limit:
                        self._reset_session()
                    self._requests += 1
                    try:
                        timer = eventlet.Timeout(timeout)

                        result = request.urlopen(url).read()
                        print("n SingleSessionRetriever.retrieve result")
                        print(url)
                        print(result)
                        print(datetime.datetime.now())

                        results1.append(result)

                        timer.cancel()
                        # eventlet.kill(pool)
                        # raise Exception("Got fastest result. Kill eventlet")
                        #eventlet.kill(self)
                        #pool.kill()
                        return result

                    except:
                        timer.cancel()
                        self._failures += 1
                        if self._failures == self._failures_limit:
                            self._reset_session()


        class MultiSessionRetriever:
        

            def __init__(self, username, password, session_requests_limit, session_failures_limit):
                self._returned = False
                self._username = username
                self._password = password
                self._sessions_stack = []
                self._session_requests_limit = session_requests_limit
                self._session_failures_limit = session_failures_limit

            def retrieve(self, urls, timeout, parallel_sessions_limit, callback):
                pool = eventlet.GreenPool(parallel_sessions_limit)
                try:
                    # for url in urls:
                    #     print("spawn {}".format(url))
                    #     pool.spawn_n(self._retrieve_single(url, timeout))
                    #pool.waitall()
                    for url, body in pool.imap(lambda url: self._retrieve_single(url, timeout), urls):


                        if body:
                            print("n MultiSessionRetriever.retrieve: Body received")
                            print(datetime.datetime.now())
                            # eventlet.Event.send_exception
                            #return body
                            #eventlet.kill(self)
                            # pool.kill()
                    
                        print("n MultiSessionRetriever.retrieve: in for loop")
                        print(url)
                        print(body)
                        print(datetime.datetime.now())
                        callback(url, body)

                except Exception as e:
                    # eventlet.kill(pool)
                    # eventlet.kill(self)
                    print(e)

                print("n MultiSessionRetriever.retrieve: after loop")
                print(datetime.datetime.now())
                # eventlet.kill(self)


            def _retrieve_single(self, url, timeout):
                print("n MultiSessionRetriever._retrieve_single url:")
                print(url)
                print(datetime.datetime.now())
                if self._sessions_stack:
                    session = self._sessions_stack.pop()
                else:
                    session = SingleSessionRetriever(self._username, self._password,
                                                    self._session_requests_limit, self._session_failures_limit)
                body = session.retrieve(url, timeout)
                print("n MultiSessionRetriever._retrieve_single body:")
                print(body)
                print(datetime.datetime.now())
                self._sessions_stack.append(session)
                return url, body


        def output(url, body):
            print("n MultiSessionRetriever.output:")
            print(url)
            print(body)
            print(datetime.datetime.now())
            results2.append(body)


        # n_total_req = 2
        req_timeout = 10
        n_parallel_exit_nodes = 2
        switch_ip_every_n_req = 1
        max_failures = 2

        urls = ['http://httpstat.us/200?sleep=5000','http://httpstat.us/200']

        print("start")
        print(datetime.datetime.now())

        x = MultiSessionRetriever('', '', switch_ip_every_n_req, max_failures).retrieve(
            urls, req_timeout, n_parallel_exit_nodes, output)

        print("result1:")
        print(results1)
        
        print("result2:")
        print(results2)

        return results2

Console output (for this run I used two other URLs whose responses contain ‘fast response result’ and ‘slow response result’ as the response text).

web_1          | high_perf_parallel_requests: start
web_1          | start
web_1          | 2021-02-04 02:28:17.503574
web_1          | 
web_1          |  MultiSessionRetriever._retrieve_single url:
web_1          | http://httpstat.us/200?sleep=5000
web_1          | 2021-02-04 02:28:17.503903
web_1          | 
web_1          |  SingleSessionRetriever.retrieve init
web_1          | http://httpstat.us/200?sleep=5000
web_1          | 2021-02-04 02:28:17.503948
web_1          | 
web_1          |  MultiSessionRetriever._retrieve_single url:
web_1          | http://httpstat.us/200
web_1          | 2021-02-04 02:28:17.511720
web_1          | 
web_1          |  SingleSessionRetriever.retrieve init
web_1          | http://httpstat.us/200
web_1          | 2021-02-04 02:28:17.511783
web_1          | 
web_1          |  SingleSessionRetriever.retrieve result
web_1          | http://httpstat.us/200
web_1          | b'"fast response result"\n'
web_1          | 2021-02-04 02:28:18.269042
web_1          | 
web_1          |  MultiSessionRetriever._retrieve_single body:
web_1          | b'"fast response result"\n'
web_1          | 2021-02-04 02:28:18.269220
web_1          | 
web_1          |  SingleSessionRetriever.retrieve result
web_1          | http://httpstat.us/200?sleep=5000
web_1          | b'"slow response result"\n'
web_1          | 2021-02-04 02:28:24.458372
web_1          | 
web_1          |  MultiSessionRetriever._retrieve_single body:
web_1          | b'"slow response result"\n'
web_1          | 2021-02-04 02:28:24.458499
web_1          | 
web_1          |  MultiSessionRetriever.retrieve: Body received
web_1          | 2021-02-04 02:28:24.458814
web_1          | 
web_1          |  MultiSessionRetriever.retrieve: in for loop
web_1          | http://httpstat.us/200?sleep=5000
web_1          | b'"slow response result"\n'
web_1          | 2021-02-04 02:28:24.458857
web_1          | 
web_1          |  MultiSessionRetriever.output:
web_1          | http://httpstat.us/200?sleep=5000
web_1          | b'"slow response result"\n'
web_1          | 2021-02-04 02:28:24.458918
web_1          | 
web_1          |  MultiSessionRetriever.retrieve: Body received
web_1          | 2021-02-04 02:28:24.459057
web_1          | 
web_1          |  MultiSessionRetriever.retrieve: in for loop
web_1          | http://httpstat.us/200
web_1          | b'"fast response result"\n'
web_1          | 2021-02-04 02:28:24.459158
web_1          | 
web_1          |  MultiSessionRetriever.output:
web_1          | http://httpstat.us/200
web_1          | b'"fast response result"\n'
web_1          | 2021-02-04 02:28:24.459206
web_1          | 
web_1          |  MultiSessionRetriever.retrieve: after loop
web_1          | 2021-02-04 02:28:24.459482
web_1          | result1
web_1          | [b'"fast response result"\n', b'"slow response result"\n']
web_1          | result2
web_1          | [b'"slow response result"\n', b'"fast response result"\n']
web_1          | Parallel resp = [b'"slow response result"\n', b'"fast response result"\n']
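
I suspect the identical ‘in for loop’ timestamps come from GreenPool.imap: it yields results in the same order as the input URLs, so the fast body sits buffered until the slow first URL has finished. If that is right, retrieve would need to hand back whichever response lands first instead of iterating over imap. A rough variant of MultiSessionRetriever.retrieve along those lines (a sketch only, not tested):

import eventlet
from eventlet.queue import Queue

def retrieve(self, urls, timeout, parallel_sessions_limit, callback):
    pool = eventlet.GreenPool(parallel_sessions_limit)
    queue = Queue()
    # spawn all requests and let each push its (url, body) pair into the queue
    threads = [pool.spawn(lambda u=u: queue.put(self._retrieve_single(u, timeout)))
               for u in urls]
    url, body = queue.get()     # whichever request finishes first
    for gt in threads:
        gt.kill()               # drop the green threads still waiting on slow responses
    callback(url, body)
    return body

Note that the bare except: in SingleSessionRetriever.retrieve would also swallow the GreenletExit that kill() raises, so it would probably need to become except Exception: for this to work.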

Other attempts with Eventlet and Concurrent Futures

def parallel_request(url):

    fastest_result = None

    try:
        import datetime
        import eventlet
        from eventlet.green.urllib.request import urlopen

        # urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
        #     "https://www.python.org/static/img/python-logo.png",
        #     "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

        urls = ['http://httpstat.us/200?sleep=5000','http://httpstat.us/200']

        def fetch(url):
            print("n Fetch start")
            print(url)
            print(datetime.datetime.now())
            result = urlopen(url).read()
            print("n Fetch result")
            print(result)
            print(datetime.datetime.now())

            return result

        pool = eventlet.GreenPool()
        print("n Parallel start")
        print(datetime.datetime.now())
        for body in pool.imap(fetch, urls):
            print("n Pool result")
            print(body)
            print(datetime.datetime.now())

        print("n Parallel end")
        print(datetime.datetime.now())
    
    except Exception as e:
        print(e)

    print("Fastest result= {}".format(fastest_result))


Futures

def request_futures(url):

    try:
        import datetime
        import concurrent.futures
        import urllib.request

        urls = ['http://httpstat.us/200?sleep=5000','http://httpstat.us/200']

        print("n Start Futures")
        print(datetime.datetime.now())

        # Retrieve a single page and report the URL and contents
        def load_url(url, timeout):
            with urllib.request.urlopen(url, timeout=timeout) as conn:
                print("n load url")
                print(datetime.datetime.now())
                result = conn.read()
                print(result)
                print(datetime.datetime.now())

                return result

        # We can use a with statement to ensure threads are cleaned up promptly
        with concurrent.futures.ThreadPoolExecutor() as executor:
            # Start the load operations and mark each future with its URL
            future_to_url = {executor.submit(load_url, url, 60): url for url in urls}
            for future in concurrent.futures.as_completed(future_to_url):
                print("n Iterate future")  
                print(datetime.datetime.now())

                url = future_to_url[future]
                try:
                    print("n Try future")
                    print(url)
                    print(datetime.datetime.now())
                    data = future.result()
                    print("n Data future")
                    print(data)
                    print(datetime.datetime.now())
                    
                except Exception as exc:
                    print('%r generated an exception: %s' % (url, exc))
                else:
                    print('%r page is %d bytes' % (url, len(data)))

        print("n End Futures")
        print(datetime.datetime.now())

    except Exception as e:
        print(e)
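
For the futures version, concurrent.futures.wait with FIRST_COMPLETED at least returns as soon as the first request is done. The catch is that cancel() only stops futures that have not started running yet, so the slow request still completes in a background thread; the calling function itself can return immediately, though. A sketch (not tested in the app):

import concurrent.futures
import urllib.request

def fastest_response(urls, timeout=10):
    def load_url(url):
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return conn.read()

    executor = concurrent.futures.ThreadPoolExecutor(max_workers=len(urls))
    futures = [executor.submit(load_url, url) for url in urls]
    done, not_done = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    for future in not_done:
        future.cancel()              # only cancels futures that have not started yet
    executor.shutdown(wait=False)    # don't block on the slower threads
    return next(iter(done)).result() # re-raises if the fastest request failed

print(fastest_response(['http://httpstat.us/200?sleep=5000', 'http://httpstat.us/200']))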

Answer

I was overcomplicating things. The easiest way turned out to be sending the parallel URL requests as multiple tasks to a Celery background worker (which I was already using). The Celery background worker uses Eventlet and multiple workers to handle a lot of concurrent tasks (especially with a lot of I/O wait time).

Using the code below, I call the same Celery task twice with the same URL and check every x milliseconds whether one of the requests is ready. If so, I take the first finished request and revoke the other Celery task. The only limitation of this setup is that Celery does not support fully terminating a running task when using the Eventlet pool. In the future I might improve this by using a key in Redis so that both parallel tasks can check whether the other one has finished; if so, the remaining task can cancel itself (see the sketch after the code).

from datetime import datetime
from time import sleep
from app.blueprints.api.v1.tasks import parallel_request

t_start = datetime.now()

# Send the same request twice in parallel using Celery background tasks
job1 = parallel_request.apply_async(args=[search_url])
job2 = parallel_request.apply_async(args=[search_url])

ready = False
while not ready:
    if job1.ready():
        ready = True
        print("Parallel job 1 finished first")
        job = job1
        job_cancel = job2
        proxy = proxy0   # app-specific proxy setting for job 1
        break
    if job2.ready():
        ready = True
        print("Parallel job 2 finished first")
        job = job2
        job_cancel = job1
        proxy = proxy4   # app-specific proxy setting for job 2
        break
    # Check again in 100 ms
    sleep(0.1)

t_end = datetime.now()
proxy_time = int((t_end - t_start).total_seconds() * 1000)

print("Result in {} ms".format(proxy_time))
data = job.get()

# Revoke the other parallel request in Celery.
# Terminate/SIGKILL does not work when using the Eventlet pool, so the task is only revoked.
job_cancel.revoke()
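
For reference, the Redis idea mentioned above could look roughly like this inside the task body: both parallel tasks derive the same key from the URL, the first one to finish claims it, and the slower one can check the key and skip its remaining work instead of having to be revoked. The key naming, the expiry and the do_proxy_request helper are assumptions, not code from my app:

import hashlib
import redis

r = redis.Redis(host='localhost', port=6379)

def parallel_request_body(search_url):
    # both parallel tasks for the same URL share this (hypothetical) key
    done_key = 'parallel-request:' + hashlib.sha1(search_url.encode()).hexdigest()

    if r.exists(done_key):
        return None                          # the twin task already delivered a result

    result = do_proxy_request(search_url)    # placeholder for the real proxied request

    # the first task to finish claims the key; let it expire so stale keys clean up
    if r.set(done_key, 1, nx=True, ex=60):
        return result
    return None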