Spring XD batch jobs locking up

I’m using Spring XD for data ingestion via batch jobs. A good number of jobs run in parallel across 4 containers, anywhere between 10 and 40 at a time, and most complete in less than a minute. I’m using Redis (not RabbitMQ) for transport and MySQL for the data store. Spring XD Batch uses one MySQL database for job/step stats, and my application uses a separate MySQL database for its own purposes; both databases are on the same server. All 4 containers AND the admin point to the same batch MySQL database.

What I’ve noticed is that most times, everything works fine but once in a while (at least once daily), I do get the exception below:

Caused by: org.springframework.dao.DataAccessResourceFailureException: Could not obtain last_insert_id(); nested exception is java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction.

This is thrown by Spring XD (not my application). The exception is not specific to a particular container (it happens on any container, at random), and it’s not specific to a particular job.
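For context, the `last_insert_id()` in the stack trace comes from the way Spring's `MySQLMaxValueIncrementer` hands out job/step execution IDs. A sketch of the statements it issues against a sequence table (the `BATCH_JOB_SEQ` name below is the default from the Spring Batch MySQL schema; the exact SQL may differ slightly by version):

```sql
-- Increment the single-row sequence table and read back the new value.
-- Under InnoDB, the UPDATE takes a row lock that is held until the
-- surrounding transaction commits, so concurrent jobs queue up behind
-- each other on this one row -- which is where the
-- "Lock wait timeout exceeded" can come from.
UPDATE BATCH_JOB_SEQ SET ID = LAST_INSERT_ID(ID + 1);
SELECT LAST_INSERT_ID();
```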

I’ve spent time wading through Google. I’ve raised innodb_lock_wait_timeout from 50 to 600 and changed the transaction isolation level from REPEATABLE READ to READ COMMITTED, but neither seems to make any difference. I’ve also enabled slow query logging, but none of the logged queries match the timeframe when the exception is thrown.
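For reference, these are runtime equivalents of the changes described above (variable names as in MySQL 5.x; values set this way apply to new connections only, so put them in my.cnf to persist across restarts):

```sql
SET GLOBAL innodb_lock_wait_timeout = 600;   -- default is 50 seconds
SET GLOBAL tx_isolation = 'READ-COMMITTED';  -- was REPEATABLE-READ
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;              -- log queries slower than 1s
```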

Any pointers will be greatly appreciated.


My workaround was to change the *_seq tables from the InnoDB engine to MyISAM. I’ve been running like that for over a month now with no table lock-ups.
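For anyone applying the same workaround, the conversion looks like this (table names taken from the default Spring Batch MySQL schema; MyISAM locks the table only for the duration of the single UPDATE rather than the whole transaction, which is why the contention goes away, and the `MySQLMaxValueIncrementer` documentation itself suggests a non-transactional engine for these tables):

```sql
ALTER TABLE BATCH_JOB_SEQ ENGINE = MyISAM;
ALTER TABLE BATCH_JOB_EXECUTION_SEQ ENGINE = MyISAM;
ALTER TABLE BATCH_STEP_EXECUTION_SEQ ENGINE = MyISAM;
```

The trade-off is that MyISAM is not crash-safe, but for a table whose only job is to hand out monotonically increasing IDs that is usually acceptable.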
