task scheduler

+5 votes

Dear tech team,

I set up a MultiChain network where the validated blocks received by my node can contain one hundred transactions or more.

Once a block is received, I need to complete a task linked to each transaction ID. I added the following directive to the config file:

walletnotify=/home/user_name/.multichain/mainnet/script.sh %s %c %n

Everything works fine, but sometimes I get the following warning and error messages in the log file:

2023-06-29 04:34:08 WARNING: request rejected because http work queue depth exceeded, it can be increased with the -rpcworkqueue= setting

2023-06-29 04:34:11 runCommand error: system(/home/user_name/.multichain/mainnet/script.sh 3c0ddd21781b0587f0616f7b051700057cbcf56773f18323522020bcf7c503f7 0 21015) returned 256

This is due to the high number of requests.

My idea is to solve this by using a queue manager: feed a queue from walletnotify and then read the buffer sequentially.

What do you think? Could you suggest a way out?






asked Jun 29, 2023 by fabio_test

2 Answers

0 votes

If your wallet notify script is accessing the MultiChain JSON-RPC API, then you can indeed have a problem if many of these scripts are fired in quick succession – you will have too many API requests waiting in the queue and it will reach its limit.

Building a queue manager is one good way to solve the problem. You can also try increasing the rpcworkqueue runtime parameter to allow a larger buffer of waiting API requests.
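For example, the parameter can be set in the chain's multichain.conf file (the value 64 below is only illustrative; choose one matching the size of your transaction bursts):

```
rpcworkqueue=64
```

It can also be passed on the command line when starting the daemon, e.g. `multichaind mainnet -rpcworkqueue=64 -daemon`.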

answered Jun 30, 2023 by MultiChain
Yes, this is my scenario. I'll keep you posted about it
0 votes

Dear Fabio,

Your idea of implementing a queue system to manage the high number of requests is indeed a viable solution. You can use message queue services such as RabbitMQ, Redis, or Amazon SQS, which are capable of handling high volumes of messages.

Instead of processing the transactions directly, when a block is received, push an event into the queue with the transaction details. This approach would help to process these transactions asynchronously and significantly reduce the system overload.

Moreover, you can have a separate worker process or multiple processes listening to the queue, which pull out events and process them. This decoupling of event creation and processing helps to balance the system load effectively.

For instance, replace the walletnotify command with a script that pushes a message to the queue. A worker process or service, possibly written in a language like Python or Node.js, could then listen to this queue and process the transactions accordingly.
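As a concrete illustration of this pattern, here is a minimal file-based sketch in shell: the producer function would be called from walletnotify, and the consumer would run as a separate worker. All names here (`QUEUE_DIR`, `enqueue_tx`, `process_queue`) are hypothetical, and a real deployment would replace the spool directory with Redis, RabbitMQ or SQS as suggested above.

```shell
#!/bin/sh
# Minimal file-based queue sketch. QUEUE_DIR, enqueue_tx and process_queue
# are hypothetical names; a real deployment would use Redis/RabbitMQ/SQS.
QUEUE_DIR="${QUEUE_DIR:-/tmp/txqueue}"
mkdir -p "$QUEUE_DIR"

# Producer: called from walletnotify as: enqueue_tx <txid> <conf> <height>
enqueue_tx() {
    # one file per event; a nanosecond timestamp keeps FIFO ordering
    echo "$1 $2 $3" > "$QUEUE_DIR/$(date +%s%N).job"
}

# Consumer: a separate worker drains the queue sequentially
process_queue() {
    for job in "$QUEUE_DIR"/*.job; do
        [ -e "$job" ] || continue       # glob matched nothing
        read -r txid conf height < "$job"
        echo "processing $txid (conf=$conf height=$height)"
        rm -f "$job"
    done
}

# demo: enqueue two transactions, then drain the queue in order
enqueue_tx aaa 0 100
enqueue_tx bbb 0 101
process_queue
```

Because the worker drains jobs one at a time, the MultiChain API only ever sees one in-flight request from this path, regardless of how many notifications arrive in a burst.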

Remember to manage potential script failures effectively by implementing appropriate error handling mechanisms.




answered Jul 26, 2023 by raavikant
edited Aug 14, 2023 by raavikant
Thanks raavikant for your hints. I plan to work on this topic in the short term. As soon as I have some results I'll share them with the community.
After some tests, I found my solution. Here are the guidelines.

I installed the task-spooler scheduler as described below:

>> update package repository
sudo apt-get update -y

>> installation
sudo apt-get install -y task-spooler

>> change multichain.conf
walletnotify=/<path_to_multichain_folder>/qscript.sh %s %c %n

>> create queue script (qscript.sh)

#!/bin/bash
# check mining condition
if [ "$2" = "0" ] && [ "$3" -gt -1 ]; then
  tsp /<path_to_multichain_folder>/your_script.sh "$1" "$2" "$3"
fi

>> useful commands
tsp                list queued jobs (starts the tsp server if needed)
tsp -K             kill the tsp server (clears the queue)
tsp -i <id>        info about job <id>
tsp -c <id>        output of job <id>, useful in case of error

>> how to check errors
tsp_error=$(su - ${TSP_USER} -c "tsp | grep finished" | awk -F' +' 'BEGIN {count=0} $4 != "0" {count++} END {print count}')
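To clarify how this one-liner works: `tsp` with no arguments lists all jobs, `grep finished` keeps only the completed ones, and the fourth column of the listing is the job's exit status (E-Level), so any non-zero value is counted as a failure. A self-contained sketch of the counting step (the listing below only mimics task-spooler's output format; it is not real data):

```shell
#!/bin/sh
# Sample tsp listing (illustrative): job 1 exited with status 1
sample='ID   State      Output               E-Level  Times(r/u/s)   Command
0    finished   /tmp/ts-out.aaa      0        0.01/0.00/0.00 your_script.sh
1    finished   /tmp/ts-out.bbb      1        0.02/0.00/0.00 your_script.sh
2    running    /tmp/ts-out.ccc                              your_script.sh'

# keep finished jobs, count those whose exit status (column 4) is non-zero
failed=$(printf '%s\n' "$sample" | grep finished \
  | awk '$4 != "0" {count++} END {print count+0}')
echo "failed jobs: $failed"   # prints: failed jobs: 1
```

Running the counter periodically (e.g. from cron) makes it easy to alert on failed jobs before clearing them from the queue.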

After this change, I didn't face the issue anymore.