MultiChain benchmarking with NodeJS wrapper

I was running a MultiChain benchmark test, using this NodeJS wrapper ( https://github.com/scoin/multichain-node ), as listed on your developer page, to make the calls.

When I try a fairly high number of requests, such as publishing to a stream at 500 transactions per second or higher (on a single peer), I get "connect ETIMEDOUT" or "connect ECONNRESET" errors from Node. I tried setting a higher timeout value in the library code, but the problem persists. What is causing the issue here: the library itself, or do I need to configure the blockchain differently to avoid this problem? I'm using default parameters at the moment (except anyone-can-connect=true, but no additional peers were connected for my tests). Does the peer drop the HTTP request if the send rate is very high?
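For reference, here is a minimal sketch of the kind of loop I'm running (written against Node's built-in http module rather than the wrapper, so it's self-contained; the host, port, credentials and stream name are placeholders for my setup):

import * as http from "http";

// Placeholder connection details -- taken from multichain.conf in a real run.
const HOST = "127.0.0.1";
const PORT = 6588; // the chain's rpc port
const AUTH = Buffer.from("multichainrpc:yourpassword").toString("base64");

// Fire one JSON-RPC "publish" call without waiting for the reply.
function publish(i: number): void {
  const payload = JSON.stringify({
    id: i,
    method: "publish",
    params: ["stream1", "key" + i, Buffer.from("some data").toString("hex")],
  });
  const req = http.request(
    {
      host: HOST,
      port: PORT,
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Content-Length": Buffer.byteLength(payload),
        "Authorization": "Basic " + AUTH,
      },
    },
    (res) => res.resume() // drain the reply so the socket is released
  );
  // connect ETIMEDOUT / ECONNRESET surface here under load
  req.on("error", (err) => console.error(i, err.message));
  req.end(payload);
}

// Fire-and-forget: thousands of requests end up in flight at once.
for (let i = 0; i < 10000; i++) {
  publish(i);
}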

MultiChain version is 1.0 release.
asked Dec 11, 2017 by amolpednekar
edited Dec 11, 2017 by amolpednekar

1 Answer


It sounds like a simple case of pushing the node beyond its capacity to process transactions on your setup. We see around 1000 tx/sec for basic publish operations, but this can be slowed down by (a) your CPU performance, (b) the size of the items being published, (c) your node being subscribed to the stream and (d) a larger number of UTXOs in the node's wallet.
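As a quick check on point (d), you can count the wallet's UTXOs over the same JSON-RPC interface, for example with listunspent (a rough TypeScript sketch; the connection details are placeholders for your node):

import * as http from "http";

// Placeholder credentials -- see multichain.conf for your node's values.
const AUTH = Buffer.from("multichainrpc:yourpassword").toString("base64");

// listunspent with minconf=0 returns every UTXO in the wallet.
const payload = JSON.stringify({ id: 1, method: "listunspent", params: [0] });

const req = http.request(
  {
    host: "127.0.0.1",
    port: 6588, // your chain's rpc port
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Content-Length": Buffer.byteLength(payload),
      "Authorization": "Basic " + AUTH,
    },
  },
  (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    // A large count here is one reason publishing slows down.
    res.on("end", () => console.log("UTXOs:", JSON.parse(body).result.length));
  }
);
req.end(payload);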

answered Dec 12, 2017 by MultiChain
I'm afraid we're not aware of anything like this. Is it possible that the Docker environment has some kind of limit or throttle on CPU usage?
So I tried the CLI approach that you guys suggested, and I am able to publish, but I don't think that means the problem is with the library.
Because even after I get the ETIMEDOUT/ECONNRESET errors, it's only for some keys, and it continues publishing successfully for subsequent keys.

I tried a completely different NodeJS library ( https://github.com/Tilkal/multichain-api ) and ended up with the same errors.

Just to confirm that these requests are actually being sent correctly from the client to the node, I ran tcpdump, and all the requests are sent and well formed.

One thing I noticed: with debug=mcapi enabled and printing to the console, the output usually scrolls very fast as the requests come in, but suddenly it hangs/lags, and that is when I start getting these errors. The debugger itself shows absolutely no errors and continues normally after a while. That lag and the subsequent errors make me think it's a MultiChain issue, as does the fact that the problem improves if you move the node to a better system (keeping the client machine as it is). The main problem is that the node does not log anything about the dropped requests, which makes it difficult to pinpoint the error.

Edit 2 - You guys might be able to replicate this by simply starting a node, using any NodeJS wrapper ( https://github.com/scoin/multichain-node/ ) (or any other async lib), and running the publish command inside a for loop for a large number of iterations (like 10k), essentially the loop sketched in my question above.
Thanks for the update. I've forwarded this to the dev team and we'll update asap.
To circle back... we still believe this is a consequence of the library you're using rather than MultiChain. You might want to confirm this by running the same sort of test (in terms of HTTP requests per second) against an Apache web server running locally, delivering some tiny pixel.

Here's some PHP code (with placeholder connection settings filled in) you should be able to adapt to demonstrate that MultiChain does not have a problem with a transaction rate of 1000 tps:

// Placeholder connection settings -- substitute your own node's values
// (the RPC credentials are in ~/.multichain/<chain>/multichain.conf).
$host='127.0.0.1';
$port=6588;
$user='multichainrpc';
$password='yourpassword';
$method='publish';
$params=array('stream1','key1','00'); // stream, key, hex data

$url='http://'.$host.':'.$port.'/';
$strUserPass64=base64_encode($user.':'.$password);

// JSON-RPC request body
$payload=json_encode(array(
    'id' => time(),
    'method' => $method,
    'params' => $params,
));

$header='Content-Type: application/json'."\r\n".
    'Content-Length: '.strlen($payload)."\r\n".
    'Connection: close'."\r\n".
    'Authorization: Basic '.$strUserPass64."\r\n";

$options = array(
    'http' => array(
        'header'  => $header,
        'method'  => 'POST',
        'content' => $payload
    )
);

// One synchronous request; put this block inside a loop to sustain
// a high request rate.
$context  = stream_context_create($options);
$response = file_get_contents($url, false, $context);
Thanks for trying; I will take another look at the Node library and its parameters.

I wanted to ask you guys whether using an asynchronous library to send requests might cause issues compared to a synchronous library. Many requests remain in a pending state as more keep coming in, and that might be causing the timeout problems/breaks.

Because PHP is synchronous, and even multichain-cli is synchronous as far as I know.
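To illustrate what I mean, here is a sketch (host, port and credentials are placeholders, as in my earlier snippet) contrasting the two styles:

import * as http from "http";

const AUTH = Buffer.from("multichainrpc:yourpassword").toString("base64");

// One promisified JSON-RPC publish call (placeholder host/port/credentials).
function sendPublish(i: number): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const payload = JSON.stringify({
      id: i,
      method: "publish",
      params: ["stream1", "key" + i, "00"],
    });
    const req = http.request(
      {
        host: "127.0.0.1",
        port: 6588,
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Content-Length": Buffer.byteLength(payload),
          "Authorization": "Basic " + AUTH,
        },
      },
      (res) => {
        res.resume();
        res.on("end", () => resolve());
      }
    );
    req.on("error", reject);
    req.end(payload);
  });
}

// Asynchronous style: all 10k requests are in flight at once,
// so pending connections pile up on the node's side.
async function flood(): Promise<void> {
  await Promise.all(Array.from({ length: 10000 }, (_, i) => sendPublish(i)));
}

// Synchronous style (like PHP or multichain-cli): at most one
// request is outstanding at any moment.
async function serial(): Promise<void> {
  for (let i = 0; i < 10000; i++) {
    await sendPublish(i);
  }
}

If serial() completes cleanly while flood() produces ETIMEDOUT/ECONNRESET, that would suggest the bottleneck is the number of concurrent connections rather than the request rate itself.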
...