MultiChain performance

+2 votes

I work with 3 MultiChain nodes in a local network.

I tested the blockchain (more than 60 GB in size) on a dev PC (a physical machine running Windows, Core i5, 32 GB RAM) and got good performance (200 req/sec with less than 10 ms response time).

Then I deployed it on virtual servers, and at the beginning performance was the same.

But after a certain time I ran into a problem: RPC requests were processed very slowly (up to 10 s for getstreamitem and other methods).

My virtual servers configuration:

3 MultiChain nodes in a local 100 Mbit/s network

Each of them:

  • Virtual machine: Ubuntu 16.04 x64
  • CPU: 4 cores
  • RAM: 8 GB
  • HDD: 60 GB (22 GB used by the blockchain)
  • MultiChain version 2.0.2

The average stream item size is about 5-10 KB. There are about 100,000 unique items by key. The average load is about 5 req/sec.

Perhaps you have already seen a problem like this; please tell me in which direction to look.

Thanks for your attention.

asked Aug 19, 2019 by strogen

1 Answer

0 votes
This isn't normal behavior at all, so we need to isolate the problem. Are you making the API requests locally on each server or remotely over the Internet? If remotely, please try doing it locally. If that solves the problem, then it is a matter of slow network performance and not related to MultiChain specifically.

If that does not solve the problem, what about the drives? Are they local to the virtual server, or network-attached? If network-attached, a very slow connection could also be the cause.
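One quick way to carry out the local-vs-remote comparison is to time the same JSON-RPC call from both places. A minimal Python sketch, assuming placeholder connection details (the URL, port and credentials below are hypothetical; the real rpcuser, rpcpassword and rpcport come from the chain's multichain.conf):

```python
import base64
import json
import time
import urllib.request

# Placeholder connection details -- substitute the values from
# ~/.multichain/<chain>/multichain.conf on the node being tested.
RPC_URL = "http://127.0.0.1:8570"
RPC_USER = "multichainrpc"
RPC_PASSWORD = "your-rpc-password"

def elapsed_ms(start, end):
    """Convert a perf_counter interval to milliseconds."""
    return (end - start) * 1000.0

def timed_rpc(method, params=None):
    """Issue one JSON-RPC call and return (result, elapsed milliseconds)."""
    payload = json.dumps({"method": method, "params": params or [], "id": 1}).encode()
    req = urllib.request.Request(RPC_URL, data=payload)
    token = base64.b64encode(f"{RPC_USER}:{RPC_PASSWORD}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())["result"]
    return result, elapsed_ms(start, time.perf_counter())

if __name__ == "__main__":
    # Run this once on the VM itself and once from a remote client:
    # a large gap between the two timings points at the network,
    # not at MultiChain.
    _, ms = timed_rpc("getinfo")
    print(f"getinfo took {ms:.1f} ms")
```

Running the same script on the server and on the client separates network latency from MultiChain's own processing time.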
answered Aug 20, 2019 by MultiChain
Thank you for the answer.
Sorry for not paying attention to these points.
I make the API requests locally, and the drives are local to the virtual server (SAS).

Moreover, if I make queries via multichain-cli, the result is the same.
It is possible the node is locking up as a result of either (a) the number of transactions it has to process, or (b) the complexity of building certain API responses. If you run top on this VM, does it show multichaind taking 100% CPU capacity, or close to it?
Yes, htop shows multichaind taking 100% of 1 of the 4 cores.
OK, so that is likely the explanation. Are you making any API calls that create large responses? Or can you tell me something about the number of transactions per second being processed by the network?
I want to note that this problem was detected on a MultiChain node without load - just a single request through multichain-cli.

It was detected not at the level of communication between the API client and MultiChain, but inside MultiChain itself, which can be seen in multichain-cli, for example: multichain-cli chainname liststreamkeyitems mystream mykey.

Here are my tests:
https://drive.google.com/open?id=1CSXko0Hjm7EOlyN4FQCGADx3ciwGAZ1B
Thanks for your reply. If the node is under very heavy load due to processing blockchain transactions, it could still exhibit this behavior even if it is not processing any other API calls. Can you please provide some guidance on how many transactions per second are being processed by the network? And if you stop sending new transactions (from all nodes), does this problem go away?
In my test above, the results are for a single request without any transactions on the other nodes (no one was sending transactions to other nodes).
Moreover, I reproduced this problem on a MultiChain network with only ONE node.
OK, in that case it certainly sounds like a problem. Please stop the node and then run it again using these additional command-line options:

-debug=mchn -debug=mcblock -debug=mcapi -debug=mcminer

Then please email the debug.log file in the blockchain directory to us at:

multichain dot debug at gmail dot com

We can then take a look.
I did everything exactly as you said, and you can find the results at this link:
https://drive.google.com/open?id=1D6Gke7ktcLkTW6Rx7HgZVgl-bGCZSwsR

It was not possible to reproduce this issue right after restarting multichaind.
To do it, I saved a few thousand stream items.

Thank you for your attention.
OK, so can you please wait until the problem appears again and then send us the debug log again. That will hopefully help us identify it.
Thanks, I will forward this to the dev team and be in touch.
We took a look at the log. The ultimate reason appears to be that you are pushing the node too hard for the hardware configuration you are now running it on, so the "memory pool" of transactions awaiting confirmation keeps growing, and this causes the delays you see while a new block is being processed. You can see this memory pool growing by using the getmempoolinfo API.

A possible contributor to the problem is that the disks on these virtual servers are not physically attached to them, but are accessed over some sort of network.
At the moment it is:

{"method":"getmempoolinfo","params":[],"id":"47083026-1567753571","chain_name":"testchain"}

{
    "size" : 110362,
    "bytes" : 2165011891
}

and it continues to grow
OK, this means what I said - you are pushing transactions to this node faster than it can process them. As mentioned, a possible contributor to the problem is that the disks on these virtual servers are not physically attached to them, but are accessed over some sort of network.
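The growth described above can be watched with a small polling script. A sketch, assuming hypothetical connection details (URL, port and credentials must come from your own multichain.conf); note that the figures already posted imply an average of roughly 2165011891 / 110362 ≈ 19.6 KB per pending transaction:

```python
import base64
import json
import time
import urllib.request

# Placeholder connection details -- substitute your own from multichain.conf.
RPC_URL = "http://127.0.0.1:8570"
RPC_USER = "multichainrpc"
RPC_PASSWORD = "your-rpc-password"

def rpc_call(method, params=None):
    """Send one JSON-RPC request to multichaind and return its result."""
    payload = json.dumps({"method": method, "params": params or [], "id": 1}).encode()
    req = urllib.request.Request(RPC_URL, data=payload)
    token = base64.b64encode(f"{RPC_USER}:{RPC_PASSWORD}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

def is_growing(sizes):
    """True if every successive mempool size sample exceeds the previous one."""
    return all(later > earlier for earlier, later in zip(sizes, sizes[1:]))

if __name__ == "__main__":
    samples = []
    for _ in range(6):  # six samples, ten seconds apart
        samples.append(rpc_call("getmempoolinfo")["size"])
        time.sleep(10)
    print("mempool sizes:", samples)
    print("still growing" if is_growing(samples) else "stable or draining")
```

If the samples keep rising while new transactions are being sent, but drain once sending stops, that confirms the node simply cannot keep up with the incoming transaction rate on this hardware.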
...