MultiChain RPC server connection question (performance)

+2 votes
At the moment I am evaluating MultiChain performance, measuring things like network sync times and maximum tx/s in a multi-node network of ~20 nodes spread across different locations.

A first step was to check the performance of a single node over the internet in terms of transactions per second [tx/s].

Under high load I observed an operating-system-level error (Ubuntu) saying "too many open files". So it seems that under heavy RPC request load the ulimit on open file descriptors gets exhausted, which in my opinion should not happen.

How are connections handled by the internal RPC server? Is there some keep-alive mechanism or an overly long connection timeout? It seems that connections opened by RPC requests are not closed immediately and at some point use up all the available file descriptors. Of course I could increase "ulimit -n", but I wanted to check whether this could be a possible optimization on the MultiChain side.

Would it help if the client that sends the RPC requests set the "Connection: close" header?
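
For illustration, here is a minimal sketch of what I mean, in Go (which is what my load generator uses, see the comments below). The node address, credentials and the getinfo call are placeholders, not my actual benchmark code:

    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // Placeholder node address, credentials and RPC method, not my real setup.
        payload := []byte(`{"method":"getinfo","params":[],"id":1}`)
        req, err := http.NewRequest("POST", "http://127.0.0.1:8570", bytes.NewReader(payload))
        if err != nil {
            panic(err)
        }
        req.SetBasicAuth("multichainrpc", "rpc-password")
        req.Header.Set("Content-Type", "application/json")
        // Ask Go's transport (and the server) to close the TCP connection
        // once this request/response is finished.
        req.Close = true
        req.Header.Set("Connection", "close")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body))
    }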
asked Sep 21, 2017 by Alexoid

2 Answers

+1 vote
Do you happen to be using the walletnotify runtime parameter? It launches the notification script a lot of times, and it will hit the ulimit fairly quickly if you are pushing a lot of transactions at the same time. It tends to impact performance quite a bit too.
answered Sep 22, 2017 by Bric3d
No, I don't use this.
What do you use to send your requests?
I am using Go with the standard http.Client and Keep-Alive: false.
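Concretely, roughly like this: a sketch with keep-alives disabled at the transport level (the helper name is just for illustration):

    package rpcclient

    import "net/http"

    // newBenchClient builds an http.Client that never reuses TCP connections:
    // DisableKeepAlives makes the transport send "Connection: close" on every
    // request. Illustrative helper only, not the actual benchmark code.
    func newBenchClient() *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                DisableKeepAlives: true,
            },
        }
    }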
0 votes

I think this will depend on the method you are using to make the API requests. For example, we have seen this type of problem after around 30,000 requests using curl under PHP. Switching to ab (ApacheBench) or direct socket requests (sending "Connection: close") solved it.

answered Sep 22, 2017 by MultiChain
I am using Go's http.Client with Connection: close.
I will take another look and check which component exactly leaves the connections open.
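One thing I will check first: a common cause of leaked file descriptors with Go's net/http is a response body that is never read to completion and closed. A sketch of the pattern I need to verify in my client (helper name is illustrative only):

    package rpcclient

    import (
        "io/ioutil"
        "net/http"
    )

    // doRPC shows the usual pattern: always read the response body to
    // completion and close it, otherwise the underlying connection (and its
    // file descriptor) may stay open. Illustrative helper only.
    func doRPC(client *http.Client, req *http.Request) ([]byte, error) {
        resp, err := client.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return ioutil.ReadAll(resp.Body)
    }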
OK, thanks and please let us know how you get on.
...