Write to stream in sequence

+1 vote
I need to write data to multiple streams in rapid succession via RPC. With the C# library I use the "publish" command (I've added the method myself because the current version of the library does not include it), but I often get the error message: {"code":-4,"message":"Error: The transaction was rejected! This might happen if some of the coins in your wallet were already spent, such as if you used a copy of wallet.dat and coins were spent in the copy but not marked as spent here."}
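
For reference, my publish call boils down to a raw JSON-RPC request along the lines of the sketch below (the URL, port, credentials, stream name, key and payload are placeholders, and this is a simplified stand-in for my added method, not the actual library code):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class PublishSketch
    {
        // Placeholders -- use the rpcport / rpcuser / rpcpassword from your own node.
        const string RpcUrl  = "http://127.0.0.1:8570/";
        const string RpcUser = "multichainrpc";
        const string RpcPass = "YOUR_RPC_PASSWORD";

        // Minimal hand-rolled JSON-RPC call; a real client should JSON-escape
        // the parameters instead of this naive string building.
        static async Task<string> RpcCall(string method, params string[] args)
        {
            using (var client = new HttpClient())
            {
                var token = Convert.ToBase64String(
                    Encoding.ASCII.GetBytes(RpcUser + ":" + RpcPass));
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Basic", token);

                string paramList = args.Length == 0
                    ? "" : "\"" + string.Join("\",\"", args) + "\"";
                string body = "{\"method\":\"" + method + "\",\"params\":["
                    + paramList + "],\"id\":1}";

                var response = await client.PostAsync(RpcUrl,
                    new StringContent(body, Encoding.UTF8, "application/json"));
                string json = await response.Content.ReadAsStringAsync();
                if (!response.IsSuccessStatusCode)
                    throw new Exception(json);   // the code -4 error above ends up here
                return json;
            }
        }

        static void Main()
        {
            // publish <stream> <key> <data-hex>: the payload must be hex-encoded.
            string hex = BitConverter.ToString(Encoding.UTF8.GetBytes("hello"))
                .Replace("-", "").ToLower();
            Console.WriteLine(
                RpcCall("publish", "stream1", "key1", hex).GetAwaiter().GetResult());
        }
    }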

What is the best way to avoid this message and write to my streams?

Thanks!
asked Mar 9, 2017 by dave1981

1 Answer

0 votes
 
Best answer

Are you publishing simultaneously on multiple nodes, where you have shared private keys between those nodes?

If so, you should solve this either by publishing on only one node at a time, by using different addresses to publish on the different nodes, or by manually managing "unspent transaction outputs" using the createrawtransaction command.
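
For the second of those options, and assuming your MultiChain version includes the publishfrom command, the idea looks roughly like this in C# terms, reusing the RpcCall sketch from the question (the address, stream name, key and payload are placeholders):

    // "Different address per node": each node publishes only from its own
    // address, so the two wallets never compete for the same unspent outputs.
    // Create the address on the second node with getnewaddress and have an
    // administrator grant it write permission on the stream (grant command).
    string nodeBAddress = "1NodeBPublishingAddress";   // placeholder

    string hexData = "48656c6c6f";                     // "Hello" as hex -- placeholder payload
    string result = RpcCall("publishfrom", nodeBAddress, "stream1", "key1", hexData)
        .GetAwaiter().GetResult();
    Console.WriteLine(result);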

If not, please let us know, and we'll continue to investigate.

answered Mar 9, 2017 by MultiChain
selected Mar 23, 2017 by dave1981
No, I'm publishing only on that node (Ubuntu), which has a single address. I haven't shared the private keys; I'm using only JSON-RPC.
The code retries writing the data up to 5 times if an exception occurs. I tried to write 224 key-value pairs to 2 streams (112 + 112); every minute the program wrote 8 pairs in sequence and then slept. In 59 cases a retry was necessary to publish the data (1 to 4 retries), and 3 times the record was not published at all.
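
The retry wrapper is essentially this, simplified and reusing the RpcCall helper sketched in the question:

    // Simplified retry loop around publish (up to 5 attempts, as described above).
    static string PublishWithRetry(string stream, string key, string hexData)
    {
        for (int attempt = 1; attempt <= 5; attempt++)
        {
            try
            {
                return RpcCall("publish", stream, key, hexData).GetAwaiter().GetResult();
            }
            catch (Exception ex)   // e.g. the code -4 "transaction was rejected" error
            {
                Console.WriteLine("Attempt " + attempt + " failed: " + ex.Message);
                System.Threading.Thread.Sleep(1000 * attempt);   // simple back-off
            }
        }
        throw new Exception("Publish failed after 5 attempts");
    }
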
Thanks, it definitely sounds like something is up in your case; this is not the normal or expected behavior. Are you willing to help us diagnose this? If so, please stop MultiChain (using the API command 'stop'), then zip up the blockchain directory (from inside the ~/.multichain/ directory on Linux) and send it to us at multichain dot debug at gmail dot com.
One of the internal databases on the "secondary" node is corrupted (probably the result of a hard kill or crash). We have made significant improvements in the code of Alpha 29 (to be released next week) to prevent database corruption in the future.

To fix the chain data you should restart multichaind with -reindex=1. It will take some time to repair the chain, because you have a relatively big chain. To see the progress you can use the getinfo API - look for the "blocks" and "reindex" fields.
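
For example, a small loop like the sketch below, reusing the RpcCall helper from the question, prints those fields every few seconds while the repair is running:

    // Poll getinfo during -reindex=1 and watch the "blocks" / "reindex" fields
    // in the printed JSON; stop with Ctrl+C once reindexing has finished.
    static void WatchReindex()
    {
        while (true)
        {
            string info = RpcCall("getinfo").GetAwaiter().GetResult();
            Console.WriteLine(DateTime.Now + ": " + info);
            System.Threading.Thread.Sleep(10000);   // check every 10 seconds
        }
    }
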
Thanks for your help, the first test is working correctly (I had already done a restart with reindex some days ago; I hope this was the last one).
A question: what do you mean when you say we "have a relatively big chain"? I've only done a test so far, and we'd like to put in a possibly quite large amount of data. Can we be confident that MultiChain will keep working efficiently, or is there a sort of "threshold" that we should not exceed?
I just meant that reindexing may take some time because you have about 150K blocks. When reindexing, MultiChain has to scan all the blocks.

There is no threshold for the amount of data you can put in the chain.
...