Concurrency and Double Spends


I am designing an API for using custom tokens that I aim to implement on MultiChain. The plan is to issue a tranche of tokens to an issuing account and then distribute them from there to members that require tokens.

Because I will be starting with a single output of, say, 1 million tokens, I'm concerned that I have a concurrency issue straight away. Either multiple requests to distribute tokens become blocked if I use "preparelockunspent" to ensure no two requests can spend the same output, or I end up generating double spends that get dropped by the chain, meaning tokens aren't distributed, and I'm not sure I would even know when that happens.

Have you considered this, and is there a strategy for mitigating this potential issue beyond issuing the tokens in smaller lots so that I have many outputs from which to distribute?
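To make the concern concrete, here is a plain-Python sketch of the UTXO model (this is an illustration, not MultiChain code): a given output can only be spent once, so two requests selecting the same output conflict, while splitting the issuance into many outputs lets independent requests succeed.

```python
# Illustration of the single-output bottleneck (plain Python, not MultiChain code).
# An unspent output can be consumed once; two requests that select the same
# output conflict, so one big issuance output serialises all distributions.

def try_spend(utxos, output_id, amount):
    """Attempt to spend `output_id`; return (change_id, success)."""
    value = utxos.pop(output_id, None)
    if value is None or value < amount:
        return None, False               # output already spent, or too small
    change_id = output_id + "-change"
    utxos[change_id] = value - amount    # change becomes a new spendable output
    return change_id, True

# One issuance output of 1,000,000 tokens:
utxos = {"issue-0": 1_000_000}
_, ok1 = try_spend(utxos, "issue-0", 100)
_, ok2 = try_spend(utxos, "issue-0", 100)   # second request picked the same output
print(ok1, ok2)   # True False -> the conflicting spend fails

# Issuing in smaller lots gives independent outputs spendable in parallel:
utxos = {f"issue-{i}": 10_000 for i in range(100)}
results = [try_spend(utxos, f"issue-{i}", 100)[1] for i in range(3)]
print(results)    # [True, True, True]
```

This is exactly the trade-off in the question: either lock the single output (blocking), risk a double spend (dropped transaction), or pre-split into many outputs.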


Your thoughts on this would be appreciated.



asked Jun 17, 2016 by marty

1 Answer


If you just use the regular wallet send* APIs, everything should be fine; I don't see why you would have a concurrency issue. You can spend the change of one of your own transactions immediately, without waiting for it to be confirmed.

answered Jun 18, 2016 by MultiChain
Two further points:

1) I've been designing the API to use the raw transaction APIs so that I have greater flexibility in handling transactions from multiple parties using the chain.

2) I agree I shouldn't have to wait for an individual transaction to be confirmed, but if there are multiple concurrent requests to transfer tokens from a single large output, then surely these would need to be serialised to avoid a double spend. This feels like a performance bottleneck on distributing assets, unless I break up the initial issuance into multiple outputs?

The API is single-threaded and subject to mutex locks, so there's no way for concurrent requests to create a clash.
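One caveat worth noting: if raw transactions are assembled in the application layer, output selection happens outside the node, so the node's internal locking no longer guarantees two application threads won't pick the same output. A client-side lock around the select-build-send sequence restores serialisation. A minimal sketch in Python (hypothetical in-memory balance, not MultiChain code):

```python
# Client-side serialisation of distributions (plain Python sketch, not
# MultiChain code; `balance` and `distribute` are hypothetical stand-ins for
# "select an output, build a raw transaction, send it").
import threading

balance = {"value": 1_000_000}
lock = threading.Lock()

def distribute(amount):
    with lock:                           # one request at a time selects and spends
        if balance["value"] >= amount:
            balance["value"] -= amount
            return True
        return False

# 50 concurrent distribution requests, each for 100 tokens:
threads = [threading.Thread(target=distribute, args=(100,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance["value"])   # 995000: all 50 transfers applied exactly once
```

Without the lock, two threads could read the same state, build transactions spending the same output, and produce the double spend described in the question.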

FYI, in the next month or so we should be releasing a new version with a totally rewritten wallet offering much improved scalability and performance, so you will see much higher throughput for send API requests than in the current version.