Several Tx repeating with "was not accepted to mempool, setting INVALID flag"

+2 votes
We have a bunch of these transactions repeating in the logs. Their number is slowly growing and looks like it could become a problem at some point in the future. What is the problem and how can it be analyzed/solved?

Regards
asked Aug 8, 2017 by germat

1 Answer

+1 vote
Is it possible you are generating transactions from the same addresses on more than one node at the same time? That would lead to double-spend problems, causing some transactions to be invalid.
answered Aug 8, 2017 by MultiChain
If you mean using the same wallet address on more than one node, that won't be the case, as we create each node just by launching the daemon.
The getaddresses command shows different addresses too.
Transactions can become "invalid" as a result of a blockchain fork (i.e. if two nodes temporarily see different sets of transactions). In the end, the fork is automatically resolved, but some transactions from the losing set cannot be added to the winning set.

The simplest example is double spending: two transactions spend the same input. But in MultiChain there may be several other cases. For example, some addresses may have different permissions on different sides of the fork.

Usually this happens when transactions are generated rapidly and there are several miners. If you are not double-spending, permission issues are the most likely cause of the problem.
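
If you suspect permission issues, one quick check (a sketch only; "chain1" and <address> are placeholders for your own chain name and the address used in the rejected transactions) is to compare that address's current permissions on each node:

    multichain-cli chain1 listpermissions send,receive <address>

If the results differ between nodes, or a permission was granted/revoked around the time of the fork, that points to the cause.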

You can try to find the reason in debug.log if you run multichaind with the -debug=mchn runtime parameter.
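
For example (a sketch only; "chain1" is a placeholder for your own chain name, and the data directory is assumed to be the default ~/.multichain/chain1):

    multichaind chain1 -debug=mchn -daemon
    grep "was not accepted to mempool" ~/.multichain/chain1/debug.log

The surrounding log lines should indicate the specific reason each transaction was rejected.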

In any case, this is not normal behavior, but it is legitimate. MultiChain simply tries to reaccept these invalid transactions after every block (because they may become valid). If the number of such transactions is small, the only problem is the number of log records created in debug.log after every block.

But if the number grows (rapidly), it may affect performance in the future once it reaches the thousands. In that case I would recommend finding the reason.

We may consider automatic purging of such transactions after a certain number of blocks; it is on our roadmap.
Thanks for your answer.
I'm now sure they were caused by a regular fork situation. I still have concerns about how the fork situation is resolved, but I'll ask about that in another question.
Thanks.
...