Transactions stay unconfirmed - error on asset commit

+1 vote

Hi MultiChain team,


I'm doing an internship, working with MultiChain.

I want to issue some assets with (a lot of) custom fields. I searched for topics about the maximum metadata size and found a few saying it was around 8 MB, so I wanted to test it. When I reach 75 custom fields, the transaction stays in the memory pool and no further transactions get confirmed, although I don't think mining stops.

Here's what I get in my Docker log:

I tried to increase max-std-element-size, max-std-op-return-size and max-std-tx-size.

The transaction is around 8,500 kB (~8.5 MB).

Let me know if you need more information, thanks a lot.
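For scale, the numbers above can be reproduced offline. A minimal Python sketch (the field count matches the report; the ~110 kB per-field size is an assumption chosen so 75 fields land near the reported ~8.5 MB transaction):

```python
import json

# Assumptions for illustration: 75 custom fields of ~110 kB each,
# mirroring the reported 75-field, ~8.5 MB transaction.
NUM_FIELDS = 75
FIELD_SIZE = 110 * 1024  # bytes per field value (assumption)

# Build the custom-fields object the same way a loop would.
custom_fields = {f"origin{i}": "x" * FIELD_SIZE for i in range(1, NUM_FIELDS + 1)}

serialized = json.dumps(custom_fields)
size_mb = len(serialized.encode("utf-8")) / (1024 * 1024)
print(f"{NUM_FIELDS} fields -> {size_mb:.2f} MB of metadata")
```

This only measures the JSON payload; the on-chain transaction adds its own overhead on top.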


asked Apr 12, 2017 by AntoineVht
edited Apr 12, 2017 by AntoineVht
Do you mean you're adding a lot of custom fields when issuing the asset, as opposed to when you are transferring it?

Can you also post the output of getinfo so we can see what version and protocol you are running?
I'm trying to add a lot of custom fields when issuing; the purpose is to exchange them later :), and maybe add more later.

"version" : "1.0 alpha 28",
    "nodeversion" : 10000128,
    "protocolversion" : 10007,
    "chainname" : "AntoineChain",
    "description" : "MultiChain AntoineChain",
    "protocol" : "multichain",
    "port" : 7557,
    "setupblocks" : 60,
    "nodeaddress" : "AntoineChain@",
    "burnaddress" : "1XXXXXXXYBXXXXXXnnXXXXXXa3XXXXXXZcwdKp",
    "incomingpaused" : false,
    "miningpaused" : false,
    "walletversion" : 60000,
    "balance" : 0.00000000,
    "walletdbversion" : 2,
    "reindex" : false,
    "blocks" : 7,
    "timeoffset" : 0,
    "connections" : 2,
    "proxy" : "",
    "difficulty" : 0.00000006,
    "testnet" : false,
    "keypoololdest" : 1492275158,
    "keypoolsize" : 2,
    "paytxfee" : 0.00000000,
    "relayfee" : 0.00000000,
    "errors" : ""

1 Answer

0 votes
Thanks for the extra information. There is a limit of around 4 KB for an asset's metadata – if you need to reference larger pieces of data, it's recommended to follow one of the techniques here:
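One such technique (a sketch under assumed sizes, not taken from the linked page): keep the large data off-chain and put only a SHA-256 hash of it into the asset's custom fields, which keeps the on-chain metadata far below the 4 KB ceiling:

```python
import hashlib
import json

# Hypothetical large custom field that would blow the ~4 KB metadata limit.
large_payload = b"x" * (500 * 1024)  # 500 kB of off-chain data (assumption)

# Store the data itself off-chain (file, database, etc.) and put only
# its hash into the asset's custom fields; the key name is made up.
digest = hashlib.sha256(large_payload).hexdigest()
metadata = {"origin1-sha256": digest}

size_bytes = len(json.dumps(metadata).encode("utf-8"))
print(f"on-chain metadata: {size_bytes} bytes")
```

Anyone holding the off-chain data can later re-hash it and compare against the on-chain digest to verify integrity.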

But it does also sound like there's a bug here, where the transaction is accepted into the memory pool even though it never gets validated in a block. Can you please post a transcript of the full API call you used to generate this asset issuance with a large number of metadata fields?
answered Apr 16, 2017 by MultiChain
Also it would be helpful if you can confirm if this issue is still present in beta 1.
Still on alpha 28 (I'll get back to you as soon as I try beta 1).

It seems to me that the problem really appears when I reach the 4 kB limit.
I first tried with the "issuefrom" command, then with "createrawsendfrom"; the problem occurs both ways.

I build my JSON request using a C# loop to add the custom fields.

The request looks like:

"{\"method\":\"issuefrom\",\"params\":[\"1ZLyE9Rvxcu3Bygc3zxEDyscQEPQGjQKQUi9jx\",\"1ZLyE9Rvxcu3Bygc3zxEDyscQEPQGjQKQUi9jx\",{\"name\":\"wood\",\"open\":true},50000,0.1,0,{\"origin1\ »:\ »ukmkjmlkjmljkmlkjmlkjmlkjmlkjmlkjmkljmljmljl\"}],\"id\":1,\"chain_name\":\"AntoineChain\"}"

But with a lot more custom fields (full request on Pastebin):
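For anyone wanting to reproduce this, the loop can be sketched in Python (the original was C#; the addresses are copied from the request above, while the RPC endpoint, credentials, and field values are placeholders – the request is only built here, not sent):

```python
import json
import urllib.request

# Build the issuefrom params with a loop over custom fields,
# as in the poster's C# code (field values are placeholders).
custom_fields = {f"origin{i}": "some value" for i in range(1, 76)}

payload = {
    "method": "issuefrom",
    "params": [
        "1ZLyE9Rvxcu3Bygc3zxEDyscQEPQGjQKQUi9jx",  # from-address
        "1ZLyE9Rvxcu3Bygc3zxEDyscQEPQGjQKQUi9jx",  # to-address
        {"name": "wood", "open": True},
        50000, 0.1, 0,
        custom_fields,
    ],
    "id": 1,
    "chain_name": "AntoineChain",
}

# This would be POSTed to the node's JSON-RPC port; the URL is a
# placeholder and rpcuser/rpcpassword basic auth is omitted.
req = urllib.request.Request(
    "http://localhost:7557",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(req.get_method(), len(req.data), "bytes")
```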

I tried changing the value of the custom field to other sizes; sometimes I get "An HTTP request took too long to complete." and the Docker container of the master node turns off/crashes.

I was also wondering why the size of the transaction decreases when it is confirmed (and thus removed from the memory pool). I noticed this with the MultiChain Explorer.

Again, thanks a lot.
This still happens on beta 1. Should there be an error message?
Yes, I think there should. We'll look into this for the next beta release.