Error when writing more than 1MB of data into stream

+1 vote
When trying to write data that is more than 1MB to a stream, I encountered this error:

{id: 23345, result: null, error: {message: Error: The transaction was rejected: 64: scriptpubkey, code: -26}}

Anything below 1MB seems to be fine. Can you advise why this happens? Thanks!

This is the output of getblockchainparams:

{
    "chain-protocol" : "multichain",
    "chain-description" : "MultiChain test-chain",
    "root-stream-name" : "root",
    "root-stream-open" : true,
    "chain-is-testnet" : false,
    "target-block-time" : 15,
    "maximum-block-size" : 8388608,
    "default-network-port" : 2913,
    "default-rpc-port" : 8001,
    "anyone-can-connect" : false,
    "anyone-can-send" : false,
    "anyone-can-receive" : false,
    "anyone-can-receive-empty" : true,
    "anyone-can-create" : false,
    "anyone-can-issue" : false,
    "anyone-can-mine" : false,
    "anyone-can-activate" : false,
    "anyone-can-admin" : false,
    "support-miner-precheck" : true,
    "allow-p2sh-outputs" : true,
    "allow-multisig-outputs" : true,
    "setup-first-blocks" : 60,
    "mining-diversity" : 0.30000000,
    "admin-consensus-admin" : 0.50000000,
    "admin-consensus-activate" : 0.50000000,
    "admin-consensus-mine" : 0.50000000,
    "admin-consensus-create" : 0.00000000,
    "admin-consensus-issue" : 0.00000000,
    "lock-admin-mine-rounds" : 10,
    "mining-requires-peers" : true,
    "mine-empty-rounds" : 10.00000000,
    "mining-turnover" : 0.50000000,
    "first-block-reward" : -1,
    "initial-block-reward" : 0,
    "reward-halving-interval" : 52560000,
    "reward-spendable-delay" : 1,
    "minimum-per-output" : 0,
    "maximum-per-output" : 100000000000000,
    "minimum-relay-fee" : 0,
    "native-currency-multiple" : 100000000,
    "skip-pow-check" : false,
    "pow-minimum-bits" : 8,
    "target-adjust-freq" : -1,
    "allow-min-difficulty-blocks" : false,
    "only-accept-std-txs" : true,
    "max-std-tx-size" : 4194304,
    "max-std-op-returns-count" : 10,
    "max-std-op-return-size" : 2097152,
    "max-std-op-drops-count" : 5,
    "max-std-element-size" : 8192,
    "chain-name" : "test-chain",
    "protocol-version" : 10007,
    "network-message-start" : "ffeae9f9",
    "address-pubkeyhash-version" : "00464c0e",
    "address-scripthash-version" : "0578291e",
    "private-key-version" : "8084499c",
    "address-checksum-value" : "85fb0526",
    "genesis-pubkey" : "026ea941c15367bab70aff34fcd34c44eea8c06e8f436690ec165c5aaab254365d",
    "genesis-version" : 1,
    "genesis-timestamp" : 1494293056,
    "genesis-nbits" : 536936447,
    "genesis-nonce" : 232,
    "genesis-pubkey-hash" : "a1959b7dd586656eb79abe33b4638a2c37dfb6d3",
    "genesis-hash" : "00559e068118e5adceab44f815287b3fa080c5e07741d8f3408e39c754038f76",
    "chain-params-hash" : "08bea92d274a2a6acd63a1131a07064427f1b850186945773ec1b675c0d346fb"
}
asked Mar 14, 2018 by Kit

1 Answer

+1 vote
Best answer

Your max-std-op-return-size parameter suggests that your maximum metadata size is 2MB, so it's surprising that you hit a limit at 1MB. Are you embedding two pieces of data in the same transaction, or perhaps double-converting your data to hexadecimal?
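
For example (a purely illustrative Python sketch, not your actual code), double-converting to hexadecimal means the node ends up storing twice as many bytes as you intended, so anything over 1MB of raw data would push the decoded payload past the 2MB limit:

    # Illustrative only (Python 3) - shows how accidentally hex-encoding the
    # same data twice doubles the payload that ends up in the OP_RETURN output.
    import binascii, os

    raw = os.urandom(1100 * 1024)               # a little over 1MB of raw data

    hex_once  = binascii.hexlify(raw)           # correct: hex string to pass to publish
    hex_twice = binascii.hexlify(hex_once)      # accidental double conversion

    # The node stores the *decoded* bytes, so the effective payload sizes are:
    print(len(binascii.unhexlify(hex_once)))    # ~1.1MB -> under the 2MB limit, accepted
    print(len(binascii.unhexlify(hex_twice)))   # ~2.2MB -> over max-std-op-return-size, rejected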

answered Mar 14, 2018 by MultiChain
selected Mar 14, 2018 by Kit
Thanks for your prompt response. It could be the way I encoded the data, which would have pushed it over 2MB. I've since corrected it.

I read that this parameter can only be changed after the chain's creation and before starting the chain. Is that correct?

Also, are there any implications (such as performance) for increasing the current setting of 2MB to 64MB? Thanks again!
If you're using MultiChain 1.0.x, the parameter is fixed when the chain is created.
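
For reference, in 1.0.x the parameter can be set as an override when the chain is created, along these lines (illustrative chain name and value):

    multichain-util create mychain -max-std-op-return-size=67108864

or by editing max-std-op-return-size in params.dat after running multichain-util create and before starting multichaind for the first time. If you go as high as 64MB, you will probably also want to raise max-std-tx-size and maximum-block-size, since those cap the overall transaction and block sizes.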

If you want, you can use MultiChain 2.0, which is still in alpha, and then upgrade this parameter (you may also need to upgrade your chain protocol first). More here: https://www.multichain.com/developers/multichain-2-0-preview-releases/

There are no performance implications for increasing the setting itself, but of course larger pieces of data will take longer to process, propagate, store, etc...
...