'Bitcoin JSON-RPC: Work queue depth exceeded. Code' · Issue # · bitpay/insight-api · GitHub

Levino opened this issue Aug 12 · 23 comments

braydonf changed the title from "Getting 'error: RPCError: Bitcoin JSON-RPC: Work queue depth exceeded'" to "Bitcoin JSON-RPC: Work queue depth exceeded' while reindexing" Aug 12.

Levino commented Aug 12:
I do not think so. It should be very easy to flood the global queue. One should additionally introduce rate limiting on a per-IP basis or similar. Maybe a "queue per IP" could work, where a dispatcher takes one query from each per-IP queue in round-robin order and feeds it into a second, circular queue.

Is the performance of the newer bitcore-node, which uses bitcoind over RPC, expected to be much worse than the original architecture?
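The "queue per IP" idea above can be sketched as follows. This is a hypothetical illustration, not code from insight-api: each client IP gets its own bounded FIFO, and a dispatcher serves one request per IP in round-robin order, so no single client can monopolize the global work queue.

```javascript
// Hypothetical sketch of a per-IP round-robin queue. Each IP has a bounded
// FIFO; dequeue() serves one request per IP in rotation, so a flood from one
// IP cannot starve the others.
class RoundRobinQueue {
  constructor(maxPerIp) {
    this.maxPerIp = maxPerIp; // per-IP cap; excess requests are rejected
    this.queues = new Map();  // ip -> array of pending requests
  }
  // Returns false when this IP's queue is full ("too many requests").
  enqueue(ip, request) {
    const q = this.queues.get(ip) || [];
    if (q.length >= this.maxPerIp) return false;
    q.push(request);
    this.queues.set(ip, q);
    return true;
  }
  // Take the next request, visiting IPs in rotation.
  dequeue() {
    for (const [ip, q] of this.queues) {
      const request = q.shift();
      if (q.length === 0) {
        this.queues.delete(ip);
      } else {
        // Move this IP to the back so other IPs are served first next time.
        this.queues.delete(ip);
        this.queues.set(ip, q);
      }
      return request;
    }
    return null; // nothing pending
  }
}
```

A worker loop would then pull from `dequeue()` at whatever rate bitcoind can sustain, turning bursts into a steady trickle.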
If so, are there any plans to keep the original bitcore-node maintained as a patch for 0.

You're saying that bitcore does not scale well. I cannot second this; we have had no performance issues so far. Our LevelDB read throughput matches the published LevelDB benchmarks.
Queries for txids, UTXOs, and address balances should all be improved, as well as performance while reindexing. The typical bottleneck now is disk speed and LevelDB block compaction. I would look at adjusting the configuration options for the work queue limit, the timeout, and the number of threads available for RPC.

What are the defaults for these values? What are sane values? I cannot find consistent documentation of these settings.
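For Bitcoin Core specifically, the settings being discussed correspond to `rpcworkqueue`, `rpcthreads`, and `rpcservertimeout` in `bitcoin.conf`. The defaults noted in the comments below are Bitcoin Core's documented defaults; the raised values are only illustrative, not a recommendation from this thread:

```ini
# bitcoin.conf — illustrative values; defaults shown in comments
rpcworkqueue=64      # depth of the RPC work queue (default 16)
rpcthreads=8         # threads servicing RPC calls (default 4)
rpcservertimeout=60  # seconds before an idle HTTP request times out (default 30)
```

Raising `rpcworkqueue` only enlarges the buffer; it does not remove the need for queuing or rate limiting on the client side.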
I'm also having the same issue while receiving a new block and querying the API, not while reindexing. Any solutions yet?

I am going to try implementing some of the changes suggested here and elsewhere, but it seems that no one from the dev team is responding to these requests anymore.

Levino, you can find the default values in httpserver.
We need spam protection: basically a queue per user (for example, per IP address) with a maximum queue size, returning a "too many requests" error once the queue for that user is full. Of course, mapping IP addresses to users is difficult when load balancing with reverse proxies is in use and IP addresses are poorly forwarded.
But if that is done correctly, the solution would at least turn a DoS vulnerability into a DDoS vulnerability. I would much prefer this over built-in spam protection in the insight-api, which feels like mixing concerns at the wrong point in the stack.
This happens with no requests received as well. I just started the service, and during indexing I got the same error even though all inbound ports to the API were blocked.

Dudes, a new install here. Traefik 1.

Fixed the link. It does not have much to do with the problem you encounter: you cannot rate-limit the requests during sync.
Levino mentioned this issue Aug 12.

Just do queries in parallel. I second this.
A global queue might solve the problem easily.

The idea is to send every transaction to another service. The problem is that when a new block comes in, the event fires for all transactions at once, so the handler function runs for every one of them simultaneously. You may need to use async. You can also increase the workqueue limit in bitcoin. That might mitigate it, but I still have no control over how many queries arrive at the same time through insight-api or some other service. Thank you, braydonf.
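The advice to "use async" refers to the caolan/async library; its `async.queue` with concurrency 1 serializes the per-transaction work. The same idea can be approximated without the dependency as a minimal sketch (the `worker` callback stands in for the hypothetical "send to another service" handler):

```javascript
// Minimal sketch of serializing per-transaction work, in the spirit of
// async.queue with concurrency 1: each handler runs only after the previous
// one has finished, so a burst of tx events from a new block cannot fan out
// into hundreds of simultaneous RPC calls.
function makeSerialQueue(worker) {
  let tail = Promise.resolve(); // chain of pending work
  return function push(task) {
    // Each pushed task starts only after every earlier task has settled.
    const run = tail.then(() => worker(task));
    tail = run.catch(() => {}); // keep the chain alive on errors
    return run;                 // caller can await this task's result
  };
}
```

Usage: wrap the tx-event handler once, `const push = makeSerialQueue(sendToService);`, and call `push(tx)` from the event listener instead of calling the handler directly.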
You can close this or keep it open as a feature request. I would appreciate a global queue. I will try async; much easier in this context, I assume.

Is it for testnet or livenet? I used to see this; however, it shouldn't be an issue now that zmq events are subscribed to after. There can be many rapid tip updates resulting in many RPC calls, and while syncing it should poll at a slower interval for updates.
I was fully synced until a few hours ago, when I started a reindex; it crashed, and now I am restarting the reindex. Resuming without the reindex caused all the services to start, and it keeps saying it found the same block over and over on livenet.

Something else: as far as I understand, queries to the API now immediately trigger RPC calls to bitcoind.
Does this scale? What about 10k queries hitting the server at the same time? Are there any load tests run in CI?

Regarding getting this error while reindexing: I've run into this also.
I think we may be able to mitigate it by making sure to delay polling if we've not yet received a reply. There are a few load tests around receiving many transactions into the mempool and emitting those events.
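The mitigation described — do not issue the next poll until the previous reply has arrived — can be sketched as below. This is an illustration of the technique, not insight-api code; `rpcCall`, the interval, and the `getbestblockhash` method name are assumptions for the example (the last is a real bitcoind RPC, but its use here is hypothetical).

```javascript
// Sketch of polling with backpressure: the next poll is scheduled only after
// the previous RPC reply (or error) arrives, instead of on a fixed timer that
// can stack requests onto a slow node and overflow its work queue.
function startPolling(rpcCall, intervalMs, onTip) {
  let stopped = false;
  async function tick() {
    if (stopped) return;
    try {
      const tip = await rpcCall('getbestblockhash'); // waits for the reply
      onTip(tip);
    } catch (e) {
      // Swallow and retry after the interval; a real implementation would log.
    }
    if (!stopped) setTimeout(tick, intervalMs); // schedule only now
  }
  tick();
  return () => { stopped = true; }; // call to stop polling
}
```

With this shape there is at most one poll in flight, so a reindexing node that answers slowly simply slows the polling down rather than piling up requests.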
I'm getting the error from the insight-api just by running multiple requests in parallel. As the current implementation passes each query straight to the bitcoind RPC interface, this is exactly what should happen. All queries must be queued globally.
What do you mean by this? Multiple bitcoind nodes?

And by overload I mean that others will probably get the work queue error too, even if they send a single request at that time.

I get this error too. I get the "Work queue depth exceeded" error a lot of the time. It's not while reindexing; it's with regular use. Are there any solutions?

This is happening with very few API requests; it's failing from the sync itself. Here's the output when it started failing:

What is possibly happening is that bitcore-node starts listening to tx events too early and is getting flooded with new transaction events from transactions within blocks.
You can increase the workqueue depth in bitcoin. I also get this error. I consider it a logic error when the data access strategy leads to non-deterministic behavior on a heavily loaded server. I see this error frequently while performing a fresh sync of livenet over bitcored with insight-api and insight-web enabled but no clients connected.
We have this error frequently on mytrezor. I want to fix this by using a queue of requests instead of firing all requests at the same time. And it's better to use async (which you already use in here) or "queue" from npm, which is easier for me to use. :)

You also need spam protection at the web request level, probably as some express middleware, in order to prevent individuals from flooding the queue.
Something like "only 20 requests per second".
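A "20 requests per second" limit can be sketched as a fixed-window counter keyed by IP. The limiter below is dependency-free and hypothetical; the express wiring in the trailing comment is illustrative only.

```javascript
// Hedged sketch of "only 20 requests per second" per IP as a fixed-window
// counter. `now` is injectable so the logic is testable with a fake clock.
function makeRateLimiter(maxPerWindow, windowMs, now = Date.now) {
  const windows = new Map(); // ip -> { start, count }
  return function allow(ip) {
    const t = now();
    let w = windows.get(ip);
    if (!w || t - w.start >= windowMs) {
      w = { start: t, count: 0 }; // new window for this IP
      windows.set(ip, w);
    }
    w.count++;
    return w.count <= maxPerWindow;
  };
}

// Express-style usage (illustrative):
// const allow = makeRateLimiter(20, 1000);
// app.use((req, res, next) =>
//   allow(req.ip) ? next() : res.status(429).send('Too many requests'));
```

Note the caveat raised earlier in the thread: behind a reverse proxy, `req.ip` is only meaningful if the proxy forwards client addresses correctly.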