This week we pushed some modifications to ArangoDB’s cursor API into the devel branch.
The change reduces the amount of copying of AQL query results between the AQL and the HTTP layers.
As a positive side effect, it also reduces the amount of garbage collection the built-in V8 engine has to do.
These modifications should improve the cursor API performance significantly for many cases, while at the same time keeping its REST API stable.
This blog post shows some first unscientific performance tests comparing the old cursor API with its new, improved implementation.
A good way to test the cursor API performance is to issue lots of queries from the ArangoShell. The ArangoShell will send the query to the server for execution. The server will respond with the first 1,000 results for the query.
Additionally the server will create a server-side cursor if the result set is bigger than 1,000 documents. In this case, the ArangoShell will issue subsequent HTTP requests that fetch the outstanding documents from the server.
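The request flow against ArangoDB’s cursor endpoint, /_api/cursor, can be sketched as follows (the cursor id and payloads are made up for illustration):

```
POST /_api/cursor
{ "query" : "FOR doc IN docs RETURN doc", "batchSize" : 1000 }

--> HTTP 201, body: { "result" : [ /* first 1,000 documents */ ],
                      "hasMore" : true, "id" : "123456" }

PUT /_api/cursor/123456

--> HTTP 200, body: { "result" : [ /* next batch */ ], "hasMore" : false }
```

The client repeats the PUT request until the server responds with "hasMore" : false, at which point the server-side cursor is disposed of.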
The above behavior is triggered automatically when
db._query(query).toArray() is run in the ArangoShell.
Here is a test function that executes a query n times and measures the total execution time. It issues n HTTP requests to the server’s cursor API for executing the query, plus further HTTP requests whenever a query’s result set is bigger than 1,000 documents. What is measured is thus the total execution time from the ArangoShell’s point of view, including time spent in the server-side cursor functions as well as in HTTP traffic.
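A driver of this shape fits the description. The following is a self-contained sketch, with the ArangoShell call db._query(query).toArray() replaced by a pluggable `runQuery` stub so it runs anywhere; in the real benchmark, `runQuery` would perform the actual ArangoShell query call:

```javascript
// Benchmark driver sketch. In the ArangoShell, runQuery would be
// function (q) { db._query(q).toArray(); } — toArray() transparently
// issues follow-up cursor requests for result sets > 1,000 documents.
function benchmark (runQuery, query, n) {
  var start = Date.now();
  for (var i = 0; i < n; ++i) {
    runQuery(query); // one query execution per iteration
  }
  return (Date.now() - start) / 1000; // total wall-clock time in seconds
}

// example run with a no-op stub standing in for the server round-trip
var calls = 0;
var elapsed = benchmark(function () { ++calls; }, "RETURN 1", 10000);
```

Note that `Date.now()` is used here for portability; inside the ArangoShell, `require("internal").time()` would give higher-resolution timestamps.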
The test function was run with different queries to check which types of queries will benefit from the cursor API change.
Note that the ArangoShell issues all its requests to the cursor API sequentially. This is fine here, as the goal was to measure the relative performance change between the old and the new API implementation.
The ArangoShell and ArangoDB server were running on the same physical machine during the tests, so this is a localhost benchmark.
Detailed test results
Here are the results from my local machine.
The first query was about the simplest one I could come up with. It was sent to the server 10,000 times. The result set size per query was 1, resulting in 10,000 calls to the cursor API with not much data transferred per call.
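For illustration, a query of roughly this shape matches the description (the exact benchmark query is not shown here):

```aql
RETURN 1
```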
Execution took 7.225 s with the old API, and 5.195 s with the new API (28 % improvement).
A query returning a slightly more complex result value:
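Something along these lines, returning an object instead of a scalar, would fit (illustrative only):

```aql
RETURN { value: 1, name: "test" }
```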
This took 8.046 s with the old API, and 5.829 s with the new one (27 % improvement).
Another simple query, again executed 10,000 times, but now returning 10 values per query:
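A hypothetical query producing 10 values per execution:

```aql
FOR i IN 1..10 RETURN i
```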
Execution of this query took 7.951 s with the old, and 5.779 s with the new API (27 % improvement).
Now raising the number of return values per query from 10 to 1,000:
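For example (again, an illustrative stand-in for the actual benchmark query):

```aql
FOR i IN 1..1000 RETURN i
```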
This took 31.650 s with the old, and 28.504 s with the new API (10 % improvement).
So far, all query results contained 1,000 or fewer values. In this case the server can send the whole query result in a single response, so there were only as many calls to the cursor API as there were queries. Even though the ArangoShell called the cursor API, the cursor existed only temporarily on the server and vanished as soon as the server had sent its response.
Now let’s run queries that each return more than 1,000 values. The first call to the cursor API will then return only the first 1,000 results and additionally establish a server-side cursor so the client can fetch the remaining results. This means that each client query triggers multiple HTTP requests.
The following run issues 100,000 calls to the cursor API (10,000 queries times 10 batches per query):
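A query of this shape would produce 10,000 values per execution and thus 10 batches of 1,000 each (a hypothetical example):

```aql
FOR i IN 1..10000 RETURN i
```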
This took 307.108 s with the old API, in contrast to 232.322 s with the new API (24 % improvement).
The next queries I tested were collection-based. They returned data from a collection named
docs, which contained 10,000 documents.
The first query returned only a single (random) document from the collection per call.
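A sketch of such a query, using SORT RAND() to pick a random document (not necessarily the query used in the benchmark):

```aql
FOR doc IN docs SORT RAND() LIMIT 1 RETURN doc
```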
This took 8.689 s with the old API and 6.245 s with the new API (28 % improvement).
The next query returned all the documents from the collection. The query was executed only 1,000 times because the result sets were already quite big. The combined size of all result sets was 10,000,000 documents (1,000 queries × 10,000 documents each).
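Presumably something like the following (illustrative):

```aql
FOR doc IN docs RETURN doc
```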
This took 453.736 s with the old, and 197.543 s with the new API (56 % improvement).
The final query returned all document keys from the collection. The combined size of all result sets was 10,000,000 values (1,000 queries × 10,000 keys each):
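For example (an illustrative stand-in):

```aql
FOR doc IN docs RETURN doc._key
```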
With the old API, this took 529.765 s, and with the new API it took 348.243 s (34 % improvement).
The new cursor API was faster than its old counterpart for all queries tested here. Total execution time as measured by the ArangoShell (representative of any other client program sending queries to ArangoDB) was consistently lower than with the old API implementation.
The measured improvements varied: for the queries tested, they ranged from 10 % to more than 50 %.
How much gain can be achieved in reality obviously depends on the type of query executed. There will also be queries that do not benefit from the new API implementation. For example, queries that do not return any results will not benefit much. This is because most of the optimizations done affect the buffering and the data transport internals of the cursor API. Furthermore, queries that run for a very long time but return only small amounts of data may not benefit considerably for the same reason. However, there should not be any queries which are negatively affected by the change.
All in all, this looks quite promising, especially as the change comes for free for client applications. Client programs do not need to be adjusted to reap the benefits, because all that changed are the internals of the cursor API. Its public REST interface remains unchanged.
The changes are included in the
devel branch and can be tested there.