ArangoDB version 1.1 was released today. Builds for major distributions can be found on the downloads page. The new version offers several improvements, namely:
Estimated reading time: 4 minutes
Clients normally send individual operations to ArangoDB in individual HTTP requests. This is straightforward and simple, but has the disadvantage that the network overhead can be significant if many small requests are issued in a row.
To mitigate this problem, ArangoDB 1.1 offers a batch request API that clients can use to send multiple operations in one batch to ArangoDB. This method is especially useful when the client has to send many HTTP requests with a small body/payload and the individual request results do not depend on each other.
Estimated reading time: 5 minutes
ArangoDB 1.1 will come with a new API for batch requests. This batch request API allows clients to send multiple requests to the ArangoDB server inside one multipart HTTP request. The server will then decompose the multipart request into the individual parts and process them as if they were sent individually. The communication layer can sustain up to 800,000 requests/second – but absolute numbers strongly depend on the number of cores, the type of the requests, network connections and other factors. More important are the relative numbers: depending on your use case you can reduce…
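To make the multipart mechanism more concrete, here is a minimal sketch of such a batch request in Python. The part layout (Content-Type: application/x-arango-batchpart, one embedded HTTP request per part, POST to /_api/batch) follows ArangoDB's batch API documentation; the server URL, collection name and documents are assumptions for illustration only.

```python
# Sketch of an ArangoDB batch request: several operations in one HTTP request.
import requests

BOUNDARY = "SomeBoundaryValue"

def make_part(request_line: str, body: str = "") -> str:
    """Wrap a single embedded HTTP request into one multipart part."""
    part = (
        f"--{BOUNDARY}\r\n"
        "Content-Type: application/x-arango-batchpart\r\n"
        "\r\n"
        f"{request_line}\r\n"
        "\r\n"
    )
    if body:
        part += body + "\r\n"
    return part

# Three independent operations sent as one multipart HTTP request.
parts = [
    make_part("GET /_api/version HTTP/1.1"),
    make_part("POST /_api/document?collection=test HTTP/1.1", '{"value": 1}'),
    make_part("POST /_api/document?collection=test HTTP/1.1", '{"value": 2}'),
]
payload = "".join(parts) + f"--{BOUNDARY}--\r\n"

response = requests.post(
    "http://localhost:8529/_api/batch",   # assumption: local default server
    data=payload,
    headers={"Content-Type": f"multipart/form-data; boundary={BOUNDARY}"},
)
# The response body is itself a multipart document with one result per part.
print(response.status_code)
print(response.text)
```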
Estimated reading time: 5 minutes
As promised in one of the previous posts, here are some performance results that show the effect of different journal sizes for insert, update, delete, and get operations in ArangoDB.
Why journal size could matter
The journal file size determines how large a single datafile in ArangoDB is. The smaller that parameter is, the more datafiles need to be created, initially prefilled, closed, compacted, etc. These operations have some overhead per file, and they occur more often when more datafiles are used.
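As a rough illustration of how such a test could be set up, the sketch below creates collections with different journal sizes via the REST API. The per-collection "journalSize" attribute (in bytes) and the server-wide default (--database.maximal-journal-size) are taken from the MMFiles-era documentation as I recall it; collection names and sizes are made up, so treat this as a sketch rather than the benchmark code used for the measurements.

```python
# Create collections with different journal (datafile) sizes for benchmarking.
import requests

ARANGO = "http://localhost:8529"   # assumption: local default server

def create_collection(name: str, journal_size: int) -> None:
    """Create a collection with an explicit journal/datafile size in bytes."""
    r = requests.post(
        f"{ARANGO}/_api/collection",
        json={"name": name, "journalSize": journal_size},
    )
    r.raise_for_status()

# e.g. run the same insert benchmark against 4 MB, 16 MB and 32 MB journals
for size_mb in (4, 16, 32):
    create_collection(f"bench_{size_mb}mb", size_mb * 1024 * 1024)
```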
Estimated reading time: 5 minutes
In the last couple of posts, we have been looking at ArangoDB’s insert performance when using individual document insert, delete, and update operations. This time we’ll be looking at batched inserts. To have some reference, we’ll compare the results of ArangoDB to what can be achieved with CouchDB and MongoDB.
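For reference, a batched insert against ArangoDB can be done through the bulk import endpoint. The following is a minimal sketch assuming a local server and a hypothetical collection named "bulk"; the parameters follow the documented /_api/import API, but check the documentation for your version.

```python
# Send many documents to ArangoDB in a single bulk import request.
import json
import requests

ARANGO = "http://localhost:8529"   # assumption: local default server

def bulk_insert(collection: str, docs: list) -> dict:
    """Insert a list of documents with one HTTP request via /_api/import."""
    r = requests.post(
        f"{ARANGO}/_api/import",
        params={"collection": collection, "type": "list", "createCollection": "true"},
        data=json.dumps(docs),
    )
    r.raise_for_status()
    return r.json()   # reports e.g. how many documents were created

docs = [{"value": i} for i in range(10000)]
print(bulk_insert("bulk", docs))
```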
Estimated reading time: 1 minute
To easily conduct bulk insert benchmarks with different NoSQL databases, we wrote a small benchmark tool in PHP. The tool can be used to measure the time it takes to bulk upload data into MongoDB, CouchDB, and ArangoDB using the databases’ bulk documents APIs. The tool can also measure datafile sizes after the bulk load. The tool will upload documents to the databases in chunks, without concurrency (remember, this is PHP). It will report the total time needed, plus the amount of time needed for the database operations only (some of the total time might be spent in data generation etc., this…
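The actual tool is written in PHP; the sketch below only restates the measurement idea in Python: generate documents, upload them in fixed-size chunks through a bulk API, and time the database calls separately from the total run. upload_chunk() is a placeholder for whatever bulk call the target database offers.

```python
# Chunked bulk-upload benchmark skeleton: total time vs. database-only time.
import time

CHUNK_SIZE = 1000

def upload_chunk(docs):
    """Placeholder for one bulk insert call (e.g. ArangoDB /_api/import,
    CouchDB _bulk_docs, MongoDB insert with an array of documents)."""
    raise NotImplementedError

def run_benchmark(num_docs: int) -> None:
    total_start = time.time()
    db_time = 0.0
    chunk = []
    for i in range(num_docs):
        chunk.append({"value": i})          # data generation (not DB time)
        if len(chunk) == CHUNK_SIZE:
            t = time.time()
            upload_chunk(chunk)             # only this part counts as DB time
            db_time += time.time() - t
            chunk = []
    if chunk:
        t = time.time()
        upload_chunk(chunk)
        db_time += time.time() - t
    total = time.time() - total_start
    print(f"total: {total:.2f}s, database only: {db_time:.2f}s")
```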
Estimated reading time: 2 minutes
In a comment to the last post, there was a request to conduct some benchmarks with a mixed workload that does not test insert/delete/update/get operations in isolation, but in combination.
Estimated reading time: 5 minutes
A side-effect of measuring the impact of different journal sizes was that we generated some performance test results for CouchDB, too. They weren’t included in the previous post because it was about journal sizes in ArangoDB, but now we think it’s time to share them.
Test setup
The test setup and server specification is the one described in the previous post. In fact, this is the same test but now also including data for CouchDB.
Estimated reading time: 7 minutes
A while ago we wrote a blog article that explained how ArangoDB uses disk space. That article compared the disk usage of ArangoDB, CouchDB, and MongoDB for loading some particular datasets. In this post, we’ll show in more detail the disk usage of ArangoDB for insert, update, and delete operations. We’ll also compare it to CouchDB for reference.
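This is not the measurement code used for the article, just one simple way such disk usage could be tracked: sum the file sizes under the database's data directory before and after a batch of operations. The data directory path is an assumption and depends on the installation.

```python
# Measure how much the on-disk size of the data directory grows.
import os

def dir_size(path: str) -> int:
    """Total size in bytes of all files below path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

DATA_DIR = "/var/lib/arangodb"   # assumption: default data directory

before = dir_size(DATA_DIR)
# ... run inserts / updates / deletes here ...
after = dir_size(DATA_DIR)
print(f"disk usage grew by {(after - before) / 1024 / 1024:.1f} MB")
```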
Estimated reading time: 7 minutes
In the previous post we published some performance results for ArangoDB’s HTTP and networking layer in comparison to that of some popular web servers. We did that benchmark to assess the general performance (and overhead) of the network and HTTP layer in ArangoDB.
Using ArangoDB as an application server
While HTTP is a good and (relatively) portable mechanism for shipping data between clients and servers, it is only a transport protocol. People will likely be using ArangoDB not only because it supports HTTP, but primarily because it is a database and an application server. In this post, we’ll…