ArangoDB v3.3.20 Release Notes

Release Date: 2018-11-28
    • ⬆️ upgraded ArangoDB Starter version to 0.13.9

    • ➕ Added RocksDB option --rocksdb.total-write-buffer-size to limit total memory usage across all RocksDB in-memory write buffers

    This option controls the total amount of data to build up in all in-memory buffers (backed by log files). Together with the block cache size configuration option, it can be used to limit memory usage. If set to 0, memory usage is not limited; this is the default setting in 3.3. The default may be adjusted in future versions of ArangoDB.

    If set to a value greater than 0, this will cap the memory usage for write buffers, but may have an effect on write performance.
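    As an illustration only (the value below is an arbitrary example, not a recommendation), the limit could be set at startup like this:

        # cap the total size of all RocksDB in-memory write buffers at 1 GiB (example value)
        arangod --rocksdb.total-write-buffer-size 1073741824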

    • ➕ Added RocksDB configuration option --rocksdb.enforce-block-cache-size-limit

    Controls whether the maximum size of the RocksDB block cache is strictly enforced. The option can be set to limit the memory usage of the block cache to at most the specified size: if inserting a data block into the cache would exceed the cache's capacity, the data block is not inserted. If the flag is not set, a data block may still get inserted into the cache and be evicted later, so the cache may temporarily grow beyond its capacity limit.
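    For illustration, a startup invocation could combine a block cache limit with strict enforcement as sketched below (the cache size is an example value, not a recommendation):

        arangod \
          --rocksdb.block-cache-size 536870912 \
          --rocksdb.enforce-block-cache-size-limit true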

    • Export version and storage engine in cluster health

    • ⏪ Potential fix for issue #7407: arangorestore very slow when converting from MMFiles to RocksDB

    • ⚡️ Updated joi library (Web UI), improved Foxx mount path validation

    • 🛠 fix internal issue #2786: improved confirmation dialog when clicking the Truncate button in the Web UI

    • 🛠 fix for supervision, which had started failing servers based on the old transient store

    • 🛠 fixed Foxx queues not retrying jobs with infinite maxFailures

    • 🛠 Fixed a race condition in the coordinator. It could happen in rare cases, and only with maintainer mode enabled, when the creation of a collection was in progress and a deletion was forced at the same time.

    • 🐧 disabled the startup warning for Linux kernel variable vm.overcommit_memory settings values of 0 or 1

    Setting overcommit_memory to 0 or 1 (0 is the Linux kernel's default) effectively fixes two memory-allocation related issues with the default memory allocator used in ArangoDB release builds on 64-bit Linux: an increasing number of memory mappings for the arangod process (which may lead to an out-of-memory situation if the kernel's maximum number of mappings threshold is hit), and an increasing amount of memory that the kernel counts as "committed". These issues remain when running with an overcommit_memory value of 2, so that setting is now discouraged.

    With an overcommit_memory setting of 0 or 1, an arangod process may either be killed by the kernel's OOM killer or die with a segfault when accessing memory it allocated earlier but that the kernel cannot provide later on. This is still more acceptable than the kernel refusing to provide any more memory to the process while physical memory is still available, which could occur with an overcommit_memory setting of 2 after the arangod process had performed many allocations.

    In summary, the recommendation for the overcommit_memory setting is now to use 0 or 1 (0 is the kernel default) and not 2.
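    A minimal sketch of how the setting can be inspected and changed on a typical Linux system (the persistence mechanism varies by distribution):

        # check the current value (0 is the kernel default)
        sysctl vm.overcommit_memory

        # set the value for the running kernel
        sudo sysctl -w vm.overcommit_memory=0

        # to persist across reboots, add to /etc/sysctl.conf (or a file under /etc/sysctl.d/):
        #   vm.overcommit_memory = 0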

    • ⏱ force connection timeout to be 7 seconds to allow libcurl time to retry lost DNS queries.

    • increase maximum number of collections/shards in an AQL query from 256 to 2048

    • don't rely on the _modules collection being present and usable during arangod startup

    • ⚡️ optimized the Web UI's routing, which previously could lead to unwanted events.

    • 🛠 fixed some graph data parsing issues in the Web UI, e.g. cleaning up duplicate edges inside the graph viewer.

    • in a cluster environment, the arangod process now exits if incorrect credentials are used during startup.

    • 🛠 Fixed an AQL bug where the optimize-traversals rule was falsely applied to extensions with inline expressions and thereby ignored them

    • 🛠 fixed side effects of sorting larger arrays (>= 16 members) of constant literal values in AQL when the array was not used only for IN-value filtering but also later in the query. The array values were sorted so the IN-value lookup could use a binary search instead of a linear search, but this did not take into account that the array could also be used elsewhere in the query, e.g. as a return value. The fix creates a copy of the array and sorts the copy, leaving the original array untouched (see the illustrative query below).
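    A minimal, hypothetical query shape that exercises this path (collection and attribute names are made up): the constant array feeds an IN filter and is also returned, so its original order must stay intact:

        LET values = [9, 3, 27, 1, 81, 7, 5, 11, 2, 64, 13, 17, 4, 99, 6, 8]   /* >= 16 constant members */
        FOR doc IN docs
          FILTER doc.value IN values            /* lookup may sort a copy for binary search */
          RETURN { doc: doc, original: values } /* the unsorted original array is returned here */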

    • 🛠 fixed a bug where cluster indexes were usable for queries while still being built on DB-Servers

    • 🛠 fix move leader shard: wait until all but the old leader are in sync. This fixes some unstable tests.

    • cluster health features more elaborate agent records