ArangoDB v3.4.5 Release Notes

Release Date: 2019-03-27
    • 🐛 fixed a shutdown issue when the server was shut down while there were active Pregel jobs executing

    • 🐛 fixed internal issue #3815: fixed the removal of connected edges when removing a vertex graph node in a SmartGraph environment.

    • ➕ added AQL functions CRC32 and FNV64 for hashing data
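    The two hash functions can be used in any AQL expression; a minimal sketch (the input string is arbitrary; both functions return the checksum in a hexadecimal string representation):

      RETURN { crc32: CRC32("foobar"), fnv64: FNV64("foobar") }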

    • 🐛 internal issue #2276: fixed the sorting of the databases in the database selection dropdown in the web ui. The sort order differed based on whether authentication was enabled or disabled.

    • 🐛 fixed internal issue #3789: restricted the allowed query names for user-defined custom queries within the web ui.

    • 🐛 fixed internal issue #3546: improved the shards view in the web ui if there is only one shard to display.

    • 🐛 fixed a display issue when editing a graph within the web ui

    • 🐛 fixed internal issue #3787: let the user know about conflicting attributes in AQL queries if that information is available.

    • 🐛 fixed issue #8294: wrong equals behavior on arrays with ArangoSearch

    • 🐛 fixed internal issue #528: ArangoSearch range queries sometimes did not work correctly with numeric values

    • 🐛 fixed internal issue #3757: when restarting a follower in active failover mode, try an incremental sync instead of a full resync. Also fixed a case in which a double resync was performed

    • ⬆️ don't check for available ArangoDB upgrades when firing up an arangosh Enterprise Edition build

    • ➕ added startup option --rocksdb.allow-fallocate

    When set to true, this option allows RocksDB to use the fallocate call. If false, fallocate calls are bypassed and no preallocation is done. Preallocation is turned on by default, but can be turned off for operating system versions that are known to have issues with it. This option only has an effect on operating systems that support fallocate.

    • ➕ added startup option --rocksdb.limit-open-files-at-startup

    If set to true, this option limits the number of .sst files RocksDB will inspect at startup, which can reduce the number of IO operations performed at startup.
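    For example, both options could be set together on a system where fallocate is known to be problematic and many .sst files exist (a sketch; the database directory path is a placeholder):

      arangod --rocksdb.allow-fallocate false \
              --rocksdb.limit-open-files-at-startup true \
              /path/to/database-directory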

    • don't run compact() on a collection after a truncate() was done in the same transaction

    Running compact() in the same transaction will only increase the data size on disk, because RocksDB cannot physically remove any documents due to the snapshot that is taken at transaction start.

    This change also exposes db.<collection>.compact() in the arangosh, in order to manually run a compaction on the data range of a collection should it be needed for maintenance.

    • 🚚 don't attempt to remove non-existent WAL files, because such attempts trigger unnecessary error log messages in the RocksDB library

    • ⚡️ updated arangosync to 0.6.3

    • ➕ added --log.file-mode to specify the file mode of newly created log files

    • ➕ added --log.file-group to specify the group of newly created log files
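    Both options are typically combined with file-based log output; a sketch, where the log file path, mode, and group name are placeholders:

      arangod --log.output file:///var/log/arangodb3/arangod.log \
              --log.file-mode 0644 \
              --log.file-group arangodb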

    • 🐛 fixed some escaping issues within the web ui.

    • 🐛 fixed issue #8359: How to update a document with unique index (constraints)?

    • 🔀 when restarting a follower in active failover mode, try an incremental sync instead of a full resync

    • ➕ added a PRUNE condition to AQL traversals (internal issue #3068). This allows aborting the search of unnecessary branches early within a traversal. PRUNE is only allowed in the traversal statement, and only between the graph definition and the options of the traversal, e.g.:

      FOR v, e, p IN 1..3 OUTBOUND @source GRAPH "myGraph"
        PRUNE v.value == "bar"
        OPTIONS {} /* These options remain optional */
        RETURN v

    For more details, refer to the documentation chapter.

    • ➕ added option --console.history to arangosh for controlling whether the command-line history should be loaded from and persisted in a file.

    The default value for this option is true. Setting it to false will make arangosh not load any command-line history from the history file, and not store the current session's history when the shell is exited. The command-line history will then only be available in the current shell session.
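    For example, a session that neither loads nor persists any command-line history could be started like this (a sketch):

      arangosh --console.history false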

    • display the server role when connecting arangosh to a server (e.g. SINGLE, COORDINATOR)

    • ➕ added replication applier state figures totalDocuments and totalRemovals to access the number of document insert/replace operations and the number of document removal operations separately. Also added the figures totalApplyTime and totalFetchTime for determining the total time the replication spent on applying changes or fetching new data from the master, as well as the figures averageApplyTime and averageFetchTime, which show the average time spent on applying a batch or fetching data from the master, respectively.

    • 🐛 fixed a race condition in which the value of the informational replication applier figure ticksBehind could underflow and thus show an extremely large number of ticks.

    • always clear all ongoing replication transactions on the slave if the slave discovers the data it has asked for is not present anymore on the master and the requireFromPresent value for the applier is set to false.

    In this case aborting the ongoing transactions on the slave is necessary because they may have held exclusive locks on collections, which may otherwise not be released.

    • ➕ added option --rocksdb.wal-archive-size-limit for controlling the maximum total size (in bytes) of archived WAL files. The default is 0 (meaning: unlimited).

    When the value is set to a size bigger than 0, the RocksDB storage engine will force the removal of archived WAL files if the total size of the archive exceeds the configured size. The option can be used to get rid of archived WAL files in a disk-size-constrained environment.

    Note that archived WAL files are normally deleted automatically after a short while when there is no follower attached that may read from the archive. However, when followers are attached that may read from the archive, WAL files normally remain in the archive until their contents have been streamed to the followers. If there are slow followers that cannot catch up, this will cause the WAL file archive to grow over time. The option --rocksdb.wal-archive-size-limit can now be used to force the deletion of WAL files from the archive even if there are followers attached that may want to read the archive. If the option is set and a leader deletes files from the archive that followers still want to read, this will abort the replication on the followers. The followers can, however, restart the replication by doing a resync.
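    As an illustration, the archive could be capped at 1 GiB, specified in bytes (a sketch; the actual limit should be chosen based on available disk space and follower lag tolerance):

      arangod --rocksdb.wal-archive-size-limit 1073741824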

    • agents need to be able to overwrite a compacted state with the same _key

    • in case of a resigned leader, set isReady=false in clusterInventory

    • 🔀 abort a RemoveFollower job if there are not enough in-sync followers or on leader failure

    • 🐛 fix shrinkCluster for satelliteCollections

    • 🐛 fix a crash in agency supervision when leadership is lost

    • 👷 speed up supervision in the agency for large numbers of jobs

    • 🐛 fix log spamming after leader resignation in the agency

    • 👉 make AddFollower less aggressive

    • 🐛 fix cases where invalid JSON could be generated in agents' store dumps

    • the coordinator route for full agency dumps now contains compactions and timestamps

    • 🐎 lots of agency performance improvements, mostly by avoiding copying

    • 🚧 added a priority queue for maintenance jobs

    • 👷 do not wait for replication after each job execution in Supervision

    • 🐛 fix a blockage in MoveShard if a failover happens during the operation

    • ⏱ check the health of servers in Current before scheduling RemoveFollower jobs

    • wait for statistics collections to be created before running resilience tests

    • 🐛 fix TTL values in the agency when a key is overwritten with no TTL