ArangoDB v3.4.4 Release Notes

Release Date: 2019-03-12
    • added missing test for success in failed leader: this could lead to a crash

    • follow-up to the fix for JWT authentication in arangosh (#7530): also fix reconnect

    • now also syncing _jobs and _queues collections in active failover mode

    • fixed overflow in Windows NowNanos in RocksDB

    • fixed issue #8165: AQL optimizer does not pick up multiple geo indexes

    • when creating a new database with an initial user, set the database permission for this user as specified in the documentation

    • Supervision fix: aborting a MoveShard job no longer leaves a lock behind

    • Supervision fix: aborting a MoveShard (leader) job moves forward once the point of no return has been reached

    • Supervision fix: aborting a CleanOutServer job no longer leaves the server in ToBeCleanedServers

    • Supervision fix: moving a shard with data stopped too early due to wrong usage of a compare function

    • Supervision fix: AddFollower only counts good followers, fixing a situation in which a FailedLeader job could not find a new working follower

    • Supervision fix: FailedLeader now also considers temporarily BAD servers as replacement followers and does not block servers which currently receive a new shard

    • Supervision fix: Servers in ToBeCleanedServers are no longer considered as replacement servers

    • Maintenance fix: added precondition of unchanged Plan in phase2

    • ๐Ÿ‘ Allow MoveShard from leader to a follower, thus swapping the two

    • Supervision fix: various fixes for SatelliteCollections

    • Add coordinator route for agency dump

    • โšก๏ธ speed up replication of transactions containing updates of existing documents.

    The replication protocol does not provide any information on whether a document was inserted on the master or updated/replaced. Therefore the slave will always try an insert first, and move to a replace if the insert fails with "unique constraint violation". This case is however very costly in a bigger transaction, as the rollback of the insert will force the underlying RocksDB write batch to be entirely rewritten. To circumvent rewriting entire write batches, we now do a quick check if the target document already exists, and then branch to either insert or replace internally.
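    The check-then-branch logic described above can be sketched roughly as follows. This is a minimal Python simulation, not ArangoDB's actual C++ implementation: the `apply_replicated` helper and the dict standing in for a collection (and its underlying RocksDB write batch) are illustrative assumptions.

    ```python
    # Sketch of the slave-side apply step: instead of insert-then-rollback
    # on "unique constraint violation", probe for the key first and branch.

    def apply_replicated(collection, doc):
        """Apply a replicated document to `collection` (a dict keyed by _key).

        The replication protocol does not say whether `doc` was inserted or
        updated/replaced on the master, so we do a quick existence check and
        branch to insert or replace internally, avoiding the costly rollback
        of a failed insert inside a big write batch.
        """
        key = doc["_key"]
        if key in collection:           # quick check: target already exists?
            collection[key] = doc       # replace the existing document
            return "replace"
        collection[key] = doc           # plain insert, cannot conflict
        return "insert"
    ```

    The point of the optimization is that the branch is decided before any write is attempted, so no write ever has to be rolled back and the write batch is never rewritten.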