All Versions
266
Latest Version
Avg Release Cycle
22 days
Latest Release
1728 days ago

Changelog History
Page 2

  • v3.4.5 Changes

    March 27, 2019
    • fixed a shutdown issue when the server was shut down while there were active Pregel jobs executing

    • fixed internal issue #3815: fixed the removal of connected edges when removing a vertex in a SmartGraph environment.

    • added AQL functions CRC32 and FNV64 for hashing data
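
    A minimal usage sketch for the new hashing functions (the input string is illustrative):

      RETURN { crc: CRC32("foobar"), fnv: FNV64("foobar") }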

    • internal issue #2276: fixed the sorting of the databases in the database selection dropdown in the web UI. The sort order differed based on whether authentication was enabled or disabled.

    • fixed internal issue #3789: restricted the allowed query names for user-defined custom queries within the web UI.

    • fixed internal issue #3546: improved the shards view in the web UI if there is only one shard to display.

    • fixed a display issue when editing a graph within the web UI

    • fixed internal issue #3787: let the user know about conflicting attributes in AQL queries if that information is available.

    • fixed issue #8294: wrong equals behavior on arrays with ArangoSearch

    • fixed internal issue #528: ArangoSearch range queries sometimes did not work correctly with numeric values

    • fixed internal issue #3757: when restarting a follower in active failover mode, try an incremental sync instead of a full resync. Also fixed the case where a double resync was made

    • don't check for the presence of available ArangoDB upgrades when firing up an arangosh Enterprise Edition build

    • added startup option --rocksdb.allow-fallocate

    When set to true, allows RocksDB to use the fallocate call. If false, fallocate calls are bypassed and no preallocation is done. Preallocation is turned on by default, but can be turned off for operating system versions that are known to have issues with it. This option only has an effect on operating systems that support fallocate.

    • added startup option --rocksdb.limit-open-files-at-startup

    If set to true, this will limit the number of .sst files RocksDB will inspect at startup, which can reduce the number of IO operations performed at start.
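
    For illustration, both new RocksDB options are passed like any other startup option (the values shown are assumptions, not recommendations):

      arangod --rocksdb.allow-fallocate false --rocksdb.limit-open-files-at-startup true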

    • don't run compact() on a collection after a truncate() was done in the same transaction

    Running compact() in the same transaction will only increase the data size on disk, because RocksDB cannot physically remove any documents due to the snapshot that is taken at transaction start.

    This change also exposes db.<collection>.compact() in the arangosh, in order to manually run a compaction on the data range of a collection should it be needed for maintenance.
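
    A hypothetical arangosh session illustrating the manual compaction call (the collection name is made up):

      db._create("mycollection");
      db.mycollection.truncate();   // truncate outside of a larger transaction
      db.mycollection.compact();    // manually compact the collection's data range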

    • don't attempt to remove non-existing WAL files, because such attempts will trigger unnecessary error log messages in the RocksDB library

    • โšก๏ธ updated arangosync to 0.6.3

    • โž• added --log.file-mode to specify a file mode of newly created log files

    • โž• added --log.file-group to specify the group of newly created log files
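
    An illustrative invocation (the mode and group values are assumptions, not recommendations):

      arangod --log.file-mode 0640 --log.file-group arangodb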

    • fixed some escaping issues within the web UI.

    • fixed issue #8359: How to update a document with unique index (constraints)?

    • when restarting a follower in active failover mode, try an incremental sync instead of a full resync

    • add "PRUNE <condition>" to AQL traversals (internal issue #3068). This allows early pruning of unnecessary branches within a traversal. PRUNE is only allowed in the traversal statement, between the graph definition and the options of the traversal, e.g.:

      FOR v, e, p IN 1..3 OUTBOUND @source GRAPH "myGraph"
        PRUNE v.value == "bar"
        OPTIONS {} /* These options remain optional */
        RETURN v

    For more details, refer to the documentation chapter.

    • added option --console.history to arangosh for controlling whether the command-line history should be loaded from and persisted in a file.

    The default value for this option is true. Setting it to false will make arangosh not load any command-line history from the history file, and not store the current session's history when the shell is exited. The command-line history will then only be available in the current shell session.

    • display the server role when connecting arangosh to a server (e.g. SINGLE, COORDINATOR)

    • added replication applier state figures totalDocuments and totalRemovals to access the number of document insert/replace operations and the number of document removal operations separately. Also added figures totalApplyTime and totalFetchTime for determining the total time the replication spent applying changes or fetching new data from the master, as well as averageApplyTime and averageFetchTime, which show the average time spent applying a batch or fetching data from the master, respectively.

    • fixed a race condition in which the value of the informational replication applier figure ticksBehind could underflow and thus show a very large number of ticks.

    • always clear all ongoing replication transactions on the slave if the slave discovers the data it has asked for is not present anymore on the master and the requireFromPresent value for the applier is set to false.

    In this case aborting the ongoing transactions on the slave is necessary because they may have held exclusive locks on collections, which may otherwise not be released.

    • added option --rocksdb.wal-archive-size-limit for controlling the maximum total size (in bytes) of archived WAL files. The default is 0 (meaning: unlimited).

    When setting the value to a size bigger than 0, the RocksDB storage engine will force a removal of archived WAL files if the total size of the archive exceeds the configured size. The option can be used to get rid of archived WAL files in a disk size-constrained environment.

    Note that archived WAL files are normally deleted automatically after a short while when there is no follower attached that may read from the archive. However, when there are followers attached that may read from the archive, WAL files normally remain in the archive until their contents have been streamed to the followers. If there are slow followers that cannot catch up, this will cause the WAL file archive to grow over time.

    The option --rocksdb.wal-archive-size-limit can now be used to force a deletion of WAL files from the archive even if there are followers attached that may want to read the archive. If the option is set and a leader deletes files from the archive that followers want to read, this will abort the replication on the followers. Followers can however restart the replication by doing a resync.
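
    An illustrative setting capping the archive at 1 GiB (the value is an assumption, not a recommendation):

      arangod --rocksdb.wal-archive-size-limit 1073741824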

    • agents need to be able to overwrite a compacted state with same _key

    • in case of resigned leader, set isReady=false in clusterInventory

    • abort RemoveFollower jobs if there are not enough in-sync followers or on leader failure

    • fix shrinkCluster for SatelliteCollections

    • fix crash in agency supervision when leadership is lost

    • speed up supervision in agency for large numbers of jobs

    • fix log spamming after leader resignation in agency

    • make AddFollower less aggressive

    • fix cases where invalid JSON could be generated in agents' store dumps

    • coordinator route for full agency dumps contains compactions and time stamps

    • lots of agency performance improvements, mostly avoiding copying

    • priority queue for maintenance jobs

    • do not wait for replication after each job execution in Supervision

    • fix a blockage in MoveShard if a failover happens during the operation

    • check health of servers in Current before scheduling RemoveFollower jobs

    • wait for statistics collections to be created before running resilience tests

    • fix TTL values in agency when a key is overwritten with no TTL

  • v3.4.4 Changes

    March 12, 2019
    • added missing test for success in FailedLeader: this could lead to a crash

    • follow-up to fix JWT authentication in arangosh (#7530): also fix reconnect

    • now also syncing the _jobs and _queues collections in active failover mode

    • fixed overflow in Windows NowNanos in RocksDB

    • fixed issue #8165: AQL optimizer does not pick up multiple geo index

    • when creating a new database with an initial user, set the database permission for this user as specified in the documentation

    • Supervision fix: abort MoveShard job does not leave a lock behind

    • Supervision fix: abort MoveShard (leader) job moves forwards when the point of no return has been reached

    • Supervision fix: abort CleanOutServer job does not leave the server in ToBeCleanedServers

    • Supervision fix: move shard with data was stopped too early due to wrong usage of a compare function

    • Supervision fix: AddFollower only counts good followers, fixing a situation where a FailedLeader job could not find a new working follower

    • Supervision fix: FailedLeader now also considers temporarily BAD servers as replacement followers and does not block servers which currently receive a new shard

    • Supervision fix: servers in ToBeCleanedServers are no longer considered as replacement servers

    • Maintenance fix: added precondition of unchanged Plan in phase2

    • Allow MoveShard from a leader to a follower, thus swapping the two

    • Supervision fix: SatelliteCollections, various fixes

    • Add coordinator route for agency dump

    • speed up replication of transactions containing updates of existing documents.

    The replication protocol does not provide any information on whether a document was inserted on the master or updated/replaced. Therefore the slave will always try an insert first, and move to a replace if the insert fails with "unique constraint violation". This case is however very costly in a bigger transaction, as the rollback of the insert will force the underlying RocksDB write batch to be entirely rewritten. To circumvent rewriting entire write batches, we now do a quick check if the target document already exists, and then branch to either insert or replace internally.

  • v3.4.3 Changes

    February 19, 2019
    • fixed JS AQL query objects with empty query strings not being recognized as AQL queries

    • fixed issue #8137: NULL input field generates U_ILLEGAL_ARGUMENT_ERROR

    • fixed issue #8108: AQL variable - not working query since upgrade to 3.4 release

    • fixed possible segfault when using COLLECT with a LIMIT and an offset

    • fixed COLLECT forgetting top-level variables after 1000 rows

    • fix undefined behavior when calling user-defined AQL functions from an AQL query via a streaming cursor

    • fix broken validation of tick range in arangodump

    • updated bundled curl library to version 7.63.0

    • added "peakMemoryUsage" to query result figures, showing the peak memory usage of the executed query. In a cluster, the value contains the peak memory usage across all shards, but it is not summed up across shards.

    • data masking: better documentation, fixed the default phone number, changed the default range to [-100, 100] for the integer masking function

    • fix supervision's failed server handling to transactionally create all failed leader/follower jobs together

  • v3.4.2 Changes

    January 24, 2019
    • added configurable masking of dumped data via the arangodump tool to obfuscate exported sensitive data

    • upgraded to OpenSSL 1.1.0j

    • fixed an issue with AQL query IN index lookup conditions being converted into empty arrays when they were shared between multiple nodes of a lookup condition that used an IN array lookup in an OR that was multiplied due to DNF transformations

    This issue affected queries such as the following:

      FILTER (... && ...) || doc.indexAttribute IN non-empty-array
    
    • โฌ†๏ธ upgraded arangodb starter version to 0.14.0

    • โฌ†๏ธ upgraded arangosync version to 0.6.2

    • ๐Ÿ›  fixed an issue where a crashed coordinator can lead to some Foxx queue jobs erroneously either left hanging or being restarted

    • ๐Ÿ›  fix issue #7903: Regression on ISO8601 string compatibility in AQL

    millisecond parts of AQL date values were limited to up to 3 digits. Now the length of the millisecond part is unrestricted, but the millisecond precision is still limited to up to 3 digits.

    • ๐Ÿ›  fix issue #7900: Bind values of null are not replaced by empty string anymore, when toggling between json and table view in the web-ui.

    • ๐Ÿ‘‰ Use base64url to encode and decode JWT parts.

    • โž• added AQL function CHECK_DOCUMENT for document validity checks
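
    A hypothetical usage sketch (the document literal is made up; CHECK_DOCUMENT returns a boolean indicating validity):

      RETURN CHECK_DOCUMENT({ value: 123 })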

    • โช when detecting parse errors in the JSON input sent to the restore API, now abort with a proper error containing the problem description instead of aborting but hiding there was a problem.

    • ๐Ÿ“œ do not respond with an internal error in case of JSON parse errors detected in incoming HTTP requests

    • โž• added arangorestore option --cleanup-duplicate-attributes to clean up input documents with redundant attribute names

    Importing such documents without the option set will make arangorestore fail with an error, and setting the option will make the restore process clean up the input by using just the first specified value for each redundant attribute.

    • โช the arangorestore options --default-number-of-shards and --default-replication-factor are now deprecated in favor of the much more powerful options --number-of-shards and --replication-factor

    The new options --number-of-shards and --replication-factor allow specifying default values for the number of shards and the replication factor, respectively, for all restored collections. If specified, these default values will be used regardless of whether the number of shards or the replication factor values are already present in the metadata of the dumped collections.

    It is also possible to override the values on a per-collection level by specifying the options multiple times, e.g.

      --number-of-shards 2 --number-of-shards mycollection=3 --number-of-shards test=4
    

    The above will create all collections with 2 shards, except the collection "mycollection" (3 shards) and "test" (4 shards).

    By omitting the default value, it is also possible to use the number of shards/replication factor values from the dump for all collections but the explicitly specified ones, e.g.

      --number-of-shards mycollection=3 --number-of-shards test=4
    

    This will use the number of shards as specified in the dump, except for the collections "mycollection" and "test".

    The --replication-factor option works similarly.

    • validate uniqueness of attribute names in AQL in cases in which it was not done before. When constructing AQL objects via object literals, there was no validation about object attribute names being unique. For example, it was possible to create objects with duplicate attribute names as follows:

      INSERT { a: 1, a: 2 } INTO collection

    This resulted in a document having two "a" attributes, which is obviously undesired. Now, when an attribute is assigned multiple times, only the first assigned value will be used for that attribute in AQL. It is not possible to specify the same attribute multiple times and overwrite the attribute's value by doing so. That means in the above example, the value of "a" will be 1, not 2.

    This changes the behavior for overriding attribute values in AQL compared to previous versions of ArangoDB, as previous versions in some cases allowed duplicate attribute names in objects/documents (which is undesired) and in other cases used the last value assigned to an attribute instead of the first. In order to explicitly override a value in an existing object, use the AQL MERGE function.

    To avoid all these issues, users are encouraged to use unambiguous attribute names in objects/documents in AQL. Outside of AQL, specifying the same attribute multiple times may even result in a parse error, e.g. when sending such data to ArangoDB's HTTP REST API.
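
    A sketch of the explicit-override pattern using MERGE (the collection name is illustrative); MERGE takes values from the rightmost object, so "a" ends up as 2:

      INSERT MERGE({ a: 1 }, { a: 2 }) INTO collection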

    • fixed issue #7834: AQL Query crashes instance

    • Added --server.jwt-secret-keyfile option.

    • Improve single threaded performance by scheduler optimization.

    • Releveling logging in maintenance

  • v3.4.2.1 Changes

    February 01, 2019
    • โฌ†๏ธ upgrade to new velocypack version
  • v3.4.1 Changes

    December 19, 2018
    • fixed issue #7757: Using multiple filters on nested objects produces wrong results

    • fixed issue #7763: Collect after update does not execute updates

    • fixed issue #7586: a running query within the user interface was not shown if the active view was Running Queries or Slow Query History.

    • fixed issue #7749: AQL Query result changed for COLLECT used on empty data/array

    • fixed a rare deadlock situation in replication: if a follower tries to get in sync, in the last steps it requires a lock on the leader. If the follower cancels the lock before the leader has succeeded with locking, we can end up with one thread being deadlocked.

    • fix thread shutdown in _WIN32 builds

    Previous versions used wrong comparison logic to determine the current thread id when shutting down a thread, leading to threads hanging in their destructors on thread shutdown.

    • reverted accidental change to error handling in the geo index

    In previous versions, if invalid geo coordinates were contained in the indexed field of a document, the document was simply ignored and not indexed. In 3.4.0, this was accidentally changed to generate an error, which caused the upgrade procedure to break in some cases.

    • fixed TypeError being thrown instead of validation errors when Foxx manifest validation fails

    • make AQL REMOVE operations use less memory with the RocksDB storage engine

    The previous implementation of batch removals read everything to remove into memory first before carrying out the first remove operation. The new version will only read in about 1000 documents at a time and then remove these. Queries such as

      FOR doc IN collection FILTER ... REMOVE doc IN collection
    

    will benefit from this change in terms of memory usage.

    • make --help-all now also show all hidden program options

    Previously, hidden program options were only returned when invoking arangod or a client tool with the cryptic --help-. option. Now --help-all simply returns them as well.

    The program options JSON description returned by --dump-options was also improved as follows:

    • the new boolean attribute "dynamic" indicates whether the option has a dynamic default value, i.e. a value that depends on the target host capabilities or configuration

    • the new boolean attribute "requiresValue" indicates whether a boolean option requires a value of "true" or "false" when specified. If "requiresValue" is false, then the option can be specified without a boolean value following it, and the option will still be set to true, e.g. --server.authentication is identical to --server.authentication true.

    • the new "category" attribute will contain a value of "command" for command-like options, such as --version, --dump-options, --dump-dependencies etc., and "option" for all others.

    • Fixed a bug in synchronous replication initialization where a shard's DB server is rebooted during that initialization
  • v3.4.0 Changes

    December 06, 2018
    • Add license key checking to Enterprise Edition in Docker containers.
  • v3.4.0-rc.5 Changes

    November 29, 2018
    • 0๏ธโƒฃ Persist and check default language (locale) selection. Previously we would not check if the language (--default-language) had changed when the server was restarted. This could cause issues with indexes over text fields, as it will resulted in undefined behavior within RocksDB (potentially missing entries, corruption, etc.). Now if the language is changed, ArangoDB will print out an error message on startup and abort.

    • ๐Ÿ›  fixed issue #7522: FILTER logic totally broke for my query in 3.4-rc4

    • export version and storage engine in _admin/cluster/health for Coordinators and DBServers.

    • ๐Ÿ— restrict the total amount of data to build up in all in-memory RocksDB write buffers by default to a certain fraction of the available physical RAM. This helps restricting memory usage for the arangod process, but may have an effect on the RocksDB storage engine's write performance.

    In ArangoDB 3.3 the governing configuration option --rocksdb.total-write-buffer-size had a default value of 0, which meant that the memory usage was not limited. ArangoDB 3.4 now changes the default value to about 50% of available physical RAM, and 512MiB for setups with less than 4GiB of RAM.

    • 0๏ธโƒฃ lower default value for --cache.size startup option from about 30% of physical RAM to about 25% percent of physical RAM.

    • ๐Ÿ›  fix internal issue #2786: improved confirmation dialog when clicking the truncate button in the web UI

    • โšก๏ธ Updated joi library (web UI), improved Foxx mount path validation

    • ๐Ÿง disable startup warning for Linux kernel variable vm.overcommit_memory settings values of 0 or 1. Effectively overcommit_memory settings value of 0 or 1 fix two memory-allocation related issues with the default memory allocator used in ArangoDB release builds on 64bit Linux. The issues will remain when running with an overcommit_memory settings value of 2, so this is now discouraged. Setting overcommit_memory to 0 or 1 (0 is the Linux kernel's default) fixes issues with increasing numbers of memory mappings for the arangod process (which may lead to an out-of-memory situation if the kernel's maximum number of mappings threshold is hit) and an increasing amount of memory that the kernel counts as "committed". With an overcommit_memory setting of 0 or 1, an arangod process may either be killed by the kernel's OOM killer or will die with a segfault when accessing memory it has allocated before but the kernel could not provide later on. This is still more acceptable than the kernel not providing any more memory to the process when there is still physical memory left, which may have occurred with an overcommit_memory setting of 2 after the arangod process had done lots of allocations.

    In summary, the recommendation for the overcommit_memory setting is now to set it to 0 or 1 (0 is kernel default) and not use 2.
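
    For illustration, the recommended setting can be applied at runtime with sysctl (persisting it, e.g. via /etc/sysctl.conf, is not shown):

      sudo sysctl -w vm.overcommit_memory=0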

    • fixed Foxx complaining about a valid $schema value in manifest.json

    • fix for supervision, which started failing servers using the old transient store

    • fixed a bug where indexes were used in the cluster while still being built on the DB servers

    • fix move leader shard: wait until all but the old leader are in sync. This fixes some unstable tests.

    • cluster health features more elaborate agent records

    • agency's supervision edited for advertised endpoints

  • v3.4.0-rc.4 Changes

    November 04, 2018
    • fixed Foxx queues not retrying jobs with infinite maxFailures

    • increase AQL query string parsing performance for queries with many (100K+) string values contained in the query string

    • increase timeouts for inter-node communication in the cluster

    • fixed undefined behavior in /_api/import when importing a single document went wrong

    • replication bugfixes

    • stop printing "connection class corrupted" in arangosh

    when just starting the arangosh without a connection to a server and running code such as require("internal"), the shell always printed "connection class corrupted", which was somewhat misleading.

    • add separate option --query.slow-streaming-threshold for tracking slow streaming queries with a different timeout value

    • increase maximum number of collections/shards in an AQL query from 256 to 2048

    • don't rely on _modules collection being present and usable for arangod startup

    • โฑ force connection timeout to be 7 seconds to allow libcurl time to retry lost DNS queries.

    • ๐Ÿ›  fixes a routing issue within the web ui after the use of views

    • ๐Ÿ›  fixes some graph data parsing issues in the ui, e.g. cleaning up duplicate edges inside the graph viewer.

    • in a cluster environment, the arangod process now exits if wrong credentials are used during the startup process.

    • โž• added option --rocksdb.total-write-buffer-size to limit total memory usage across all RocksDB in-memory write buffers

    • โš  suppress warnings from statistics background threads such as WARNING caught exception during statistics processing: Expecting Object during version upgrade

  • v3.4.0-rc.3 Changes

    October 23, 2018
    • fixed handling of broken Foxx services

    Installation now also fails when the service encounters an error when executed. Upgrading or replacing with a broken service will still result in the broken service being installed.

    • restored error pages for broken Foxx services

    Services that could not be executed will now show an error page (with helpful information if development mode is enabled) instead of a generic 404 response. Requests to the service that do not prefer HTML (i.e. not a browser window) will receive a JSON formatted 503 error response instead.

    • added support for the force flag when upgrading Foxx services

    Using the force flag when upgrading or replacing a service falls back to installing the service if it does not already exist.

    • The order of JSON object attribute keys in JSON return values will now be "random" in more cases. In JSON, there is no defined order for object attribute keys anyway, so ArangoDB is taking the freedom to return the attribute keys in a non-deterministic, seemingly unordered way.

    • Fixed an AQL bug where the optimize-traversals rule was falsely applied to extensions with inline expressions and thereby ignored them

    • fix side-effects of sorting larger arrays (>= 16 members) of constant literal values in AQL, when the array was used not only for IN-value filtering but also later in the query. The array values were sorted so the IN-value lookup could use a binary search instead of a linear search, but this did not take into account that the array could have been used elsewhere in the query, e.g. as a return value. The fix will create a copy of the array and sort the copy, leaving the original array untouched.

    • disallow empty LDAP password

    • fixes validation of allowed and disallowed Foxx service mount paths within the web UI

    • The single database or single coordinator statistics in a cluster environment within the Web UI sometimes got called way too often. This caused artifacts in the graphs, which is now fixed.

    • An aardvark statistics route could not collect and sum up the statistics of all coordinators if one of them was ahead and had more results than the others

    • The web UI now checks if server statistics are enabled before it sends its first request to the statistics API

    • fix internal issue #486: immediate deletion (right after creation) of a view with a link to one collection and indexed data reported failure but removed the link

    • fix internal issue #480: a link to a collection was not added to a view if it had already been added to another view

    • fix internal issues #407, #445: limit ArangoSearch memory consumption so that it won't cause OOM while indexing large collections

    • upgraded arangodb starter version to 0.13.5

    • removed undocumented db.<view>.toArray() function from ArangoShell

    • prevent creation of collections and views with the same name in cluster setups

    • fixed issue #6770: document update: ignoreRevs parameter ignored

    • added AQL query optimizer rules simplify-conditions and fuse-filters

    • improve inter-server communication performance:

      • move all response processing off Communicator's socket management thread
      • create multiple Communicator objects with ClusterComm, route via round robin
      • adjust Scheduler threads to always be active, and have designated priorities.
    • fix internal issue #2770: the Query Profiling modal dialog in the web UI was slightly malformed.

    • fix internal issue #2035: the web UI now updates its indices view to check whether new indices exist or not.

    • fix internal issue #6808: newly created databases within the web UI did not appear when Internet Explorer 11 was used as the browser.

    • fix internal issue #2957: the web UI was not able to display more than 1000 documents, even when it was set to a higher amount.

    • fix internal issue #2688: the web UI's graph viewer created malformed node labels if a node was expanded multiple times.

    • fix internal issue #2785: the web UI's sort dialog sometimes got rendered, even when it should not.

    • fix internal issue #2764: the waitForSync property of a SatelliteCollection could not be changed via the web UI

    • dynamically manage libcurl's number of open connections to increase performance by reducing the number of socket close and reopen cycles

    • recover short server id from agency after a restart of a cluster node

    this fixes problems with short server ids being set to 0 after a node restart, which then prevented cursor result load-forwarding between multiple coordinators to work properly

    this should fix arangojs#573

    • 0๏ธโƒฃ increased default timeouts in replication

    this decreases the chances of followers not getting in sync with leaders because of replication operations timing out

    • include forward-ported diagnostic options for debugging LDAP connections

    • fixed internal issue #3065: fix variable replacements by the AQL query optimizer in ArangoSearch view search conditions

    The consequence of the missing replacements was that some queries using view search conditions could have failed with error messages such as

    "missing variable #3 (a) for node #7 (EnumerateViewNode) while planning registers"

    • fixed internal issue #1983: the web UI was showing a deletion confirmation multiple times.

    • Restricted usage of views in AQL traversals; they will now throw an error (e.g. "FOR v, e, p IN 1 OUTBOUND @start edgeCollection, view") instead of failing the server.

    • Allow views within the AQL "WITH" statement in a cluster environment. This will now prepare the query for all collections linked within a view (e.g. "WITH view FOR v, e, p IN OUTBOUND 'collectionInView/123' edgeCollection" will now be executed properly and no longer fail with "unregistered collection").

    • Properly check permissions for all collections linked to a view when instantiating an AQL query in cluster environment

    • support installation of ArangoDB into directories with multibyte character filenames on Windows platforms that use a non-UTF8 codepage

    This was supported on other platforms before, but never worked for ArangoDB's Windows version

    • display shard synchronization progress for collections outside of the _system database

    • change memory protection settings for memory given back by the bundled JEMalloc memory allocator. This avoids splitting of existing memory mappings due to changes of the protection settings

    • added missing implementation for DeleteRangeCF in the RocksDB WAL tailing handler

    • fixed agents busy-looping gossip

    • handle missing _frontend collections gracefully

    The _frontend system collection is not required for normal ArangoDB operations, so if it is missing for whatever reason, ensure that normal operations can go on.