Changelog History
v3.4.0-rc.2 Changes
September 30, 2018

- upgraded arangosync version to 0.6.0
- upgraded arangodb starter version to 0.13.3
- fixed issue #6611: properly display JSON properties of user-defined Foxx services configuration within the web UI
- improved shards display in web UI: included arrows to better visualize that collection name sections can be expanded and collapsed
- added nesting support for `aql` template strings
- added support for `undefined` and AQL literals to `aql.literal`
- added `aql.join` function (see the sketch below)
- fixed issue #6583: Agency node segfaults if an authenticated HTTP request is sent to its port
- fixed issue #6601: Context cancelled (never ending query)
- added more AQL query results cache inspection and control functionality
- fixed undefined behavior in AQL query result cache
- the query editor within the web UI now catches HTTP 501 responses properly
- added AQL VERSION function to return the server version as a string
- added startup parameter `--cluster.advertised-endpoints`
- the AQL query optimizer now makes better choices regarding which indexes to use in a query when there are multiple competing indexes and some of them are prefixes of others.
  Previously, the optimizer could prefer indexes that covered fewer attributes, whereas it should pick the indexes that cover more attributes. For example, if there was an index on `["a"]` and another index on `["a", "b"]`, the optimizer may have picked the index on just `["a"]` instead of the index on `["a", "b"]` for queries that used all index attributes but did range queries on them (e.g. `FILTER doc.a == @val1 && doc.b >= @val2`).
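A hedged arangosh sketch of checking which index the optimizer picks for such a query; the collection name "docs" and the bind values are illustrative.

```js
// With indexes on ["a"] and ["a", "b"], explain should now show the wider
// index being used for a range query on both attributes.
const db = require("@arangodb").db;

const docs = db._create("docs");
docs.ensureIndex({ type: "skiplist", fields: ["a"] });
docs.ensureIndex({ type: "skiplist", fields: ["a", "b"] });

const explained = db._createStatement({
  query: "FOR doc IN docs FILTER doc.a == @val1 && doc.b >= @val2 RETURN doc",
  bindVars: { val1: 1, val2: 10 }
}).explain();

// list the fields of every index the plan uses
explained.plan.nodes
  .filter(node => node.type === "IndexNode")
  .forEach(node => print(node.indexes.map(idx => idx.fields)));
```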
- added compression for the AQL intermediate results transfer in the cluster, leading to less data being transferred between coordinator and database servers in many cases
- forward-ported a bugfix from RocksDB (https://github.com/facebook/rocksdb/pull/4386) that fixes range deletions (used internally in ArangoDB when dropping or truncating collections).
  The non-working range deletes could have triggered errors such as "deletion check in index drop failed - not all documents in the index have been deleted" when dropping or truncating collections.
- improved error messages in the Windows installer
- allow retrying installation in the Windows installer in case an existing database is still running and needs to be manually shut down before continuing with the installation
- fixed database backup functionality in the Windows installer
- fixed memory leak in `/_api/batch` REST handler
- `db._profileQuery()` now also tracks operations triggered when using `LIMIT` clauses in a query
- added proper error messages when using views as an argument to AQL functions (doing so triggered an "internal error" before)
- fixed return value encoding for collection ids ("cid" attribute) in REST API `/_api/replication/logger-follow`
- fixed dumping and restoring of views with arangodump and arangorestore
- fixed replication from 3.3 to 3.4
- fixed some TLS errors that occurred when combining HTTPS/TLS transport with the VelocyStream protocol (VST).
  That combination could have led to spurious errors such as "TLS padding error" or "Tag mismatch" and connections being closed.
- make synchronous replication detect more error cases when followers cannot apply the changes from the leader
- fixed issue #6379: RocksDB arangorestore time degeneration on dead documents
- fixed issue #6495: Document not found when removing records
- fixed undefined behavior in cluster plan-loading procedure that may have unintentionally modified a shared structure
- reduced overhead of function initialization in AQL COLLECT aggregate functions, for the functions COUNT/LENGTH, SUM and AVG.
  This optimization will only be noticeable when the COLLECT produces many groups and the "hash" COLLECT variant is used.
- fixed potential out-of-bounds access in admin log REST handler `/_admin/log`, which could have led to the server returning an HTTP 500 error
- catch more exceptions in replication and handle them appropriately
- agency endpoint updates now go through RAFT
- fixed a cleanup issue in Current when a follower was removed from Plan
- catch exceptions in MaintenanceWorker thread
- fixed a bug in cleanOutServer which could lead to a cleaned-out server still being a follower for some shard
v3.4.0-rc.1 Changes
September 06, 2018

- Release Candidate for 3.4.0, please check the `ReleaseNotes/KnownIssues34.md` file for a list of known issues
- upgraded bundled RocksDB version to 5.16.0
- upgraded bundled Snappy compression library to 1.1.7
- fixed issue #5941: when using breadth-first search in traversals, uniqueness checks on paths (vertices and edges) were not applied. In SmartGraphs the checks were executed properly.
- added more detailed progress output to arangorestore, showing the percentage of how much data has been restored for bigger collections, plus a set of overview statistics after each processed collection
- added option `--rocksdb.use-file-logging` to enable writing of RocksDB's own informational LOG files into RocksDB's database directory.
  This option is turned off by default, but can be enabled for debugging RocksDB internals and performance.
- improved error messages when managing Foxx services.
  Install/replace/upgrade will now provide additional information when an error is encountered during setup. Errors encountered during a `require` call will also include information about the underlying cause in the error message.
- fixed some Foxx script names being displayed incorrectly in web UI and Foxx CLI
- major revision of the maintenance feature
- added `uuidv4` and `genRandomBytes` methods to the crypto module
- added `hexSlice` and `hexWrite` methods to the JS Buffer type
- added `Buffer.from`, `Buffer.of`, `Buffer.alloc` and `Buffer.allocUnsafe` for improved compatibility with Node.js
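A rough arangosh sketch of the crypto and Buffer additions above; the exact return formats and the hexWrite/hexSlice call patterns are illustrative rather than authoritative.

```js
// crypto additions (return formats are assumptions)
const crypto = require("@arangodb/crypto");
const id = crypto.uuidv4();               // random v4 UUID string
const random = crypto.genRandomBytes(16); // 16 random bytes

// Node.js-style Buffer construction helpers
const a = Buffer.from("arangodb", "utf-8");
const b = Buffer.alloc(8);                // zero-filled 8-byte buffer
const c = Buffer.of(0xde, 0xad, 0xbe, 0xef);

// hexSlice/hexWrite work with hexadecimal string representations
b.hexWrite(a.hexSlice(0, 4), 0);
print(id, random, a.toString(), c.hexSlice(0, 4));
```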
- Foxx HTTP API errors now log stacktraces
- fixed issue #5831: custom queries in the UI could not be loaded if the user only has read access to the _system database
- fixed issue #6128: ArangoDB Cluster: Task moved from DBS to Coordinator
- fixed some web UI action events related to the Running Queries view and Slow Queries History view
- fixed internal issue #2566: corrected web UI alignment of the nodes table
- fixed issue #5736: Foxx HTTP API responds with 500 error when request body is too short
- fixed issue #6106: Arithmetic operator type casting documentation incorrect
- arangosh now supports the VelocyStream transport protocol via the "vst+tcp://", "vst+ssl://" and "vst+unix://" schemes.
  The server will no longer lowercase the input in `--server.endpoint`. This means Unix domain socket paths will now be treated as specified; previously they were lowercased.
- fixed logging of requests: a wrong log level was used
- fixed issue #5943: misplaced database UI icon and wrong cursor type were used
- fixed issue #5354: updated the web UI JSON editor, improved usability
- fixed issue #5648: fixed error message when saving unsupported document types
- fixed internal issue #2812: Cluster fails to create many indexes in parallel
- added C++ implementation, load balancer support, and user restriction to the Pregel API.
  If an execution is accessed on a different coordinator than where it was created, the request(s) will be forwarded to the correct coordinator. If an execution is accessed by a different user than the one who created it, the request will be denied.
- the AQL editor in the web UI now supports detailed AQL query profiling
- fixed issue #5884: Subquery nodes are no longer created on DBServers
- intermediate commits in the RocksDB engine are now only enabled in standalone AQL queries (not within a JS transaction), standalone truncate, as well as for the "import" API
- the AQL editor in the web UI now supports GeoJSON types and is able to render them
- fixed issue #5035: fixed a vulnerability issue within the web UI's index view
- PR #5552: added "--latency true" option to arangoimport, which lists microsecond latency statistics
- added "pbkdf2" method to the `@arangodb/foxx/auth` module
- the `@arangodb/foxx/auth` module now uses a different method to generate salts, so salts are no longer guaranteed to be alphanumeric
- fixed internal issue #2567: the Web UI was showing the possibility to move a shard from a follower to the current leader
- renamed RocksDB engine-specific statistics figure `rocksdb.block-cache-used` to `rocksdb.block-cache-usage` in the output of `db._engineStats()`.
  The new figure name is in line with the statistics that the RocksDB library provides in its newer versions.
- added RocksDB engine-specific statistics figures `rocksdb.block-cache-capacity` and `rocksdb.block-cache-pinned-usage`, as well as the level-specific figures `rocksdb.num-files-at-level` and `rocksdb.compression-ratio-at-level`, to the output of `db._engineStats()`
- added RocksDB engine configuration option `--rocksdb.block-align-data-blocks`.
  If set to true, data blocks are aligned on the lesser of page size and block size, which may waste some memory but may reduce the number of cross-page I/O operations.
- use RocksDB format version 3 for new block-based tables
- bugfix: the AQL syntax variants `UPDATE/REPLACE k WITH d` now correctly take `_rev` from `k` instead of `d` (when `ignoreRevs` is false) and ignore `d._rev`
- added C++ implementation, load balancer support, and user restriction to the tasks API.
  If a task is accessed on a different coordinator than where it was created, the request(s) will be forwarded to the correct coordinator. If a task is accessed by a different user than the one who created it, the request will be denied.
- added load balancer support and user restriction to the async jobs API.
  If an async job is accessed on a different coordinator than where it was created, the request(s) will be forwarded to the correct coordinator. If a job is accessed by a different user than the one who created it, the request will be denied.
- switched the default storage engine from MMFiles to RocksDB.
  In ArangoDB 3.4, the default storage engine for new installations is the RocksDB engine. This differs from previous versions (3.2 and 3.3), in which the default storage engine was the MMFiles engine.
  The MMFiles engine can still be explicitly selected as the storage engine for all new installations. It's only that the "auto" setting for selecting the storage engine will now use the RocksDB engine instead of the MMFiles engine.
  In the following scenarios, the effectively selected storage engine for new installations will be RocksDB:
  - `--server.storage-engine rocksdb`
  - `--server.storage-engine auto`
  - `--server.storage-engine` option not specified
  The MMFiles storage engine will be selected for new installations only when explicitly requested via `--server.storage-engine mmfiles`.
  On upgrade, any existing ArangoDB installation will keep its previously selected storage engine. The change of the default storage engine is thus only relevant for new ArangoDB installations and/or existing cluster setups to which new server nodes get added later. All server nodes in a cluster setup should use the same storage engine to work reliably; using different storage engines in a cluster is unsupported.
- added `collection.indexes()` as an alias for `collection.getIndexes()`
- disable V8 engine and JavaScript APIs for agency nodes
- renamed MMFiles engine compactor thread from "Compactor" to "MMFilesCompactor".
  This change will be visible only on systems which allow assigning names to threads.
- added configuration option `--rocksdb.sync-interval`.
  This option specifies the interval (in milliseconds) that ArangoDB will use to automatically synchronize data in RocksDB's write-ahead log (WAL) files to disk. Automatic syncs will only be performed for not-yet synchronized data, and only for operations that have been executed without the waitForSync attribute.
  Automatic synchronization is performed by a background thread. The default sync interval is 100 milliseconds.
  Note: this option is not supported on Windows platforms. Setting the sync interval to a value greater than 0 will produce a startup warning.
- added AQL functions `TO_BASE64`, `TO_HEX`, `ENCODE_URI_COMPONENT` and `SOUNDEX`
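A small arangosh sketch of the new string functions; the expected outputs in the comments are illustrative.

```js
const db = require("@arangodb").db;

db._query(`
  RETURN {
    b64:   TO_BASE64("ArangoDB"),          // base64-encoded string
    hex:   TO_HEX("abc"),                  // "616263"
    uri:   ENCODE_URI_COMPONENT("a b&c"),  // "a%20b%26c"
    sound: SOUNDEX("example")              // phonetic code such as "E251"
  }
`).toArray();
```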
- PR #5857: the RocksDB engine would frequently request a new DelayToken, which caused excessive write delay on the next Put() call. An alternate approach is now taken.
- changed the thread handling in the scheduler. `--server.maximal-threads` will be the maximum number of threads for the scheduler. The option `--server.threads` is now obsolete.
- use sparse indexes in more cases now, when it is clear that the index attribute value cannot be null
- introduced SingleRemoteOperationNode via the "optimize-cluster-single-document-operations" optimizer rule, which triggers single document operations directly from the coordinator instead of using a full-featured AQL setup. This saves cluster roundtrips.
  Queries directly referencing the document key benefit from this, e.g. `UPDATE {_key: '1'} WITH {foo: 'bar'} IN collection RETURN OLD` (see the sketch below).
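A hedged way to see whether the rule fires, by explaining the query above on a coordinator; the collection name is illustrative.

```js
const db = require("@arangodb").db;

const explained = db._createStatement({
  query: "UPDATE {_key: '1'} WITH {foo: 'bar'} IN collection RETURN OLD"
}).explain();

// on a cluster coordinator the applied rules should include
// "optimize-cluster-single-document-operations", and the plan should use a
// SingleRemoteOperationNode instead of the usual scatter/gather setup
print(explained.plan.rules);
print(explained.plan.nodes.map(node => node.type));
```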
- added load balancer support and user restriction to the cursor API.
  If a cursor is accessed on a different coordinator than where it was created, the requests will be forwarded to the correct coordinator. If a cursor is accessed by a different user than the one who created it, the request will be denied.
- if authentication is turned on, requests to databases by users with insufficient rights will be answered with an HTTP 401 (forbidden) response
- upgraded bundled RocksDB library version to 5.15
- added key generators `uuid` and `padded`.
  The `uuid` key generator generates universally unique 128-bit keys, which are stored in a hexadecimal human-readable format. The `padded` key generator generates keys of a fixed length (16 bytes) in ascending lexicographical sort order.
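A minimal arangosh sketch of the new key generators; the collection names are illustrative.

```js
const db = require("@arangodb").db;

// universally unique 128-bit keys in hexadecimal human-readable format
const withUuid = db._create("events_uuid", { keyOptions: { type: "uuid" } });

// fixed-length (16 byte) keys in ascending lexicographical sort order
const withPadded = db._create("events_padded", { keyOptions: { type: "padded" } });

print(withUuid.insert({})._key, withPadded.insert({})._key);
```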
- the REST API `/_admin/status` gained an "operationMode" field with the same meaning as the "mode" field, and a field "readOnly" that has the inverted meaning of the field "writeOpsEnabled". The old field names will be deprecated in upcoming versions.
- added `COUNT_DISTINCT` AQL function
- make the AQL optimizer rule `collect-in-cluster` optimize the aggregation functions `AVERAGE`, `VARIANCE`, `STDDEV`, `UNIQUE`, `SORTED_UNIQUE` and `COUNT_DISTINCT` in a cluster by pushing parts of the aggregation onto the DB servers and only doing the total aggregation on the coordinator
- replaced the JavaScript functions FULLTEXT, NEAR, WITHIN and WITHIN_RECTANGLE with regular AQL subqueries via a new optimizer rule "replace-function-with-index"
- the existing "fulltext-index-optimizer" optimizer rule has been removed because its duty is now handled by the "replace-function-with-index" rule
- added "--latency true" option to arangoimport, which lists microsecond latency statistics at 10 second intervals
- fixed internal issue #2256: UI, document id not showing up when deleting a document
- fixed internal issue #2163: wrong labels within Foxx validation of service input parameters
- fixed internal issue #2160: fixed misplaced tooltips in indices view
- added exclusive option for RocksDB collections. Modifying AQL queries can now set the exclusive option, and it can also be set on JavaScript transactions (see the sketch below).
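A hedged sketch of both ways to request the exclusive option on the RocksDB engine; the collection name "accounts" is illustrative.

```js
const db = require("@arangodb").db;

// data-modification AQL query requesting an exclusive collection lock
db._query(`
  FOR a IN accounts
    UPDATE a WITH { touched: true } IN accounts
    OPTIONS { exclusive: true }
`);

// JavaScript transaction declaring the collection as exclusive
db._executeTransaction({
  collections: { exclusive: ["accounts"] },
  action: function () {
    require("@arangodb").db.accounts.insert({ balance: 0 });
  }
});
```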
- added optimizer rule "optimize-subqueries", which makes qualifying subqueries return less data.
  The rule fires in the following situations:
  In case only a few results are used from a non-modifying subquery, the rule will add a LIMIT statement into the subquery. For example,
  `LET docs = ( FOR doc IN collection FILTER ... RETURN doc ) RETURN docs[0]`
  will be turned into
  `LET docs = ( FOR doc IN collection FILTER ... LIMIT 1 RETURN doc ) RETURN docs[0]`
  Another optimization performed by this rule is to modify the result value of subqueries in case only the number of results is checked later. For example,
  `RETURN LENGTH( FOR doc IN collection FILTER ... RETURN doc )`
  will be turned into
  `RETURN LENGTH( FOR doc IN collection FILTER ... RETURN true )`
  This saves copying the document data from the subquery to the outer scope and may enable follow-up optimizations.
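A hedged arangosh sketch of verifying that the rule is applied to such a query; the collection name and filter are illustrative.

```js
const db = require("@arangodb").db;

const explained = db._createStatement({
  query: `
    LET docs = (FOR doc IN collection FILTER doc.value > 10 RETURN doc)
    RETURN docs[0]
  `
}).explain();

// the rule should appear among the applied optimizer rules
print(explained.plan.rules.indexOf("optimize-subqueries") !== -1);
```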
- fixed Foxx queues bug when queues are created in a request handler with an ArangoDB authentication header
- abort startup when using SSLv2 for a server endpoint, or when connecting with a client tool via an SSLv2 connection.
  SSLv2 has been disabled by default in recent versions of the OpenSSL library because of security vulnerabilities inherent in this protocol.
  As it is not safe at all to use this protocol, support for it has also been stopped in ArangoDB. End users that use SSLv2 for connecting to ArangoDB should change the protocol from SSLv2 to TLSv12 if possible, by adjusting the value of the `--ssl.protocol` startup option.
- added `overwrite` option to document insert operations to allow for easier syncing.
  This implements almost the much-requested UPSERT. In reality it is a REPSERT (replace/insert), because only replacement and not modification of documents is possible. The option does not work in cluster collections with custom sharding.
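A minimal sketch of the overwrite (REPSERT) behavior described above; the collection name "cache" is illustrative.

```js
const db = require("@arangodb").db;
const cache = db._create("cache");

cache.insert({ _key: "greeting", value: "hello" });

// with overwrite: true an existing document with the same _key is replaced
// instead of raising a unique constraint violation
cache.insert({ _key: "greeting", value: "hello again" }, { overwrite: true });

print(cache.document("greeting").value); // "hello again"
```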
- added startup option `--log.escape`.
  This option toggles the escaping of log output.
  If set to `true` (which is the default value), then logging will work as before, and the following characters in the log output are escaped:
  - the carriage return character (hex 0d)
  - the newline character (hex 0a)
  - the tabstop character (hex 09)
  - any other characters with an ordinal value less than hex 20
  If the option is set to `false`, no characters are escaped. Characters with an ordinal value less than hex 20 will not be printed in this mode but will be replaced with a space character (hex 20).
  A side effect of turning off the escaping is that it will reduce the CPU overhead of logging. However, this will only be noticeable when logging is set to a very verbose level (e.g. debug or trace).
- increased the default values for the startup options `--javascript.gc-interval` from every 1000 to every 2000 requests, and for `--javascript.gc-frequency` from 30 to 60 seconds.
  This will make the V8 garbage collection run less often by default than in previous versions, reducing CPU load a bit and leaving more contexts available on average.
- added `/_admin/repair/distributeShardsLike` that repairs collections with distributeShardsLike where the shards aren't actually distributed like in the prototype collection, as could happen due to internal issue #1770
- fixed issue #4271: changed the behavior of the `fullCount` option for AQL query cursors so that it will only take into account `LIMIT` statements on the top level of the query. `LIMIT` statements in subqueries will not have any effect on the `fullCount` results any more.
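A hedged arangosh sketch of the adjusted fullCount semantics; the collection name is illustrative and the exact option placement may vary slightly between client APIs.

```js
const db = require("@arangodb").db;

const cursor = db._query(
  "FOR doc IN collection FILTER doc.active == true LIMIT 10 RETURN doc",
  {},
  { fullCount: true }
);

// number of results the top-level query would have produced without its
// top-level LIMIT; LIMITs inside subqueries no longer affect this value
print(cursor.getExtra().stats.fullCount);
```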
- added a new geo-spatial index implementation. On the RocksDB storage engine, all installations will need to be upgraded with `--database.auto-upgrade true`. New geo indexes will now only report the type `geo` instead of `geo1` or `geo2`. The index types `geo1` and `geo2` are now deprecated. Additionally, the deprecated flags `constraint` and `ignoreNull` have been removed from geo index definitions; these fields were initially deprecated in ArangoDB 2.5.
- added revision id to RocksDB values in primary indexes to speed up replication (~10x)
- PR #5238: created a default pacing algorithm for arangoimport to avoid TimeoutErrors on VMs with limited disk throughput
- starting a cluster with coordinators and DB servers using different storage engines is unsupported. Doing it anyway will now produce a warning on startup.
- fixed issue #4919: the C++ implementation of the LIKE function now matches the old and correct behavior of the JavaScript implementation
- added `--json` option to arangovpack, allowing it to treat its input as plain JSON data
- make arangovpack work without any configuration file
- added experimental arangodb startup option `--javascript.enabled` to enable/disable the initialization of the V8 JavaScript engine. Only expected to work on single servers and agency deployments.
- pull request #5201: eliminated a race scenario where handlePlanChange could run an infinite number of times after an execution exceeded a 7.4 second time span
- UI: fixed an unreasonable event bug within the modal view engine
- pull request #5114: detect shutdown more quickly on heartbeat threads of coordinators and DB servers
- fixed issue #3811: the gharial API now checks the existence of `_from` and `_to` vertices during edge creation
- there is a new method `_profileQuery` on the database object to execute a query and print an explain output with annotated runtime information
- query cursors can now be created with option `profile`, with a value of 0, 1 or 2. This will cause queries to include more statistics in their results and will allow tracing of queries.
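A hedged arangosh sketch of both profiling additions above; the collection name is illustrative and the exact placement of the profile option may vary by client.

```js
const db = require("@arangodb").db;

// prints an explain-like output annotated with per-node runtime information
db._profileQuery("FOR doc IN collection FILTER doc.value > 10 RETURN doc");

// profile: 0, 1 or 2 asks the cursor for additional statistics in its result
const cursor = db._query(
  "FOR doc IN collection LIMIT 5 RETURN doc",
  {},
  { profile: 2 }
);
print(cursor.getExtra());
```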
- fixed internal issue #2147: fixed database filter in UI
- fixed internal issue #2149: number of documents in the UI is not adjusted after moving them
- fixed internal issue #2150: UI - loading a saved query does not update the list of bind parameters
- removed option `--cluster.my-local-info` in favor of persisted server UUIDs.
  The option `--cluster.my-local-info` had been deprecated since ArangoDB 3.3.
- added new collection property `cacheEnabled` which enables in-memory caching for documents and primary index entries. Available only when using RocksDB.
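A hedged sketch of the cacheEnabled property (RocksDB engine only); the collection name is illustrative.

```js
const db = require("@arangodb").db;

// enable the document / primary-index cache at creation time...
const profiles = db._create("profiles", { cacheEnabled: true });

// ...or toggle it later through the collection properties
profiles.properties({ cacheEnabled: false });
print(profiles.properties().cacheEnabled);
```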
- arangodump now supports a `--threads` option to dump collections in parallel
- arangorestore now supports a `--threads` option to restore collections in parallel
- improvement: the AQL query planner in the cluster is now a bit more clever and can prepare AQL queries with less network overhead.
  This should speed up simple queries in cluster mode; on complex queries it will most likely not show any performance effect. It will especially show effects on collections with a very high number of shards.
- removed remainders of the dysfunctional `/_admin/cluster-test` and `/_admin/clusterCheckPort` API endpoints and removed them from the documentation
- added new query option `stream` to enable streaming query execution via the `POST /_api/cursor` REST interface.
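A hedged arangosh sketch of the stream option; the same flag can be passed in the options of a `POST /_api/cursor` request. The collection name is illustrative.

```js
const db = require("@arangodb").db;

// with stream: true the server computes batches lazily while the client
// iterates the cursor, instead of building the full result set up front
const cursor = db._query(
  "FOR doc IN collection RETURN doc",
  {},
  { stream: true }
);

while (cursor.hasNext()) {
  const doc = cursor.next();
  // process doc ...
}
```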
- fixed issue #4698: databases within the UI are now displayed in a sorted order
- behavior of permissions for databases and collections changed.
  The new fallback rule for databases for which an access level is not explicitly specified is to choose the higher access level of:
  - a wildcard database grant
  - a database grant on the `_system` database
  The new fallback rule for collections for which an access level is not explicitly specified is to choose the higher access level of:
  - any wildcard access grant in the same database, or on `*/*`
  - the access level for the current database
  - the access level for the `_system` database
- fixed issue #4583: added AQL ASSERT and AQL WARN
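A short sketch of the two functions; the messages and values are illustrative.

```js
const db = require("@arangodb").db;

// ASSERT aborts the query with the given message if its condition is false;
// WARN only registers a warning and lets the query continue
db._query(`
  FOR i IN 1..3
    FILTER ASSERT(i > 0, "i must be positive")
    FILTER WARN(i < 3, "i is getting large")
    RETURN i
`);
```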
- renamed startup option `--replication.automatic-failover` to `--replication.active-failover`.
  Using the old option name will still work in ArangoDB 3.4, but the old option will be removed afterwards.
- index selectivity estimates for the RocksDB engine are now eventually consistent.
  This change addresses a previous issue where some index updates could be "lost" from the view of the internal selectivity estimate, leading to inaccurate estimates. The issue is solved now, but there can be a delay of up to a second or so before updates are reflected in the estimates.
- support `returnOld` and `returnNew` attributes in the following HTTP REST APIs:
  - /_api/gharial//vertex/
  - /_api/gharial//edge/
  The exception to this is that the HTTP DELETE verb for these APIs does not support `returnOld`, because that would make the existing API incompatible.
- fixed internal issue #478: removed unused and undocumented REST API endpoints _admin/statistics/short and _admin/statistics/long.
  These APIs were available in ArangoDB's REST API, but have not been called by ArangoDB itself, nor have they been part of the documented API. They have been superseded by other REST APIs and were partially dysfunctional. Therefore these two endpoints have been removed entirely.
- fixed issue #1532: reload users on restore
- fixed internal issue #1475: when restoring a cluster dump to a single server, ignore indexes of type primary and edge since we mustn't create them there
- fixed internal issue #1439: improved performance of the any-iterator for RocksDB
- issue #1190: added option `--create-database` for arangoimport
- UI: updated dygraph js library to version 2.1.0
- renamed arangoimp to arangoimport for consistency.
  Release packages will still install arangoimp as a symlink, so user scripts invoking arangoimp do not need to be changed.
- UI: the shard distribution view now has an accordion view instead of displaying all shards of all collections at once
- fixed issue #4393: broken handling of unix domain sockets in JS_Download
- added AQL function `IS_KEY`.
  This function checks if the value passed to it can be used as a document key, i.e. as the value of the `_key` attribute.
- added AQL functions `SORTED` and `SORTED_UNIQUE`.
  `SORTED` will return a sorted version of the input array using AQL's internal comparison order. `SORTED_UNIQUE` will do the same, but additionally removes duplicates.
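A small arangosh sketch of the three functions; the expected results in the comments are illustrative.

```js
const db = require("@arangodb").db;

db._query(`
  RETURN {
    badKey:       IS_KEY("users/123"),          // false, "/" is not allowed in keys
    goodKey:      IS_KEY("user-123"),           // true
    sorted:       SORTED([3, 1, 2, 2]),         // [1, 2, 2, 3]
    sortedUnique: SORTED_UNIQUE([3, 1, 2, 2])   // [1, 2, 3]
  }
`).toArray();
```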
- added C++ implementations for the AQL functions `DATE_NOW`, `DATE_ISO8601`, `DATE_TIMESTAMP`, `IS_DATESTRING`, `DATE_DAYOFWEEK`, `DATE_YEAR`, `DATE_MONTH`, `DATE_DAY`, `DATE_HOUR`, `DATE_MINUTE`, `DATE_SECOND`, `DATE_MILLISECOND`, `DATE_DAYOFYEAR`, `DATE_ISOWEEK`, `DATE_LEAPYEAR`, `DATE_QUARTER`, `DATE_DAYS_IN_MONTH`, `DATE_ADD`, `DATE_SUBTRACT`, `DATE_DIFF`, `DATE_COMPARE`, `TRANSLATE` and `SHA512`
- fixed a bug where ClusterInfo missed changes to the Plan after an agency callback was registered for create collection
- Foxx manifest.json files can now contain a `$schema` key with the value of "http://json.schemastore.org/foxx-manifest" to improve tooling support
- fixed agency restart from compaction without data
- fixed the agency's log compaction for internal issue #2249
- only load Plan and Current from the agency when actually needed
v3.3.21 Changes

- fixed TypeError being thrown instead of validation errors when Foxx manifest validation fails
- fixed issue #7586: a running query within the user interface was not shown if the active view was "Running Queries" or "Slow Query History"
- improved Windows installer error messages, fixed the Windows installer backup routine and exit code handling
- make AQL REMOVE operations use less memory with the RocksDB storage engine.
  The previous implementation of batch removals read everything to remove into memory before carrying out the first remove operation. The new version will only read in about 1000 documents at a time and then remove these. Queries such as `FOR doc IN collection FILTER ... REMOVE doc IN collection` will benefit from this change in terms of memory usage.
v3.3.20 Changes
November 28, 2018

- upgraded arangodb starter version to 0.13.9
- added RocksDB option `--rocksdb.total-write-buffer-size` to limit the total memory usage across all RocksDB in-memory write buffers.
  This is the total amount of data to build up in all in-memory buffers (backed by log files). This option, together with the block cache size configuration option, can be used to limit memory usage. If set to 0, the memory usage is not limited; this is the default setting in 3.3. The default setting may be adjusted in future versions of ArangoDB.
  If set to a value greater than 0, this will cap the memory usage for write buffers, but may have an effect on write performance.
- added RocksDB configuration option `--rocksdb.enforce-block-cache-size-limit`.
  This controls whether or not the maximum size of the RocksDB block cache is strictly enforced. The option can be set to limit the memory usage of the block cache to at most the specified size. If inserting a data block into the cache would then exceed the cache's capacity, the data block will not be inserted. If the flag is not set, a data block may still get inserted into the cache; it is evicted later, but the cache may temporarily grow beyond its capacity limit.
- export version and storage engine in cluster health
- potential fix for issue #7407: arangorestore very slow converting from mmfiles to rocksdb
- updated joi library (Web UI), improved Foxx mount path validation
- fixed internal issue #2786: improved confirmation dialog when clicking the Truncate button in the Web UI
- fix for supervision, which started failing servers using the old transient store
- fixed Foxx queues not retrying jobs with infinite `maxFailures`
- fixed a race condition in the coordinator; it could happen in rare cases, and only with the maintainer mode enabled, if the creation of a collection was in progress and at the same time a deletion was forced
- disabled the startup warning for Linux kernel variable `vm.overcommit_memory` settings values of 0 or 1.
  Effectively, `overcommit_memory` settings values of 0 or 1 fix two memory-allocation related issues with the default memory allocator used in ArangoDB release builds on 64bit Linux. The issues will remain when running with an `overcommit_memory` settings value of 2, so this is now discouraged.
  Setting `overcommit_memory` to 0 or 1 (0 is the Linux kernel's default) fixes issues with increasing numbers of memory mappings for the arangod process (which may lead to an out-of-memory situation if the kernel's maximum number of mappings threshold is hit) and an increasing amount of memory that the kernel counts as "committed". With an `overcommit_memory` setting of 0 or 1, an arangod process may either be killed by the kernel's OOM killer or will die with a segfault when accessing memory it has allocated before but the kernel could not provide later on. This is still more acceptable than the kernel not providing any more memory to the process when there is still physical memory left, which may have occurred with an `overcommit_memory` setting of 2 after the arangod process had done lots of allocations.
  In summary, the recommendation for the `overcommit_memory` setting is now to set it to 0 or 1 (0 is the kernel default) and not use 2.
- force connection timeout to be 7 seconds to allow libcurl time to retry lost DNS queries
- increased the maximum number of collections/shards in an AQL query from 256 to 2048
- don't rely on the `_modules` collection being present and usable for arangod startup
- optimized the web UI's routing, which could possibly lead to unwanted events
- fixed some graph data parsing issues in the UI, e.g. cleaning up duplicate edges inside the graph viewer
- in a cluster environment, the arangod process now exits if wrong credentials are used during the startup process
- fixed an AQL bug where the optimize-traversals rule was falsely applied to extensions with inline expressions and thereby ignored them
- fixed side-effects of sorting larger arrays (>= 16 members) of constant literal values in AQL, when the array was not used only for IN-value filtering but also later in the query. The array values were sorted so the IN-value lookup could use a binary search instead of a linear search, but this did not take into account that the array could have been used elsewhere in the query, e.g. as a return value. The fix creates a copy of the array and sorts the copy, leaving the original array untouched.
- fixed a bug where cluster indexes were usable for queries while still being built on DB servers
- fixed move leader shard: wait until all but the old leader are in sync. This fixes some unstable tests.
- cluster health features more elaborate agent records
v3.3.19 Changes
October 20, 2018

- fixes validation of allowed or disallowed Foxx service mount paths within the Web UI
- the single database or single coordinator statistics in a cluster environment within the Web UI sometimes got called way too often. This caused artifacts in the graphs, which is now fixed.
- an aardvark statistics route could not collect and sum up the statistics of all coordinators if one of them was ahead and had more results than the others
- upgraded arangodb starter version to 0.13.6
- turned on intermediate commits in the replication applier in order to decrease the size of transactional operations during replication (issue #6821)
- fixed issue #6770: document update: ignoreRevs parameter ignored
- when returning memory to the OS, use the same memory protection flags as when initializing the memory.
  This prevents "hole punching" and keeps the OS from splitting one memory mapping into multiple mappings with different memory protection settings.
- fixed internal issue #2770: the Query Profiling modal dialog in the Web UI was slightly malformed
- fixed internal issue #2035: the Web UI now updates its indices view to check whether new indices exist or not
- fixed internal issue #6808: newly created databases within the Web UI did not appear when Internet Explorer 11 was used as the browser
- fixed internal issue #2688: the Web UI's graph viewer created malformed node labels if a node was expanded multiple times
- fixed internal issue #2957: the Web UI was not able to display more than 1000 documents, even when it was set to a higher amount
- fixed internal issue #2785: the Web UI's sort dialog sometimes got rendered, even if it should not
- fixed internal issue #2764: the waitForSync property of a SatelliteCollection could not be changed via the Web UI
- improved logging in case of replication errors
- recover the short server id from the agency after a restart of a cluster node.
  This fixes problems with short server ids being set to 0 after a node restart, which then prevented cursor result load-forwarding between multiple coordinators from working properly.
  This should fix arangojs#573.
- increased default timeouts in replication.
  This decreases the chances of followers not getting in sync with leaders because of replication operations timing out.
- fixed internal issue #1983: the Web UI was showing a deletion confirmation multiple times
- fixed agents busy-looping gossip
- handle missing `_frontend` collections gracefully.
  The `_frontend` system collection is not required for normal ArangoDB operations, so if it is missing for whatever reason, ensure that normal operations can go on.
v3.3.18 Changes

- not released
v3.3.17 Changes
October 04, 2018

- upgraded arangosync version to 0.6.0
- added several advanced options for configuring and debugging LDAP connections. Please note that some of the following options are platform-specific and may not work on all platforms or with all LDAP servers reliably:
  - `--ldap.serialized`: whether or not calls into the underlying LDAP library should be serialized. This option can be used to work around thread-unsafe LDAP library functionality.
  - `--ldap.serialize-timeout`: sets the timeout value that is used when waiting to enter the LDAP library call serialization lock. This is only meaningful when `--ldap.serialized` has been set to `true`.
  - `--ldap.retries`: number of tries to attempt a connection. Setting this to values greater than one will make ArangoDB retry to contact the LDAP server in case no connection can be made initially.
  - `--ldap.restart`: whether or not the LDAP library should implicitly restart connections
  - `--ldap.referrals`: whether or not the LDAP library should implicitly chase referrals
  - `--ldap.debug`: turn on internal OpenLDAP library output (warning: will print to stdout)
  - `--ldap.timeout`: timeout value (in seconds) for synchronous LDAP API calls (a value of 0 means default timeout)
  - `--ldap.network-timeout`: timeout value (in seconds) after which network operations following the initial connection return in case of no activity (a value of 0 means default timeout)
  - `--ldap.async-connect`: whether or not the connection to the LDAP library will be done asynchronously
- fixed a shutdown race in ArangoDB's logger, which could have led to some buffered log messages being discarded on shutdown
- display shard synchronization progress for collections outside of the `_system` database
- fixed issue #6611: properly display JSON properties of user-defined Foxx services configuration within the web UI
- fixed issue #6583: Agency node segfaults if an authenticated HTTP request is sent to its port
- when cleaning out a leader, it could happen that it became a follower instead of being removed completely
- make synchronous replication detect more error cases when followers cannot apply the changes from the leader
- fixed some TLS errors that occurred when combining HTTPS/TLS transport with the VelocyStream protocol (VST).
  That combination could have led to spurious errors such as "TLS padding error" or "Tag mismatch" and connections being closed.
- agency endpoint updates now go through RAFT
v3.3.16 Changes
September 19, 2018

- fixed undefined behavior in the AQL query result cache
- the query editor within the web UI now catches HTTP 501 responses properly
- fixed issue #6495 (Document not found when removing records)
- fixed undefined behavior in cluster plan-loading procedure that may have unintentionally modified a shared structure
- reduced overhead of function initialization in AQL COLLECT aggregate functions, for the functions COUNT/LENGTH, SUM and AVG.
  This optimization will only be noticeable when the COLLECT produces many groups and the "hash" COLLECT variant is used.
- fixed potential out-of-bounds access in the admin log REST handler /_admin/log, which could have led to the server returning an HTTP 500 error
- catch more exceptions in replication and handle them appropriately
v3.3.15 Changes
September 10, 2018

- fixed an issue in the "sorted" AQL COLLECT variant that may have led to producing an incorrect number of results
- upgraded arangodb starter version to 0.13.3
- fixed issue #5941: when using breadth-first search in traversals, uniqueness checks on paths (vertices and edges) were not applied. In SmartGraphs the checks were executed properly.
- added more detailed progress output to arangorestore, showing the percentage of how much data has been restored for bigger collections, plus a set of overview statistics after each processed collection
- added option `--rocksdb.use-file-logging` to enable writing of RocksDB's own informational LOG files into RocksDB's database directory.
  This option is turned off by default, but can be enabled for debugging RocksDB internals and performance.
- improved error messages when managing Foxx services.
  Install/replace/upgrade will now provide additional information when an error is encountered during setup. Errors encountered during a `require` call will also include information about the underlying cause in the error message.
- fixed some Foxx script names being displayed incorrectly in web UI and Foxx CLI
- added startup option `--query.optimizer-max-plans value`.
  This option allows limiting the number of query execution plans created by the AQL optimizer for any incoming queries. The default value is `128`.
  By adjusting this value it can be controlled how many different query execution plans the AQL query optimizer will generate at most for any given AQL query. Normally the AQL query optimizer will generate a single execution plan per AQL query, but there are some cases in which it creates multiple competing plans. More plans can lead to better optimized queries; however, plan creation has its costs. The more plans are created and shipped through the optimization pipeline, the more time will be spent in the optimizer.
  Lowering this option's value will make the optimizer stop creating additional plans when it has already created enough plans.
  Note that this setting controls the default maximum number of plans to create. The value can still be adjusted on a per-query basis by setting the maxNumberOfPlans attribute when running a query.
  This change also lowers the default maximum number of query plans from 192 to 128.
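A hedged arangosh sketch of overriding the plan limit for a single explain run, independent of the global startup option; the query and limit are illustrative.

```js
const db = require("@arangodb").db;

// cap the number of competing execution plans considered for this query
const explained = db._createStatement({
  query: "FOR doc IN collection FILTER doc.a == 1 && doc.b == 2 RETURN doc"
}).explain({ maxNumberOfPlans: 4, allPlans: true });

print(explained.plans.length);
```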
- bug fix: facilitate faster shutdown of coordinators and DB servers
- cluster nodes should retry registering in the agency until successful
- fixed some web UI action events related to the Running Queries view and Slow Queries History view
- created a default pacing algorithm for arangoimport to avoid TimeoutErrors on VMs with limited disk throughput
- backport PR 6150: establish a unique function to indicate when the application is terminating and therefore network retries should not occur
- backport PR #5201: eliminate a race scenario where handlePlanChange could run an infinite number of times after an execution exceeded a 7.4 second time span
v3.3.14 Changes
August 15, 2018

- upgraded arangodb starter version to 0.13.1
- Foxx HTTP API errors now log stacktraces
- fixed issue #5736: Foxx HTTP API responds with 500 error when request body is too short
- fixed issue #5831: custom queries in the UI could not be loaded if the user only has read access to the _system database
- fixed internal issue #2566: corrected web UI alignment of the nodes table
- fixed internal issue #2869: when attaching a follower with a global applier to an authenticated leader, already existing users were not replicated; all users created/modified later are replicated
- fixed internal issue #2865: when dumping from an authenticated ArangoDB, the users were not included
- fixed issue #5943: misplaced database UI icon and wrong cursor type were used
- fixed issue #5354: updated the web UI JSON editor, improved usability
- fixed issue #5648: fixed error message when saving unsupported document types
- fixed issue #6076: Segmentation fault after AQL query.
  This also fixes issues #6131 and #6174.
- fixed issue #5884: Subquery nodes are no longer created on DBServers
- fixed issue #6031: Broken LIMIT in nested list iterations
- fixed internal issue #2812: Cluster fails to create many indexes in parallel
- intermediate commits in the RocksDB engine are now only enabled in standalone AQL queries (not within a JS transaction), standalone truncate, as well as for the "import" API
- bug fix: a race condition could request data from an Agency registry that did not exist yet. This caused a throw that would end the Supervision thread. All registry query APIs no longer throw exceptions.