Releases: cinchapi/concourse

Version 0.11.10

11 May 18:35

  • Enhanced Memory Management for Large Term Indexing - Fixed a critical issue where Concourse Server would crash when indexing large search terms on systems with disabled memory swapping (such as Container-Optimized OS on Google Kubernetes Engine). Previously, when encountering large search terms to index, Concourse Server would always attempt to use off-heap memory to preserve heap space for other operations. If memory pressure occurred during processing, Concourse would detect this signal from the OS and fall back to a file-based approach. However, on swap-disabled systems like Container-Optimized OS, instead of receiving a graceful memory pressure signal, Concourse would be immediately OOMKilled before any fallback mechanism could activate. With this update, Concourse Server now proactively estimates required memory before attempting off-heap processing. If sufficient memory is available, it proceeds with the original approach (complete with file-based fallback capability). But, if insufficient memory is detected upfront, Concourse immediately employs a more rudimentary processing mechanism that requires no additional memory, preventing OOMKill scenarios while maintaining indexing functionality in memory-constrained environments.
  • Configurable Garbage Collection - Added the force_g1gc configuration option to allow Concourse Server to use the Garbage-First (G1) garbage collector on JDK 8. When enabled, Concourse Server will configure G1GC with optimized settings based on the available heap size and CPU cores. This option is particularly beneficial for deployments with large heaps (>4GB) or where consistent response times are critical, as G1 provides more predictable pause times than the default collector. An example configuration appears after this list.
  • Fixed a bug that prevented Concourse Server from properly starting if a String configuration value was used for a variable that does not expect a String (e.g., max_search_substring_length = "40"). Now, Concourse Server will correctly parse all configuration values to the appropriate type or use the default value if there is an error when parsing.
  • Enhanced the concourse data repair CLI so that it fixes any inconsistencies between the inventory (which catalogs every record that was ever populated) and the repaired data.
  • Added a new ping() method to the Concourse API that allows clients to test the connection to the server. This method is designed to be lightweight and can be used to check server responsiveness or measure latency without incurring additional overhead for data processing. The method returns true if the server is responsive and false if the connection has failed, making it useful for health checks and connection monitoring. A usage sketch appears after this list.
  • GH-570: Fixed a regression where, when sorting on a single key, the result set failed to include data for records in which the sort key was not present, even if that data would fit in the requested page.
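
For illustration, the force_g1gc option described above might be enabled in concourse.prefs as follows; the exact value syntax is an assumption based on the option name, so consult the configuration documentation for the definitive form:

```
# concourse.prefs
force_g1gc = true
```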
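
And here is a minimal health-check sketch for the new ping() method using the Java driver; the connection details (Concourse.connect() with defaults) are assumptions, and only ping() itself comes from this release:

```java
import com.cinchapi.concourse.Concourse;

public class HealthCheck {

    public static void main(String[] args) {
        // Assumes a server reachable with the driver's default connection settings.
        Concourse concourse = Concourse.connect();
        try {
            long start = System.nanoTime();
            boolean responsive = concourse.ping(); // false if the connection has failed
            long micros = (System.nanoTime() - start) / 1000;
            System.out.println("responsive = " + responsive + " in " + micros + " microseconds");
        }
        finally {
            concourse.exit(); // release the client connection
        }
    }
}
```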

Version 0.11.9

30 Apr 18:06

  • Improved the performance of select operations that specify selection keys and require both sorting and pagination. The improvements come from smarter heuristics that determine the most efficient execution path. The system now intelligently decides whether to:

    • Sort all records first and then select only the paginated subset of data,
    • Select all data first and then apply sorting and pagination, or
    • If the Order clause only uses a single key, use that key's index to look up values in sorted order and filter out records that are not in the desired record set.

    This decision is based on a cost model that compares the number of lookups required for each approach, considering factors such as the number of keys being selected, the number of records being processed, the pagination limit, and whether the sort keys overlap with the selected keys. This optimization significantly reduces the number of database lookups required for queries that combine sorting and pagination, particularly when working with large datasets where only a small page of results is needed.
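
The sketch below illustrates the flavor of such a cost comparison in Java; the class name, the lookup-count formulas and the plan names are illustrative assumptions, not Concourse Server's actual cost model:

```java
// Illustrative sketch only: the plan names, variables and lookup-count formulas
// below are assumptions and do not reflect Concourse Server's actual cost model.
final class SortPaginatePlanner {

    enum Plan { SORT_THEN_SELECT, SELECT_THEN_SORT, ORDER_KEY_INDEX }

    static Plan choose(long numRecords, int numSelectedKeys, int pageLimit,
            int numOrderKeys, boolean orderKeysOverlapSelection) {
        // Cost of sorting every record first, then fetching data only for the page.
        long sortFirst = numRecords * numOrderKeys + (long) pageLimit * numSelectedKeys;
        // Cost of fetching data for every record, then sorting/paginating in memory.
        long selectFirst = numRecords * numSelectedKeys
                + (orderKeysOverlapSelection ? 0 : numRecords * numOrderKeys);
        if(numOrderKeys == 1 && sortFirst <= selectFirst) {
            // A single order key lets the planner walk that key's index in sorted
            // order and filter out records that aren't in the desired result set.
            return Plan.ORDER_KEY_INDEX;
        }
        return sortFirst <= selectFirst ? Plan.SORT_THEN_SELECT : Plan.SELECT_THEN_SORT;
    }
}
```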

  • Improved the performance of multi-key selection commands that occur during concurrent Buffer transport operations. Previously, when selecting multiple keys while data was being transported for indexing, each primitive select operation would acquire a lock that blocked transports, but transports could still occur between operations, slowing down the overall read. Now, these commands use an advisory lock that blocks all transports until the entire bulk read completes. This optimization significantly improves performance in real-world scenarios with simultaneous reads and writes. In the future, this advisory locking mechanism will be extended to other bulk and atomic operations.
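
A rough sketch of the advisory locking idea follows; the class and method names are assumptions and do not mirror Concourse Server's internals:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Illustrative sketch only: an advisory lock that a bulk read holds for its
// entire duration so Buffer transports cannot interleave between lookups.
final class TransportAdvisoryLock {

    private final ReentrantReadWriteLock advisory = new ReentrantReadWriteLock();

    /** Buffer transport work waits until no bulk read holds the advisory lock. */
    void transport(Runnable indexingWork) {
        advisory.writeLock().lock();
        try {
            indexingWork.run();
        }
        finally {
            advisory.writeLock().unlock();
        }
    }

    /** A multi-key read holds the advisory lock for its entire duration instead
     *  of locking around each primitive key lookup. */
    <T> T bulkRead(Supplier<T> selection) {
        advisory.readLock().lock();
        try {
            return selection.get();
        }
        finally {
            advisory.readLock().unlock();
        }
    }
}
```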

  • Implemented a caching mechanism for parsed keys to eliminate repeated validation and processing throughout the system. Previously, keys (especially navigation keys) were repeatedly validated against the WRITABLE_KEY regex and tokenized for every operation, even when the same keys were used multiple times. This optimization significantly improves performance for all operations that involve key validation and navigation key processing, with particularly noticeable benefits in scenarios that use the same keys repeatedly across multiple records.
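
A minimal sketch of such a cache, assuming a simplified stand-in for the WRITABLE_KEY regex and navigation key tokenization:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.regex.Pattern;

// Illustrative sketch only: the pattern below is a simplified stand-in for
// Concourse's actual WRITABLE_KEY validation.
final class KeyCache {

    private static final Pattern WRITABLE_KEY = Pattern.compile("^[a-zA-Z0-9_]+$"); // assumed
    private static final Map<String, String[]> CACHE = new ConcurrentHashMap<>();

    /** Validate and tokenize a (possibly navigation) key at most once. */
    static String[] parse(String key) {
        return CACHE.computeIfAbsent(key, k -> {
            String[] stops = k.split("\\.");
            for (String stop : stops) {
                if(!WRITABLE_KEY.matcher(stop).matches()) {
                    throw new IllegalArgumentException(k + " is not a writable key");
                }
            }
            return stops;
        });
    }
}
```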

  • Eliminated unnecessary re-sorting of Database data that's already in sorted order. Previously, both the Buffer and Atomic Operation/Transaction queues would force a re-sort on all initial read context that was pulled from the Database (even when that data was already sorted because the Database retrieved it from an index or intentionally sorted it before returning). Now, those stores always apply their Writes to the context under the assumption that the context is sorted and will maintain sorted order, which greatly cuts down on the overhead of intermediate processing for read operations.

  • Improved the concourse data repair CLI to detect and fix "imbalanced" data, an invalid state caused by a bug or error in Concourse Server's enforcement that any write about a topic must either be a net-new ADD for that topic or offset a previously inserted write for that topic (e.g., a REMOVE can only exist if there was a prior ADD and vice versa). When an unoffset write bypasses this constraint, some read operations will break entirely. The enhanced CLI goes beyond fixing duplicate transaction applications and now properly balances data in place, preserving the intended state and fully restoring read functionality without any downtime.

  • Implemented auto-healing for ConnectionPools where failed connections are detected and automatically replaced when they are released back to the pool.

Version 0.11.8

15 Apr 22:48

Navigation Queries
  • Optimized Navigation Key Traversal: To minimize the number of lookups required when querying, we've implemented a smarter traversal strategy for navigation keys used in a Criteria/Condition. The system now automatically chooses between:

    • Forward Traversal: Starting from the beginning of the path and following links forward (traditional approach)
    • Reverse Traversal: Starting from records matching the final condition and working backwards

    For example, a query like identity.credential.email = [email protected] likely returns few records, so reverse traversal is more efficient - first finding records where email = [email protected] and then tracing backwards through the document graph. Conversely, a query like identity.credential.numLogins > 5 that likely matches many records is better handled with forward traversal, starting with records containing links from the identity key and following the path forward.

    This optimization significantly improves performance for navigation queries where the initial key in the path has high cardinality (many unique values across records), but the final condition is more selective (e.g., a specific email address).
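
A simplified sketch of the direction choice, assuming hypothetical record-count estimates; the names are illustrative and not Concourse internals:

```java
// Illustrative sketch only: choose a traversal direction for a navigation key
// condition (e.g., identity.credential.email = <value>) based on which end of
// the path is estimated to be more selective.
final class NavigationTraversalPlanner {

    enum Direction { FORWARD, REVERSE }

    static Direction choose(long recordsWithFirstKey, long recordsMatchingFinalCondition) {
        // If the final condition matches fewer records than contain the first key
        // in the path, start from those matches and trace links backwards;
        // otherwise, start at the beginning of the path and follow links forward.
        return recordsMatchingFinalCondition < recordsWithFirstKey
                ? Direction.REVERSE
                : Direction.FORWARD;
    }
}
```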

Version 0.11.7

07 Apr 22:41

  • Fixed a bug that made it possible to leak filesystem resources by opening duplicate file descriptors for the same Segment file. At scale, this could prematurely lead to "too many open files" errors.
  • GH-534: Fixed a bug that caused the CONCOURSE_HEAP_SIZE environment variable, if set, not to be read on server startup.

Version 0.11.6

06 Jul 11:27

  • Added new configuration options for initializing Concourse Server with custom admin credentials upon first run. These options enhance security by allowing non-default usernames and passwords to be set before starting the server (an example configuration appears after this list).
    • The init_root_username option in concourse.prefs can be used to specify the username for the initial administrator account.
    • The init_root_password option in concourse.prefs can be used to specify the password for the initial administrator account.
  • Exposed the default JMX port, 9010, in the Dockerfile.
  • Fixed a bug that kept HELP documentation from being packaged with Concourse Shell and prevented it from being displayed.
  • Added a fallback option to display Concourse Shell HELP documentation in contexts when the less command isn't available (e.g., IDEs).
  • Fixed a bug that caused Concourse Server to unnecessarily add error logging whenever a client disconnected.
  • Added the ability to create ConnectionPools that copy the credentials and connection information from an existing handler. These copying connection pools can be created by using the respective "cached" or "fixed" factory methods in the ConnectionPool class that take a Concourse parameter.
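
A usage sketch for the copying connection pools, assuming the Java driver and a reachable server; the select call is just an arbitrary read:

```java
import java.util.Map;
import java.util.Set;

import com.cinchapi.concourse.Concourse;
import com.cinchapi.concourse.ConnectionPool;

public class CopiedPoolExample {

    public static void main(String[] args) {
        // An existing handler whose credentials and connection info are copied.
        Concourse handler = Concourse.connect();
        // The "cached" factory method that accepts a Concourse parameter; a "fixed"
        // pool can be created analogously.
        ConnectionPool pool = ConnectionPool.newCachedConnectionPool(handler);
        Concourse concourse = pool.request();
        try {
            Map<String, Set<Object>> data = concourse.select(1); // arbitrary read
            System.out.println(data);
        }
        finally {
            pool.release(concourse);
        }
    }
}
```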
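
And, for illustration, the new init_root_* options might be set in concourse.prefs like so; the values shown are placeholders:

```
# concourse.prefs
init_root_username = <custom-admin-username>
init_root_password = <custom-admin-password>
```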

Version 0.11.5

06 Nov 01:36

  • Fixed a bug that made it possible for a Transaction to silently fail and cause a deadlock when multiple distinct writes committed in other operations caused that Transaction to become preempted (i.e., unable to continue or successfully commit because of a version change).
  • Fixed a bug that allowed a Transaction's atomic operations (e.g., verifyAndSwap) to ignore range conflicts stemming from writes committed in other operations. As a result, the atomic operation would successfully commit within its Transaction, but the Transaction would inevitably fail due to the aforementioned conflict. The correct (and now current) behaviour is that the atomic operation fails (so it can be retried) without dooming the entire Transaction to failure.
  • Fixed a bug that caused an innocuous Exception to be thrown when importing CSV data using the interactive input feature of the concourse import CLI.
  • Fixed a bug that caused an inexplicable failure to occur when invoking a plugin method that indirectly depended on result set sorting.

Version 0.11.4

04 Jul 23:42

  • Slightly improved the performance of result sorting by removing unnecessary intermediate data gathering.
  • Improved random access performance for all result sets.

Version 0.11.3

04 Jun 21:00

  • Improved the performance of commands that select multiple keys from a record by adding heuristics to the storage engine that reduce the number of overall lookups required. As a result, commands that select multiple keys are up to 96% faster.
  • Streamlined the logic for reads that have a combination of time, order and page parameters by adding more intelligent heuristics for determining the most efficient code path. For example, a read that only has time and page parameters (e.g., no order) does not need to be performed atomically. Previously, those reads converged into an atomic code path, but now a separate code path exists so those reads can be more performant. Additionally, the logic is more aware of when attempts to sort or paginate data don't actually have an effect and now avoids unnecessary data transformations or re-collection.
  • Fixed a bug that caused Concourse Server to not use the Strategy framework to determine the most efficient lookup source (e.g., field, record, or index) for navigation keys.
  • Added support for querying on the intrinsic identifier of Records, as both a selection and evaluation key. The record identifier can be referenced using the $id$ key (NOTE: this must be properly escaped in concourse shell as \$id\$).
    • It is useful to include the Record identifier as a selection key for some navigation reads (e.g., select(["partner.name", "partner.\$id\$"], 1)).
    • It is useful to include the Record identifier as an evaluation key in cases where you want to explicitly exclude a record from matching a Condition (e.g., select(["partner.name", "partner.\$id\$"], "\$id\$ != 2")).
  • Fixed a bug that caused historical reads with sorting to not be performed atomically, potentially violating ACID semantics.
  • Fixed a bug that caused commands to find data matching a Condition (e.g., Criteria or CCL Statement) to not be fully performed atomically, potentially violating ACID semantics.

Version 0.11.2

18 Mar 20:39

  • Fixed a bug that caused Concourse Server to incorrectly detect when an attempt was made to atomically commit multiple Writes that toggle the state of a field (e.g. ADD name as jeff in 1, REMOVE name as jeff in 1, ADD name as jeff in 1) in user-defined transactions. As a result of this bug, all field-toggling Writes were committed instead of the desired behaviour of committing at most the one Write required to obtain the intended field state. Committing multiple writes that toggled the field state within the same transaction could cause failures, unexplained results or fatal inconsistencies when reconciling data. A sketch of this write pattern appears after this list.
  • Added a fallback to automatically switch to reading data from Segment files using traditional IO in the event that Concourse Server ever exceeds the maximum number of open memory mapped files allowed (as specified by the vm.max_map_count property on some Linux systems).
  • Removed the DEBUG logging (added in 0.11.1) that provides details on the execution path chosen for each lookup because it is too noisy and drastically degrades performance.
  • Fixed a bug in the way that Concourse Server determined if duplicate data existed in the v3 storage files, which caused the concourse data repair CLI to no longer work properly (compared to how it worked on the v2 storage files).
  • Fixed a regression that caused a memory leak when data values were read from disk. The nature of the memory leak caused a degradation in performance because Concourse Server was forced to evict cached records from memory more frequently than in previous versions.
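
For reference, here is a minimal Java driver sketch of the field-toggling write pattern described above; the record number and values mirror the example in the note, while the connection details are assumptions:

```java
import com.cinchapi.concourse.Concourse;

public class ToggleWriteExample {

    public static void main(String[] args) {
        Concourse concourse = Concourse.connect(); // connection details assumed
        try {
            concourse.stage(); // start a user-defined transaction
            // These writes toggle the state of the "name" field in record 1. The fix
            // ensures that at most one net Write is committed to reach the intended
            // field state, instead of committing every toggling Write.
            concourse.add("name", "jeff", 1);
            concourse.remove("name", "jeff", 1);
            concourse.add("name", "jeff", 1);
            concourse.commit();
        }
        finally {
            concourse.exit();
        }
    }
}
```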

Version 0.11.1

10 Mar 01:34

  • Upgraded to CCL version 3.1.2 to fix a regression that caused parenthetical expressions within a Condition containing the LIKE, REGEX, NOT_LIKE and NOT_REGEX operators to mistakenly throw a SyntaxException when being parsed.
  • Added the ConcourseCompiler#evaluate(ConditionTree, Multimap) method that uses the Operators#evaluate static method to perform local evaluation.
  • Fixed a bug that, in some cases, caused the wrong default environment to be used when invoking server-side data CLIs (e.g., concourse data <action>). When a data CLI was invoked without specifying the environment using the -e <environment> flag, the stock default environment was always used instead of the default_environment that was specified in the Concourse Server configuration.
  • Fixed a bug that caused the concourse data compact CLI to inexplicably die when invoked while enable_compaction was set to false in the Concourse Server configuration.
  • Fixed the usage message description of the concourse export and concourse import CLIs.
  • Fixed a bug that caused Concourse Shell to fail to parse short syntax within statements containing an open parenthesis as described in GH-463 and GH-139.
  • Fixed a bug that caused the Strategy framework to select the wrong execution path when looking up historical values for order keys. This caused a regression in the performance for relevant commands.
  • Added DEBUG logging that provides details on the execution path chosen for each lookup.
  • Fixed a bug that caused Order/Sort instructions that contain multiple clauses referencing the same key to drop all but the last clause for that key.
  • Fixed a bug that caused the concourse export CLI to not process some combinations of command line arguments properly.
  • Fixed a bug that caused an error to be thrown when using the max or min function over an entire index as an operation value in a CCL statement.
  • Fixed several corner case bugs with Concourse's arithmetic engine that caused the calculate functions to 1) return inaccurate results when aggregating numbers of different types and 2) inexplicably throw an error when a calculation was performed on data containing null values.