MySQL for Mac Version History


MySQL for Mac is a business-critical database application built for enterprise organizations. It gives enterprise developers, database administrators, and ISVs a range of new enterprise features that improve the efficiency of developing, deploying, and managing industrial-strength applications. If you need a GUI for MySQL databases, you can download NAVICAT (MySQL GUI), which supports importing MySQL, MS SQL, MS Access, Excel, CSV, XML, and other formats into MySQL. MyS...

Update:2021-05-11
Info:

What's new in this version:

Packaging Notes:
- Binary packages that include curl rather than linking to the system curl library have been upgraded to use curl 7.76.0

Fixed:
- On Fedora 34, builds from source failed due to an undefined reference to symbol crc32_z@@ZLIB_1.2.9
- For a prepared, implicitly grouped SELECT statement in which the WHERE clause was determined always to be false, the result of some aggregate functions could sometimes be picked up from the previous execution of the statement.
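
For context, the second fix concerns implicitly grouped prepared statements whose WHERE clause the optimizer proves impossible. A minimal sketch of the affected statement shape, using a hypothetical table t:

CREATE TABLE t (a INT);
PREPARE ps FROM 'SELECT MAX(a), COUNT(*) FROM t WHERE a > 10 AND a < 5';
EXECUTE ps;  -- implicitly grouped; the WHERE clause is always false
EXECUTE ps;  -- each execution should independently return NULL, 0; before the
             -- fix, MAX(a) could pick up a value from the previous execution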


Update:2021-01-18
Info:

What's new in this version:

Added or Changed:
InnoDB: Performance was improved for the following operations:
- Dropping a large tablespace on a MySQL instance with a large buffer pool (>32GB).
- Dropping a tablespace with a significant number of pages referenced from the adaptive hash index.
- Truncating temporary tablespaces.
- The pages of dropped or truncated tablespaces and associated AHI entries are now removed from the buffer pool passively as pages are encountered during normal operations. Previously, dropping or truncating tablespaces initiated a full list scan to remove pages from the buffer pool immediately, which negatively impacted performance. (Bug #98869)
- InnoDB: The new AUTOEXTEND_SIZE option defines the amount by which InnoDB extends the size of a tablespace when it becomes full, making it possible to extend tablespace size in larger increments. Allocating space in larger increments helps to avoid fragmentation and facilitates ingestion of large amounts of data. The AUTOEXTEND_SIZE option is supported with the CREATE TABLE, ALTER TABLE, CREATE TABLESPACE, and ALTER TABLESPACE statements. For more information, see Tablespace AUTOEXTEND_SIZE Configuration.
- An AUTOEXTEND_SIZE column was added to the INFORMATION_SCHEMA.INNODB_TABLESPACES table (see the usage sketch after this list).
- InnoDB: InnoDB now supports encryption of doublewrite file pages belonging to encrypted tablespaces. The pages are encrypted using the encryption key of the associated tablespace. For more information, see InnoDB Data-at-Rest Encryption.
- InnoDB: InnoDB atomics code was revised to use C++ std::atomic.
- When invoked with the --all-databases option, mysqldump now dumps the mysql database first, so that when the dump file is reloaded, any accounts named in the DEFINER clause of other objects will already have been created
- Some overhead for disabled Performance Schema and LOCK_ORDER tool instrumentation was identified and eliminated
- For BLOB and TEXT columns that have a default value expression, the INFORMATION_SCHEMA.COLUMNS table and SHOW COLUMNS statement now display the expression
- CRC calculations for binlog checksums are faster on ARM platforms. Thanks to Krunal Bauskar for the contributiong
- MySQL Server’s asynchronous connection failover mechanism now supports Group Replication topologies, by automatically monitoring changes to group membership and distinguishing between primary and secondary servers. When you add a group member to the source list and define it as part of a managed group, the asynchronous connection failover mechanism updates the source list to keep it in line with membership changes, adding and removing group members automatically as they join or leave. The new asynchronous_connection_failover_add_managed() and asynchronous_connection_failover_delete_managed() UDFs are used to add and remove managed sources (see the usage sketch after this list).
- The connection is failed over to another group member if the currently connected source goes offline, leaves the group, or is no longer in the majority, and also if the currently connected source does not have the highest weighted priority in the group. For a managed group, a source’s weight is assigned depending on whether it is a primary or a secondary server. So assuming that you set up the managed group to give a higher weight to a primary and a lower weight to a secondary, when the primary changes, the higher weight is assigned to the new primary, so the replica changes over the connection to it. This behavior also applies to single (non-managed) servers, so the connection is now failed over if another source server is available that has a higher weighted priority.
- Replication channels can now be set to assign a GTID to replicated transactions that do not already have one, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement. This feature enables replication from a source that does not use GTID-based replication, to a replica that does. For a multi-source replica, you can have a mix of channels that use ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS, and channels that do not. The GTID can include the replica’s own server UUID or a server UUID that you assign to identify transactions from different sources. (A usage sketch follows this list.)
- Note that a replica set up with ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS on any channel cannot be promoted to replace the replication source server in the event that a failover is required, and a backup taken from the replica cannot be used to restore the replication source server. The same restriction applies to replacing or restoring other replicas that use ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS on any channel. The GTID set (gtid_executed) from a replica set up with ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS is nonstandard and should not be transferred to another server, or compared with another server's gtid_executed set.
- For a multithreaded replica (where slave_parallel_workers is greater than 0), setting slave_preserve_commit_order=1 ensures that transactions are executed and committed on the replica in the same order as they appear in the replica's relay log. Each executing worker thread waits until all previous transactions are committed before committing. If a worker thread fails to execute a transaction because a possible deadlock was detected, or because the transaction's execution time exceeded a relevant wait timeout, it automatically retries the number of times specified by slave_transaction_retries before stopping with an error. Transactions with a non-temporary error are not retried.
- The replication applier on a multithreaded replica has always handled data access deadlocks that were identified by the storage engines involved. However, some other types of lock were not detected by the replication applier, such as locks involving access control lists (ACLs) or metadata locking (for example, FLUSH TABLES WITH READ LOCK statements). This could lead to three-actor deadlocks with the commit order locking, which could not be resolved by the replication applier, and caused replication to hang indefinitely. From MySQL 8.0.23, deadlock handling on multithreaded replicas that preserve the commit order has been enhanced to mitigate these types of deadlocks. The deadlocks are not specifically resolved by the replication applier, but the applier is aware of them and initiates automatic retries for the transaction, rather than hanging. If the retries are exhausted, replication stops in a controlled manner so that the deadlock can be resolved manually.
- The new temptable_max_mmap variable defines the maximum amount of memory the TempTable storage engine is permitted to allocate from memory-mapped temporary files before it starts storing data to InnoDB internal temporary tables on disk. A setting of 0 disables allocation of memory from memory-mapped temporary files. For more information, see Internal Temporary Table Use in MySQL.
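
The following sketch shows how the tablespace, failover, GTID-assignment, and TempTable additions above can be exercised. All object names (ts1, ch1), the group UUID, and the host are hypothetical placeholders; weights and sizes are examples only:

-- AUTOEXTEND_SIZE on a tablespace, and the new INNODB_TABLESPACES column:
CREATE TABLESPACE ts1 ADD DATAFILE 'ts1.ibd' AUTOEXTEND_SIZE = 64M;
SELECT NAME, AUTOEXTEND_SIZE FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
WHERE NAME = 'ts1';

-- Register a managed group as failover sources (arguments: channel, managed
-- type, managed name, host, port, network namespace, primary weight,
-- secondary weight):
SELECT asynchronous_connection_failover_add_managed(
  'ch1', 'GroupReplication', 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',
  'gr-primary.example.com', 3306, '', 80, 60);

-- Assign GTIDs to anonymous transactions replicated on a channel:
CHANGE REPLICATION SOURCE TO
  ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS = LOCAL
  FOR CHANNEL 'ch1';

-- Cap (here, disable) TempTable allocation from memory-mapped files:
SET GLOBAL temptable_max_mmap = 0;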

Fixed:
- InnoDB: A CREATE TABLE operation that specified the COMPRESSION option was permitted with a warning on a system that does not support hole punching. The operation now fails with an error instead
- InnoDB: A MySQL DB system restart following an upgrade that was initiated while a data load operation was in progress raised an assertion failure
- InnoDB: An error message regarding the number of truncate operations on the same undo tablespace between checkpoints incorrectly indicated a limit of 64. The limit was raised from 64 to 50,000 in MySQL 8.0.22
- InnoDB: rw_lock_t and buf_block_t source code structures were reduced in size
- InnoDB: An InnoDB transaction became inconsistent after creating a table using a storage engine other than InnoDB from a query expression that operated on InnoDB tables
- InnoDB: In some circumstances, such as when an existing gap lock inherits a lock from a deleted record, the number of locks that appear in the INFORMATION_SCHEMA.INNODB_TRX table could diverge from the actual number of record locks.
- Thanks to Fungo Wang from Alibaba for the patch
- InnoDB: An off-by-one error in Fil_system sharding code was corrected, and the maximum number of shards (MAX_SHARDS) was changed to 69
- InnoDB: The TempTable storage engine memory allocator allocated extra blocks of memory unnecessarily
- InnoDB: A SELECT COUNT(*) operation on a table containing uncommitted data performed poorly due to unnecessary I/O.
- Thanks to Brian Yue for the contribution
- InnoDB: A race condition when shutting down the log writer raised an assertion failure
- InnoDB: Page cleaner threads were not utilized optimally in sync-flush mode, which could cause page flush operations to slow down or stall in some cases. Sync-flush mode occurs when InnoDB is close to running out of free space in the redo log, causing the page cleaner coordinator to initiate aggressive page flushing
- InnoDB: A high frequency of updates while undo log truncation was enabled caused purge to lag. The lag was due to the innodb_purge_rseg_truncate_frequency setting being changed temporarily from 128 to 1 when an undo tablespace was selected for truncation. The code that modified the setting has been removed
- InnoDB: Automated truncation of undo tablespaces caused a performance regression. To address this issue, undo tablespace files are now initialized at 16MB and extended by a minimum of 16MB. To handle aggressive growth, the file extension size is doubled if the previous file extension happened less than 0.1 seconds earlier. Doubling of the extension size can occur multiple times to a maximum of 256MB. If the previous file extension occurred more than 0.1 seconds earlier, the extension size is reduced by half, which can also occur multiple times, to a minimum of 16MB. Previously, the initial size of an undo tablespace depended on the InnoDB page size, and undo tablespaces were extended four extents at a time.
- If the AUTOEXTEND_SIZE option is defined for an undo tablespace, the undo tablespace is extended by the greater of the AUTOEXTEND_SIZE setting and the extension size determined by the logic described above.
- When an undo tablespace is truncated, it is normally recreated at 16MB in size, but if the current file extension size is larger than 16MB, and the previous file extension happened within the last second, the new undo tablespace is created at a quarter of the size defined by the innodb_max_undo_log_size variable.
- Stale undo tablespace pages are no longer removed at the next checkpoint. Instead, the pages are removed in the background by the InnoDB master thread. (Bug #32020900, Bug #101194)
- InnoDB: A posix_fallocate() failure while preallocating space for a temporary tablespace raised an error and caused an initialization failure. A warning is now issued instead, and InnoDB falls back to the non-posix_fallocate() method for preallocating space
- InnoDB: An invalid pointer caused a shutdown failure on a MySQL Server compiled with the DISABLE_PSI_MEMORY source configuration option enabled
- InnoDB: A long SX lock held by an internal function that calculates new statistics for a given index caused a failure
- InnoDB: The INFORMATION_SCHEMA.INNODB_TABLESPACES table reported a FILE_SIZE of 0 for some tables and schemas. When the associated tablespace was not in the memory cache, the tablespace name was used to determine the tablespace file name, which was not always a reliable method. The tablespace ID is now used instead. Using the tablespace name remains as a fallback method
- InnoDB: After dropping a FULLTEXT index and renaming the table to move it to a new schema, the FULLTEXT auxiliary tables were not renamed accordingly and remained in the old schema directory
- InnoDB: After upgrading to MySQL 8.0, a failure occurred when attempting to perform a DML operation on a table that was previously defined with a full-text search index
- InnoDB: Importing a tablespace with a page-compressed table did not report a schema mismatch error for source and destination tables defined with a different COMPRESSION setting. The COMPRESSION setting of the exported table is now saved to the .cfg metadata file during the FLUSH TABLES ... FOR EXPORT operation, and that information is checked on import to ensure that both tables are defined with the same COMPRESSION setting
- InnoDB: Dummy keys used to check if the MySQL Keyring plugin is functioning were left behind in an inactive state, and the number of inactive dummy keys increased over time. The actual master key is now used instead, if present. If no master key is available, a dummy master key is generated
- InnoDB: Querying the INFORMATION_SCHEMA.FILES table after moving the InnoDB system tablespace outside of the data directory raised a warning indicating that the innodb_system filename is unknown
- InnoDB: In a replication scenario involving a replica with binary logging or log_slave_updates disabled, the server failed to start due to an excessive number of gaps in the mysql.gtid_executed table. The gaps occurred for workloads that included both InnoDB and non-InnoDB transactions. GTIDs for InnoDB transactions are flushed to the mysql.gtid_executed table by the GTID persister thread, which runs periodically, while GTIDs for non-InnoDB transactions are written to the mysql.gtid_executed table directly by replica server threads. The GTID persister thread fell behind as it cycled through merging entries and compressing the mysql.gtid_executed table. As a result, the size of the GTID flush list for InnoDB transactions grew over time along with the number of gaps in the mysql.gtid_executed table, eventually causing a server failure and subsequent startup failures. To address this issue, the GTID persister thread now writes GTIDs for both InnoDB and non-InnoDB transactions, and foreground commits are forced to wait if the GTID persister thread falls behind. Also, the gtid_executed_compression_period default setting was changed from 1000 to 0 to disable explicit compression of the mysql.gtid_executed table by default.
- Thanks to Venkatesh Prasad for the contribution
- InnoDB: Persisting GTID values for XA transactions affected XA transaction performance. Two GTID values are generated for XA transactions, one for the prepare stage and another for the commit stage. The first GTID value is written to the undo log and later overwritten by the second GTID value. Writing of the second GTID value could only occur after flushing the first GTID value to the gtid_executed table. Space is now reserved in the undo log for both XA transaction GTID values
- InnoDB: InnoDB source files were updated to address warnings produced when building Doxygen source code documentation
- InnoDB: The full-text search synchronization thread attempted to read a previously-freed word from the index cache
- InnoDB: A 20µs sleep in the buf_wait_for_read() function introduced with parallel read functionality in MySQL 8.0.17 took 1ms on Windows, causing an unexpected timeout when running certain tests. Also, AIO threads were found to have uneven amounts of waiting operating system IO requests
- InnoDB: Cleanup in certain replicated XA transactions failed to reattach transaction object (trx_t), which raised an assertion failure
- InnoDB: The tablespace encryption type setting was not properly updated due to a failure during the resumption of an ALTER TABLESPACE ENCRYPTION operation following a server failure
- InnoDB: An interrupted tablespace encryption operation did not update the encrypt_type table option information in the data dictionary when the operation resumed processing after the server was restarted
- InnoDB: Internal counter variables associated with thread sleep delay and threads entering and leaving InnoDB were revised to use C++ std::atomic. Built-in atomic operations were removed. Thanks to Yibo Cai from ARM for the contribution
- InnoDB: A relaxed memory order was implemented for dictionary memory variable fetch-add (dict_temp_file_num.fetch_add) and store (dict_temp_file_num.store) operations.
- InnoDB: A background thread that resumed a tablespace encryption operation after the server started failed to take a metadata lock on the tablespace, which permitted concurrent DDL operations and led to a race condition with the startup thread. The startup thread now waits until the tablespace metadata lock is taken
- InnoDB: Calls to numa_all_nodes_ptr were replaced by the numa_get_mems_allowed() function. Thanks to Daniel Black for the contribution
- Partitioning: ALTER TABLE t1 EXCHANGE PARTITION ... WITH TABLE t2 led to an assert when t1 was not a partitioned table
- Replication: The network_namespace parameter for the asynchronous_connection_failover_add_source() and asynchronous_connection_failover_delete_source() UDFs is no longer used from MySQL 8.0.23. These UDFs add and remove replication source servers from the source list for a replication channel for the asynchronous connection failover mechanism. The network namespace for a replication channel is managed using the CHANGE REPLICATION SOURCE statement, and has special requirements for Group Replication source servers, so it should no longer be specified in the UDFs
- Replication: When the system variable transaction_write_set_extraction=XXHASH64 is set, which is the default in MySQL 8.0 and a requirement for Group Replication, the collection of writes for a transaction previously had no upper size limit. Now, for standard source to replica replication, the numeric limit on write sets specified by binlog_transaction_dependency_history_size is applied, after which the write set information is discarded but the transaction continues to execute. Because the write set information is then unavailable for the dependency calculation, the transaction is marked as non-concurrent, and is processed sequentially on the replica. For Group Replication, the process of extracting the writes from a transaction is required for conflict detection and certification on all group members, so the write set information cannot be discarded if the transaction is to complete. The byte limit set by group_replication_transaction_size_limit is applied instead of the numeric limit, and if the limit is exceeded, the transaction fails to execute
- Replication: When mysqlbinlog’s --print-table-metadata option was used, mysqlbinlog used a different method for assessing numeric fields than the method used by the server when writing to the binary log, resulting in incorrect metadata output relating to these fields. mysqlbinlog now uses the same method as the server
- Replication: When using network namespaces in a replication channel and the initial connection from the replica to the master was interrupted, subsequent connection attempts failed to use the correct namespace information
- Replication: If the Group Replication applier channel (group_replication_applier) was holding a lock on a table, for example because of a backup in progress, and the member was expelled from the group and tried to rejoin automatically, the auto-rejoin attempt was unsuccessful and did not retry. Now, Group Replication checks during startup and rejoin attempts whether the group_replication_applier channel is already running. If that is the case at startup, an error message is returned. If that is the case during an auto-rejoin attempt, that attempt fails, but further attempts are made as specified by the group_replication_autorejoin_tries system variable
- Replication: If a group member was expelled and made an auto-rejoin attempt at a point when some tables on the instance were locked (for example while a backup was running), the attempt failed and no further attempts were made. This scenario is now handled correctly
- Replication: As the number of replicas replicating from a semisynchronous source server increased, locking contention could result in a performance degradation. The locking mechanisms used by the plugins have been changed to use shared locks where possible, avoid unnecessary lock acquisitions, and limit callbacks. The new behaviors can be implemented by enabling the following system variables:
- replication_sender_observe_commit_only=1 limits callbacks.
- replication_optimize_for_static_plugin_config=1 adds shared locks and avoids unnecessary lock acquisitions. This system variable must be disabled if you want to uninstall the plugin.
- Both system variables can be enabled before or after installing the semisynchronous replication plugin, and can be enabled while replication is running. Semisynchronous replication source servers can also get performance benefits from enabling these system variables, because they use the same locking mechanisms as the replicas (a usage sketch appears at the end of this list)
- Replication: On a multi-threaded replica where the commit order is preserved, worker threads must wait for all transactions that occur earlier in the relay log to commit before committing their own transactions. If a deadlock occurs because a thread waiting to commit a transaction later in the commit order has locked rows needed by a transaction earlier in the commit order, a deadlock detection algorithm signals the waiting thread to roll back its transaction. Previously, if transaction retries were not available, the worker thread that rolled back its transaction would exit immediately without signalling other worker threads in the commit order, which could stall replication. A worker thread in this situation now waits for its turn to call the rollback function, which means it signals the other threads correctly. (Bug #87796)
- Replication: GTIDs are only available on a server instance up to the number of non-negative values for a signed 64-bit integer (2 to the power of 63 minus 1). If you set the value of gtid_purged to a number that approaches this limit, subsequent commits can cause the server to run out of GTIDs and take the action specified by binlog_error_action. From MySQL 8.0.23, a warning message is issued when the server instance is approaching the limit
- Microsoft Windows: On Windows, running the MySQL server as a service caused shared-memory connections to fail
- JSON: JSON_ARRAYAGG() did not always perform proper error handling. (Bug #32012559, Bug #32181438)
- JSON: When updating a JSON value using JSON_SET(), JSON_REPLACE(), or JSON_REMOVE(), the target column can sometimes be updated in-place. This happened only when the target table of the update operation was a base table, but when the target table was an updatable view, the update was always performed by writing the full JSON value.
- Now in such cases, an in-place update (that is, a partial update) is also performed when the target table is an updatable view
- JSON: Work done in MySQL 8.0.22 to cause prepared statements to be prepared only once introduced a regression in the handling of dynamic parameters to JSON functions. All JSON arguments were classified as data type MYSQL_TYPE_JSON, which overlooked the fact that JSON functions take two kinds of JSON parameters—JSON values and JSON documents—and this distinction cannot be made with the data type only. For Bug #31667405, this problem was solved for comparison operators and the IN() operator by making it possible to tag a JSON argument as being a scalar value, while letting arguments to other JSON functions be treated as JSON documents.
- The present fix restores for a number of JSON functions their treatment of certain arguments as JSON values, as listed here:
- The first argument to MEMBER OF()
- The third, fifth, seventh, and subsequent odd-numbered arguments to the functions JSON_INSERT(), JSON_REPLACE(), JSON_SET(), JSON_ARRAY_APPEND(), and JSON_ARRAY_INSERT()
- JSON: When mysqld was run with --debug, attempting to execute a query that made use of a multi-valued index raised an error
- Use of the thread_pool plugin could result in Address Sanitizer warnings
- When a condition pushed down to a materialized derived table was only partially pushed down, the optimizer could, in some cases in which a query transformation had added new conditions to the WHERE condition, call the internal fix_fields() function for the condition that remained in the outer query block. A successful return from this function call was misinterpreted as an error, leading to the silent failure of the original statement
- Multiple calls to a stored procedure containing an ALTER TABLE statement that included an ORDER BY clause could cause a server exit
- Prepared statements involving stored programs could cause heap-use-after-free memory problems
- Queries on INFORMATION_SCHEMA tables that involved materialized derived tables could fail
- A potential buffer overflow was fixed. Thanks to Sifang Zhao for pointing out the issue, and for suggesting a fix (although it was not used)
- Conversion of FLOAT values to values of type INT could generate Undefined Behavior Sanitizer warnings
- In multiple-row queries, the LOAD_FILE() function evaluated to the same value for every row
- Generic Linux tar file distributions had too-restrictive file permissions after unpacking, requiring a manual chmod to correct
- For debug builds, prepared SET statements containing subqueries in stored procedures could raise an assertion
- For prepared statements, illegal mix of collations errors could occur for legal collation mixes
- The functions REGEXP_LIKE(), REGEXP_INSTR(), and REGEXP_REPLACE() raise errors for malformed regular expression patterns, but could also return NULL for such cases, causing subsequent debug asserts. Now we ensure that these functions do not return NULL except in certain specified cases.
- The function REGEXP_SUBSTR() can always return NULL, so no such check is needed, and for this function we make sure that one is not performed
- Testing an aggregate function for IS NULL or IS NOT NULL in a HAVING condition using WITH ROLLUP led to wrong results
- When a new aggregate function was added to the current query block because an inner query block had an aggregate function requiring evaluation in the current one, the server did not add rollup wrappers to it as needed
- For debug builds, certain CREATE TABLE statements with CHECK constraints could raise an assertion
- Incorrect BLOB field values were passed from InnoDB during a secondary engine load operation
- The LOCK_ORDER tool did not correctly represent InnoDB share exclusive locks
- The server did not properly handle an error raised when trying to use an aggregation function with an invalid column type as part of a hash join
- The length of the WORD column of the INFORMATION_SCHEMA.KEYWORDS table could change depending on table contents
- The Performance Schema host_cache table was empty and did not expose the contents of the host cache if the Performance Schema was disabled. The table now shows cache contents regardless of whether the Performance Schema is enabled
- A HANDLER READ statement sometimes hit an assert when a previous statement did not restore the original value of THD::mark_used_columns after use
- Importing a compressed table could cause an unexpected server exit if the table contained values that were very large when uncompressed
- Removed a memory leak that could occur when a subquery using a hash join and LIMIT was executed repeatedly
- A compilation failure on Ubuntu was corrected
- Memory used for storing partial-revokes information could grow excessively for sessions that executed a large number of statements
- The server did not handle all cases of the WHERE_CONDITION optimization correctly
- FLUSH TABLES WITH READ LOCK could block other sessions from executing SHOW TABLE STATUS
- In some cases, MIN() and MAX() incorrectly returned NULL when used as window functions with temporal or JSON values as arguments
- GRANT ... GRANT OPTION ... TO and GRANT ... TO ... WITH GRANT OPTION sometimes were not correctly written to the server logs
- For debug builds, CREATE TABLE using a partition list of more than 256 entries raised an assertion
- It was possible for queries in the file named by the init_file system variable to cause server startup failure
- When performing a hash join, the optimizer could register a false match between a negative integer value and a very large unsigned integer value
- SHOW VARIABLES could report an incorrect value for the partial_revokes system variable
- In the Performance Schema user_defined_functions table, the value of the UDF_LIBRARY column is supposed to be NULL for UDFs registered via the service API. The value was incorrectly set to the empty string
- The server automatic upgrade procedure failed to upgrade older help tables that used the latin1 character set
- Duplicate warnings could occur when executing an SQL statement that read the grant tables in serializable or repeatable-read transaction isolation level
- In certain queries with DISTINCT aggregates (which in general are solved by sorting before aggregation), the server used a temporary table instead of streaming due to the mistaken assumption that the logic for handling the temporary table performed deduplication. Now the server checks for the implied unique index instead, which is more robust and allows for the removal of unnecessary logic
- Certain combinations of lower_case_table_names values and schema names in Event Scheduler event definitions could cause the server to stall
- Calling one stored function from within another could produce a conflict in field resolution, resulting in a server exit
- User-defined functions defined without a udf_init() method could cause an unexpected server exit
- Setting the secure_file_priv system variable to NULL should disable its action, but instead caused the server to create a directory named NULL
- mysqlpump could exit unexpectedly due to improper simultaneous accesses to shared structures
- Uninstalling a component and deregistering user-defined functions (UDFs) installed by the component was not properly synchronized with whether the UDFs were currently in use
- Cleanup following execution of a prepared statement that performed a multi-table UPDATE or DELETE was not always done correctly, which meant that, following the first execution of such a prepared statement, the server reported a nonzero number of rows updated, even though no rows were actually changed
- For the engines which support primary key extension, when the total key length exceeded MAX_KEY_LENGTH or the number of key parts exceeded MAX_REF_PARTS, key parts of primary keys which did not fit within these limits were not added to the secondary key, but key parts of primary keys were unconditionally marked as part of secondary keys.
- This led to a situation in which the secondary key was treated as a covering index, which meant sometimes the wrong access method was chosen.
- This is fixed by modifying the way in which key parts of primary keys are added to secondary keys, so that those which do not fit within the limits previously mentioned are cleared
- When MySQL is configured with -DWITH_ICU=system, CMake now checks that the ICU library version is sufficiently recent
- When invoked with the --binary-as-hex option, mysql displayed NULL values as empty binary strings (0x).
- Selecting an undefined variable returned the empty binary string (0x) rather than NULL
- Enabling DISABLE_PSI_xxx Performance Schema-related CMake options caused build failures
- Some queries returned different results depending on the value of internal_tmp_mem_storage_engine.
- The root cause of this issue related to the fact that, when buffering rows for window functions, if the size of the in-memory temporary table holding these buffered rows exceeds the limit specified, a new temporary table is created on disk; the frame buffer partition offset is set at the beginning of a new partition to the total number of rows that have been read so far, and is updated specifically for use when the temporary table is moved to disk (this being used to calculate the hints required to process window functions). The problem arose because the frame buffer partition offset was not updated for the specific case when a new partition started while creating the temporary table on disk, which caused the wrong rows to be read.
- This issue is fixed by making sure to update the frame buffer partition offset correctly whenever a new partition starts while a temporary table is moved to disk
- While buffering rows for window functions, if the size of the in-memory temporary table holding these buffered rows exceeds the limit specified by temptable_max_ram, a new temporary table is created on disk. After the creation of the temporary table, hints used to process window functions need to be reset, since the temporary table is now moved to disk, making the existing hints unusable. When the creation of the temporary table on disk occurred when the first row in the frame buffer was being processed, the hints had not been initialized and trying to reset these uninitialized hints resulted in an unplanned server exit.
- This issue is fixed by adding a check to verify whether frame buffer hints have been initialized, prior to resetting them
- The Performance Schema could produce incorrect results for joins on a CHANNEL_NAME column when the index for CHANNEL_NAME was disabled with USE INDEX ()
- When removing unused window definitions, a subquery that was part of an ORDER BY was not removed
- In certain cases, the server did not handle multiply-nested subqueries correctly
- The recognized syntax for a VALUES statement includes an ORDER BY clause, but this clause was not resolved, so the execution engine could encounter invalid data
- The server attempted to access a non-existent temporary directory at startup, causing a failure. Checks were added to ensure that temporary directories exist, and that files are successfully created in the tmpdir directory
- While removing redundant sorting, a window's ordering was removed due to the fact that rows were expected to come in order because of the ordering of another window. When the other window was subsequently removed because it was unused, this resulted in unordered rows, which was not expected during evaluation.
- Now in such cases, removal of redundant sorts is not performed until after any unused windows have been removed. In addition, resolution of any rollups has been moved to the preparation phase
- Semisynchronous replication errors were incorrectly written to the error log with a subsystem tag of Server. They are now written with a tag of Repl, the same as for other replication errors
- A user could grant itself as a role to itself
- The server did not always correctly handle cases in which multiple WHERE conditions, one of which was always FALSE, referred to the same subquery
- With a lower_case_table_names=2 setting, InnoDB background threads sometimes acquired table metadata locks using the wrong character case for the schema name part of a lock key, resulting in unprotected metadata and race conditions. The correct character case is now applied. Changes were also implemented to prevent metadata locks from being released before corresponding data dictionary objects, and to improve assertion code that checks lock protection when acquiring data dictionary objects
- If a CR_UNKNOWN_ERROR was to be sent to a client, an exception occurred
- Conversion of DOUBLE values to values of type BIT, ENUM, or SET could generate Undefined Behavior Sanitizer warnings
- Certain accounts could cause server startup failure if the skip_name_resolve system variable was enabled
- Client programs could unexpectedly exit if communication packets contained bad data
- A buffer overflow in the client library was fixed
- When creating a multi-valued or other functional index, a performance drop was seen when executing a query against the table on which the index was defined, even though the index itself was not actually used. This occurred because the hidden virtual column that backs such indexes was evaluated unnecessarily for each row in the query
- CMake checks for libcurl dependencies were improved
- mysql_config_editor incorrectly treated # in password values as a comment character
- In some cases, the optimizer attempted to compute the hash value for an empty string. Now a fixed value is always used instead
- The INSERT() and RPAD() functions did not correctly set the character set of the result
- Some corner cases for val1 BETWEEN val2 AND val3 were fixed, such as that -1 BETWEEN 9223372036854775808 AND 1 returned true (see the sketch at the end of this list)
- For the Performance Schema memory_summary_global_by_event_name table, the low watermark columns could have negative values, and the high watermark columns had ever-increasing values even when the server memory usage did not increase
- Several issues converting strings to numbers were fixed
- Certain group by queries that performed correctly did not return the expected result when WITH ROLLUP was added. This was due to the fact that decimal information was not always correctly piped through rollup group items, causing functions returning decimal values such as TRUNCATE() to receive data of the wrong type
- When creating fields for materializing temporary tables (that is, when needing to sort a join), the optimizer checks whether the item needs to be copied or is only a constant. This was not done correctly in one specific case: when performing an outer join against a view or derived table containing a constant, the item was not properly materialized into the table, which could yield spurious occurrences of NULL in the result
- When REGEXP_REPLACE() was used in an SQL statement, the internal function Regexp_engine::Replace() did not reset the error code value after handling a record, which could affect processing of the next record and led to issues.
- Our thanks to Hope Lee for the contribution
- For a query having the following form, the column list sometimes assumed an inconsistent state after temporary tables were created, causing out-of-bounds indexing later:
SELECT * FROM (
SELECT PI()
FROM t1 AS table1, t1 AS table2
ORDER BY PI(), table1.a
) AS d1;

- When aggregating data that was already sorted (known as performing streaming aggregation, due to no temporary tables being used), it was not possible to determine when a group ended until processing the first row in the next group, by which time the group expressions to be output were often already overwritten.
- This is fixed by replacing the complex logic previously used with the much simpler method of saving a representative row for the group when encountering it the first time, so that its columns can easily be retrieved for the output row when needed
- Subqueries making use of fulltext matching might not perform properly when subquery_to_derived was enabled, and could lead to an assert in debug builds
- When an ALTER TABLE ... CONVERT TO CHARACTER SET statement is executed, the character set of every CHAR, VARCHAR, and TEXT column in the table is updated to the new CHARACTER SET value. This change was also applied to the hidden CHAR column used by an ARRAY column for a multi-valued index; since the character set of the hidden column must be one of my_charset_utf8mb4_0900_bin or binary, this led to an assert in debug builds of the server.
- This issue is resolved by no longer setting the character set of the hidden column to that of the table when executing the ALTER TABLE statement referenced previously; this is similar to what is done for BLOB columns in similar circumstances
- In some cases, the server's internal string-conversion routines had problems handling floating-point values which used length specifiers and triggered use of scientific notation
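
Two of the fixes above are straightforward to exercise from a client. A brief sketch; the two SET statements are the opt-in semisynchronous tuning variables described earlier in this list, and the SELECT reproduces the corrected BETWEEN corner case:

-- Opt in to the reduced-contention semisynchronous locking behavior:
SET GLOBAL replication_optimize_for_static_plugin_config = ON;
SET GLOBAL replication_sender_observe_commit_only = ON;

-- The corrected BETWEEN corner case:
SELECT -1 BETWEEN 9223372036854775808 AND 1;  -- now returns 0 (false)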


Update:2021-01-18
Info:

What's new in this version:

- A new visual editor for R Markdown documents
- Improved support for Python, including an environment pane for Python and visualization of Python objects
- Workbench productivity improvements, including a command palette and rainbow parentheses
- A more configurable workspace with additional source columns and improved accessibility
- Support for SAML and OpenID authentication, and experimental support for VS Code sessions, in RStudio Server Pro
- Dozens of small improvements and bugfixes

Bug-fixes:
- Fixed issue where debugger contexts were not displayed correctly for byte-compiled functions
- UTF-8 character vectors are now properly displayed within the Environment pane
- Fixed issue where diagnostics system surfaced “Unknown or uninitialized column” warnings in some cases
- Fixed issue where hovering mouse cursor over C++ completion popup would steal focus
- Fixed issue where autocompletion could fail for functions masked by objects in global environments
- Fixed issue where autocompletion could fail to provide argument names for piped-to S3 generics
- Fixed issue where UTF-8 output from Python chunks was mis-encoded on Windows
- Git integration now works properly for project names containing the ‘!’ character
- Fixed issue where loading the Rfast package could lead to session hangs
- Fixed header resizing in Data Viewer
- Fixed resizing last column in Data Viewer
- Fixed inconsistencies in the resizing between a column and its header
- Fixed submission of inconsistently indented Python blocks to reticulate
- Fixed error when redirecting inside Plumber applications in RStudio Server Pro
- Fixed failure to open files after an attempt to open a very large file
- Fixed Data Viewer getting out of sync with the underlying data when changing live viewer object
- Fixed issue where attempts to plot could fail if R tempdir was deleted
- Fixed issue that caused sessions to freeze due to slow I/O for monitor logs
- Added CSRF protection to sign-in pages
- Fixed issue that allowed multiple concurrent sign-in requests
- Fixed issue where the admin logs page could sometimes crash due to a malformed log statement
- Fixed issue where the URL popped out by the Viewer pane was incorrect after navigation
- Fixed issue where clicking the filter UI box would sort a data viewer column
- Fixed issue where Windows shortcuts were not resolved correctly in file dialogs
- Fixed issue where failure to rotate a log file could cause a process crash
- Fixed issue where saving workspace could emit ‘package may not be available when loading’ warning
- Fixed issue where indented Python chunks could not be run
- Fixed disappearing commands and recent files/projects when RStudio Desktop opens new windows
- Fixed issue where active repositories were not propagated to newly-created renv projects
- Fixed issue where .DollarNames methods defined in global environment were not resolved
- Reduced difference in font size and spacing between Terminal and Console
- Fixed issue where path autocompletion in R Markdown documents did not respect Knit Directory preference
- Fixed issue where Job Launcher streams could remain open longer than expected when viewing the job details page
- Fixed issue where rstudioapi::askForPassword() did not mask user input in some cases
- Fixed issue where Job Launcher admin users would have gid=0 in Slurm Launcher Sessions
- Fixed issue where Slurm Job Launcher jobs would not post updated resource utilization without browser refresh
- Fixed issue causing script errors when reloading Shiny applications from the editor toolbar
- Fixed issue where saving a file or project located in a backed up directory (such as with Dropbox or Google Drive) would frequently fail and display an error prompt
- Fixed issue causing C++ diagnostics to fail when Xcode developer tools were active
- Added option for clickable links in Terminal pane
- Fixed issue where R scripts containing non-ASCII characters in their path could not be sourced as a local job on Windows
- Fixed issue where non-ASCII characters in Subversion commit comments were incorrectly encoded on Windows
- Prevent Discard button from being hidden in Subversion diff viewer
- Fixed issue where French (AZERTY) keyboards inserted ‘/’ rather than ‘:’ in some cases
- readline() and readLines() can now be interrupted, even when reading from stdin()
- Fixed issue causing Knit button to show old formats after editing the YAML header
- Fixed issue wherein the Python prompt would continue to be shown after an R restart
- Fixed issue where searches in the console history could inappropriately preserve search position
- Fixed issue where auth-pam-session-use-password would not work when multiple Server nodes are used behind an external load balancer
- Fixed issue where project sharing configured with server-project-sharing-root-dir would fail to share when the path contained mixed ACL support
- Fixed issue where project sharing would fail to share when the path contained mixed NFS ACL support
- Fixed issue where sharing a project on some NFSv4 filesystems could result in damage to owner permissions
- Fixed issue where file permissions were not corrected after uploading a file to a shared project
- Fixed issue where project sharing would not work behind an HTTPS proxy


Update:2020-10-24
Info:

What's new in this version:

- Lock handling for statements involving the grant tables was improved
- Modifying the mysql.infoschema and mysql.sys reserved accounts now requires the SYSTEM_USER privilege
- For the CREATE USER, DROP USER, and RENAME USER account-management statements, the server now performs additional security checks designed to prevent operations that (perhaps inadvertently) cause stored objects to become orphaned or that cause adoption of stored objects that are currently orphaned. Such operations now fail with an error. If you have the SET_USER_ID privilege, it overrides the checks and those operations produce a warning rather than an error; this enables administrators to perform the operations when they are deliberately intended. See Orphan Stored Objects. (A sketch follows.)
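
A minimal sketch of the new orphan check, using the hypothetical names d1, u1, and p1:

CREATE DATABASE d1;
CREATE USER 'u1'@'localhost';
CREATE DEFINER = 'u1'@'localhost' PROCEDURE d1.p1() SELECT 1;
DROP USER 'u1'@'localhost';
-- The DROP USER now fails with an error because d1.p1 would become orphaned;
-- with the SET_USER_ID privilege it succeeds and produces a warning instead.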


Update:2020-04-29
Info:

What's new in this version:

Functionality Added or Changed:
- Important Change: Previously, including any column of a blob type larger than TINYBLOB or BLOB as the payload in an ordering operation caused the server to revert to sorting row IDs only, rather than complete rows; this resulted in a second pass to fetch the rows themselves from disk after the sort was completed. Since JSON and GEOMETRY columns are implemented internally as LONGBLOB, this caused the same behavior with these types of columns even though they are almost always much shorter than the 4GB maximum for LONGBLOB (or even the 16 MB maximum for MEDIUMBLOB). The server now converts columns of these types into packed addons in such cases, just as it does for TINYBLOB and BLOB columns, which in testing showed a significant performance increase. The handling of MEDIUMBLOB and LONGBLOB columns in this regard remains unchanged.
- One effect of this enhancement is that it is now possible for Out of memory errors to occur when trying to sort rows containing very large (multi-megabyte) JSON or GEOMETRY column values if the sort buffers are of insufficient size; this can be compensated for in the usual fashion by increasing the value of the sort_buffer_size system variable. (Bug #30400985, Bug #30804356)
- InnoDB: The Contention-Aware Transaction Scheduling (CATS) algorithm, which prioritizes transactions that are waiting for locks, was improved. Transaction scheduling weight computation is now performed on a separate thread entirely, which improves computation performance and accuracy.
- The First In First Out (FIFO) algorithm, which had also been used for transaction scheduling, was removed. The FIFO algorithm was rendered redundant by CATS algorithm enhancements. Transaction scheduling previously performed by the FIFO algorithm is now performed by the CATS algorithm.
- A TRX_SCHEDULE_WEIGHT column was added to the INFORMATION_SCHEMA.INNODB_TRX table, which permits querying transaction scheduling weights assigned by the CATS algorithm.
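
The new column can be inspected directly. A small sketch, which also shows the sort_buffer_size workaround mentioned in the first item (the size shown is an arbitrary example):

-- Transaction scheduling weights assigned by the CATS algorithm:
SELECT trx_id, trx_state, trx_schedule_weight
FROM INFORMATION_SCHEMA.INNODB_TRX
ORDER BY trx_schedule_weight DESC;

-- Compensate for Out of memory errors when sorting very large JSON or
-- GEOMETRY values (session scope):
SET SESSION sort_buffer_size = 8 * 1024 * 1024;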

Bugs Fixed:
- Performance: Certain queries against tables with spatial indexes were not performed as efficiently following an upgrade from MySQL 5.7 to MySQL 8.0
- References: See also: Bug #89551, Bug #27499984
- NDB Cluster: NDB defines one SPJ worker per node owning a primary partition of the root table. If this table used reads from any replica, DBTC put all SPJ workers in the same DBSPJ instance, which effectively removed the use of some SPJ workers.
- NDB Cluster: Executing the SHOW command using an ndb_mgm client binary from NDB 8.0.16 or earlier to access a management node running NDB 8.0.17 or later produced the error message Unknown field: is_single_user.
- InnoDB: A CREATE UNDO TABLESPACE operation that specified an undo data file name without specifying a path removed an existing undo data file of the same name from the directory specified by the innodb_undo_directory variable. The file name conflict check was performed on the data directory instead of the directory specified by the innodb_undo_directory variable.
- InnoDB: In debug builds, a regression introduced in MySQL 8.0.19 slowed down mutex and rw-lock deadlock debug checks.
- References: This issue is a regression of: Bug #30628872.
- InnoDB: Valgrind testing raised an error indicating that a conditional jump or move depends on an uninitialized value. The error was a false-positive due to invalid validation logic.
- InnoDB: Missing barriers in rw_lock_debug_mutex_enter() (in source file sync0debug.cc) could cause a thread to wait without ever being woken up.
- InnoDB: To improve server initialization speed, fallocate() is now used to allocate space for redo log files.
- InnoDB: A data dictionary table open function was implemented with incorrect lock ordering
- InnoDB: Changes to parallel read threads functionality introduced in MySQL 8.0.17 caused a degradation in SELECT COUNT(*) performance. Pages were read from disk unnecessarily
- InnoDB: DDL logging was not performed for SQL operations executed by the bootstrap thread using the init_file startup variable, causing files to be left behind that should have been removed during a post-DDL stage.
- InnoDB: Adding an index on a column cast as a JSON array on a table with a specific number of records failed with an “Incorrect key file for table” error.
- InnoDB: A Valgrind error reported that an uninitialized lock->writer_thread value was used in a conditional jump.
- InnoDB: An internal buffer pool statistics counter (n_page_gets) was partitioned by page number to avoid contention when accessed by multiple threads.
- InnoDB: A tablespace import operation failed with a schema mismatch error due to the .cfg file and the data dictionary both containing default values for a column that was added using ALGORITHM=INSTANT. An error should only occur if default values differ.
- InnoDB: A slow shutdown failed to flush some GTIDs, requiring recovery of unflushed GTIDs from the undo log.
- InnoDB: A broken alignment requirement in the code that allocates a prefix in memory for Performance Schema memory allocations caused a failure on MySQL builds optimized for macOS and FreeBSD.
- InnoDB: Adding a virtual column raised an assertion failure due to data that was missing from the new data dictionary object created for the table.
- InnoDB: A required latch was not taken when checking the mode of an undo tablespace. A required latch was also not taken when checking whether an undo tablespace is empty
- InnoDB: Allocating an update undo log segment to an XA transaction for persisting a GTID value before the transaction performed any data modifications caused a failure.
- InnoDB: A query executed on a partitioned table with a discarded tablespace raised an assertion failure.
- InnoDB: The row_upd_clust_rec_by_insert function, which marks a clustered index record as deleted and inserts an updated version of the record into the clustered index, passed an incorrect n_ext value (the total number of external fields) to lower level functions, causing an assertion failure.
- InnoDB: During a cloning operation, writes to the data dictionary buffer table at shutdown were too late, causing a failure. Newly generated dirty pages were not being flushed.
- InnoDB: An operation performed with the innodb_buffer_pool_evict debug variable set to uncompressed caused an assertion failure.
- InnoDB: Read-write lock code (rw_lock_t) that controls ordering of access to the boolean recursive flag and the writer thread ID using GCC builtins or os_mutex when the builtins are not available, was revised to use C++ std::atomic in some instances.
- Thanks to Yibo Cai from ARM for the contribution.
- InnoDB: A failure occurred while upgrading from MySQL 5.7 to MySQL 8.0. A server data dictionary object was missing information about the FTS_DOC_ID column and FTS_DOC_ID_INDEX that remain after dropping a FULLTEXT index.
- InnoDB: Unnecessary messages about parallel scans were printed to the error log.
- InnoDB: During upgrade from MySQL 5.7 to MySQL 8.0, clustered indexes named GEN_CLUST_INDEX are renamed to PRIMARY, which resulted in duplicate entries for the clustered indexes being added to the mysql.innodb_index_stats table.
- InnoDB: Various internal functions computed write event slots in an inconsistent manner.
- InnoDB: Under specific circumstances, it was possible that tablespace encryption key information would not be applied during the redo log apply phase of crash recovery.
- InnoDB: A file operation failure caused the page tracking archiver to fail, which in turn caused the main thread to hang, resulting in an assertion failure. Also, incorrectly, the page tracking archiver remained enabled in innodb_read_only mode.
- InnoDB: An index corruption error was reported when attempting to import a tablespace containing a table column that was added using ALGORITHM=INSTANT. The error was due to missing metadata associated with the instantly added column.
- InnoDB: A transaction attempting to fetch an LOB record encountered a null LOB reference, causing an assertion failure. However, the null LOB reference was valid in this particular scenario because the LOB value was not yet fully written.
- InnoDB: During a parallel read operation, the rollback of a table load operation while autocommit was disabled resulted in a server exit due to assertion code that did not account for the possibility of tree structure changes during a parallel read.
- InnoDB: The current size value maintained in a rollback segment memory object was found to be invalid, causing an assertion failure in function trx_purge_free_segment(). A validation routine (trx_rseg_t::validateCurrSize()) was added to verify the current size value.
- InnoDB: A prepared statement executed with invalid parameter values raised an assertion failure.
- InnoDB: An add column operation caused an assertion failure. The failure was due to a dangling pointer.
- References: This issue is a regression of: Bug #28491099.
- InnoDB: Updating certain InnoDB system variables that take string values raised invalid read errors during Valgrind testing.
- InnoDB: Redo log records for modifications to undo tablespaces increased in size in MySQL 8.0 due to a change in undo tablespace ID values, which required additional bytes. The change in redo log record size caused a performance regression in workloads with heavy write I/O. To address this issue, the redo log format was modified to reduce redo log record size for modifications to undo tablespaces.
- InnoDB: Additional information about InnoDB file writes, including progress data, is now printed to the error log.
- InnoDB: An insert statement on a table with a spatial index raised a record type mismatch assertion due to a tuple corruption.
- InnoDB: A function that calculates undo log record size could calculate an incorrect length value in the case of a corrupted undo log record, resulting in a malloc failure. Assertion code was added to detect incorrect calculations.
- Replication: The thread used by Group Replication's message service was not correctly registered by the Performance Schema instrumentation, so the thread actions were not visible in Performance Schema tables.
- Replication: Group Replication initiates and manages cloning operations for distributed recovery, but group members that have been set up to support cloning may also participate in cloning operations that a user initiates manually. In releases before MySQL 8.0.20, you could not initiate a cloning operation manually if the operation involved a group member on which Group Replication was running. From MySQL 8.0.20, you can do this, provided that the cloning operation does not remove and replace the data on the recipient. The statement to initiate the cloning operation must therefore include the DATA DIRECTORY clause if Group Replication is running.
- Replication: For Group Replication channels, issuing the CHANGE MASTER TO statement with the PRIVILEGE_CHECKS_USER option while Group Replication was running caused the channel's relay log files to be deleted. Transactions that had been received and queued in the relay log, but not yet applied, could be lost in this situation. The CHANGE MASTER TO statement can now only be issued when Group Replication is not running.
- Replication: Group Replication's failure detection mechanism raises a suspicion if a server stops sending messages, and the member is eventually expelled provided that a majority of the group members are still communicating. However, the failure detection mechanism did not take into account the situation where one or more of the group members in the majority had actually already been marked for expulsion, but had not yet been removed from the group. Where the network was unstable and members frequently lost and regained connection to each other in different combinations, it was possible for a group to end up marking all its members for expulsion, after which the group would cease to exist and have to be set up again.
- Group Replication's Group Communication System (GCS) now tracks the group members that have been marked for expulsion, and treats them as if they were in the group of suspect members when deciding if there is a majority. This ensures at least one member remains in the group and the group can continue to exist. When an expelled member has actually been removed from the group, GCS removes its record of having marked the member for expulsion, so that the member can rejoin the group if it is able to. (Bug #30640544)
- Replication: While an SQL statement was in the process of being rewritten for the binary log so that sensitive information did not appear in plain text, if a SHOW PROCESSLIST statement was used to inspect the query, the query could become corrupted when it was written to the binary log, causing replication to stop. The process of rewriting the query is now kept private, and the query thread is updated only when rewriting is complete.
- Replication: When a GRANT or REVOKE statement is only partially executed, an incident event is logged in the binary log, which makes the replication slave's applier thread stop so that the slave can be reconciled manually with the master. Previously, if a failed GRANT or REVOKE statement was the first statement executed in the session, no GTID was applied to the incident event (because the cache manager did not yet exist for the session), causing an error on the replication slave. Also, no incident event was logged in the situation where a GRANT statement created a user but then failed because the privileges had been specified incorrectly, again causing an error on the replication slave. Both these issues have now been fixed.
- Replication: Compression is now triggered for the mysql.gtid_executed table when the thread/sql/compress_gtid_table thread is launched after the server start, and the effects are visible when the compression process is complete.
- Replication: Performance Schema tables could not be accessed on a MySQL server with Group Replication that was running under high load conditions.
- Replication: Internal queries from Group Replication to the Performance Schema for statistics on local group members failed if they occurred simultaneously with changes to the group's membership. Locking for the internal queries has been improved to fix the issue.
- Replication: In the event of an unplanned disconnection of a replication slave from the master, the reference to the master's dump thread might not be removed from the list of registered slaves, in which case statements that accessed the list of slaves would fail. The issue has now been fixed. (Bug #29915479)
- Replication: When a partitioned table was involved, the server did not correctly handle the situation where a row event could not be written to the binary log due to a lack of cache space. An appropriate error is now returned in this situation.
- Replication: During Group Replication's distributed recovery process, if a joining member is unable to complete a remote cloning operation with any donor from the group, it uses state transfer from a donor's binary log to retrieve all of the required data. However, if the last attempted remote cloning operation was interrupted and left the joining member with incomplete or no data, an attempt at state transfer immediately afterwards could also fail. Before attempting state transfer following a failed remote cloning operation, Group Replication now checks that the remote cloning operation did not reach the stage of removing local data from the joining member. If data was removed, the joining member leaves the group and takes the action specified by the group_replication_exit_state_action system variable.
- Replication: With the settings binlog_format=MIXED, tx_isolation=READ-COMMITTED, and binlog_row_image=FULL, an INSERT ... SELECT query involving a transactional storage engine omitted any columns with a null value from the row image written to the binary log. This happened because when processing INSERT ... SELECT statements, the columns were marked for inserts before the binary logging format was selected. The issue has now been fixed.
- Replication: Before taking certain actions, Group Replication checks what transactions are running on the server. Previously, the service used for this check did not count transactions that were in the commit phase, which could result in the action timing out. Now, transactions that are in the commit phase are included in the set of currently ongoing transactions.
- JSON: When JSON_TABLE() was used as part of an INSERT statement in strict mode, conversion errors handled by any ON ERROR clause could cause the INSERT to be rejected. Since errors are handled by an ON ERROR clause, the statement should not be rejected unless ERROR ON ERROR is actually specified.
- This issue is fixed by ignoring warnings when converting values to the target type if NULL ON ERROR or DEFAULT ... ON ERROR has been specified or is implied.
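
A hedged sketch of the corrected behavior, using a hypothetical table t2; the string value deliberately fails conversion to INT so that the NULL ON ERROR clause applies:

CREATE TABLE t2 (a INT);
SET SESSION sql_mode = 'STRICT_TRANS_TABLES';
INSERT INTO t2
SELECT jt.a
FROM JSON_TABLE('[{"a": "not-a-number"}]', '$[*]'
                COLUMNS (a INT PATH '$.a' NULL ON ERROR)) AS jt;
-- The conversion error is handled by NULL ON ERROR, so a NULL row is inserted
-- and the INSERT is no longer rejected in strict mode.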