
List:       mysql
Subject:    MySQL Cluster 7.3.10 has been released
From:       Lars Tangvald <lars.tangvald@oracle.com>
Date:       2015-07-14 13:13:42
Message-ID: 55A50B06.7010401@oracle.com


Dear MySQL Users,

MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:

   - In-Memory storage - Real-time performance (with optional
     checkpointing to disk)
   - Transparent Auto-Sharding - Read & write scalability
   - Active-Active/Multi-Master geographic replication
   - 99.999% High Availability with no single point of failure
     and on-line maintenance
   - NoSQL and SQL APIs (including C++, Java, HTTP, Memcached
     and JavaScript/Node.js)

MySQL Cluster 7.3.10 has been released and can be downloaded from

http://www.mysql.com/downloads/cluster/

where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.

The release notes are available from

http://dev.mysql.com/doc/relnotes/mysql-cluster/7.3/en/index.html

MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.

More details can be found at

http://www.mysql.com/products/cluster/

Enjoy!

Changes in MySQL Cluster NDB 7.3.10 (5.6.25-ndb-7.3.10) (2015-07-13)

    MySQL Cluster NDB 7.3.10 is a new release of MySQL Cluster,
    based on MySQL Server 5.6 and including features from version
    7.3 of the NDB storage engine, as well as fixing a number of
    recently discovered bugs in previous MySQL Cluster releases.

    Obtaining MySQL Cluster NDB 7.3.  MySQL Cluster NDB 7.3
    source code and binaries can be obtained from
    http://dev.mysql.com/downloads/cluster/.

    For an overview of changes made in MySQL Cluster NDB 7.3, see
    MySQL Cluster Development in MySQL Cluster NDB 7.3
    (http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-development-5-6-ndb-7-3.html).

    This release also incorporates all bugfixes and changes made
    in previous MySQL Cluster releases, as well as all bugfixes
    and feature changes which were added in mainline MySQL 5.6
    through MySQL 5.6.25 (see Changes in MySQL 5.6.25 (2015-05-29)
    (http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-25.html)).

    Functionality Added or Changed

      * ClusterJ: Under high workload, it was possible to overload
        the direct memory used to back domain objects, because
        direct memory is not garbage collected in the same manner
        as objects allocated on the heap. Two strategies have been
        added to the ClusterJ implementation: first, direct memory
        is now pooled, so that when the domain object is garbage
        collected, the direct memory can be reused by another
        domain object. Additionally, a new user-level method,
        release(instance), has been added to the Session interface,
        which allows users to release the direct memory before the
        corresponding domain object is garbage collected. See the
        description for release(instance)
        (http://dev.mysql.com/doc/ndbapi/en/mccj-clusterj-session.html#mccj-clusterj-session-release-t)
        for more information. (Bug #20504741)
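
        A minimal usage sketch of the new method follows. The domain
        interface, table name, and properties file used here are
        hypothetical placeholders; only Session.release(instance)
        itself comes from this release.

        import java.io.FileInputStream;
        import java.util.Properties;

        import com.mysql.clusterj.ClusterJHelper;
        import com.mysql.clusterj.Session;
        import com.mysql.clusterj.SessionFactory;
        import com.mysql.clusterj.annotation.PersistenceCapable;
        import com.mysql.clusterj.annotation.PrimaryKey;

        public class ReleaseExample {
            // Hypothetical persistence-capable interface mapped to an NDB table.
            @PersistenceCapable(table = "my_table")
            public interface MyRow {
                @PrimaryKey
                int getId();
                void setId(int id);
                String getName();
                void setName(String name);
            }

            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                try (FileInputStream in = new FileInputStream("clusterj.properties")) {
                    props.load(in);
                }

                SessionFactory factory = ClusterJHelper.getSessionFactory(props);
                Session session = factory.getSession();

                // Fetch a row by primary key; the instance is backed by direct memory.
                MyRow row = session.find(MyRow.class, 42);

                // ... work with the instance ...

                // Return the instance's direct memory to the pool right away,
                // instead of waiting for the instance to be garbage collected.
                session.release(row);

                session.close();
                factory.close();
            }
        }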

    Bugs Fixed

      * Important Change; Cluster API: Added the method
        Ndb::isExpectingHigherQueuedEpochs() to the NDB API for
        detecting whether pollEvents2() has found additional,
        newer event epochs.
        The behavior of Ndb::pollEvents() has also been modified
        such that it now returns NDB_FAILURE_GCI (equal to
        ~(Uint64) 0) when a cluster failure has been detected.
        (Bug #18753887)

      * After restoring the database metadata (but not any data)
        by running ndb_restore --restore_meta (or -m), SQL nodes
        would hang while trying to SELECT from a table in the
        database to which the metadata was restored. In such
        cases the attempt to query the table now fails as
        expected, since the table does not actually exist until
        ndb_restore is executed with --restore_data (-r). (Bug
        #21184102)
        References: See also Bug #16890703.

      * When a great many threads opened and closed blocks in the
        NDB API in rapid succession, the internal close_clnt()
        function synchronizing the closing of the blocks waited
        an insufficiently long time for a self-signal indicating
        potential additional signals needing to be processed.
        This led to excessive CPU usage by ndb_mgmd, and
        prevented other threads from opening or closing other
        blocks. This issue is fixed by changing the function's
        polling call so that it waits on a specific condition to
        be woken up (that is, until a signal has actually been
        executed).
        (Bug #21141495)

      * Previously, multiple send threads could be invoked for
        handling sends to the same node; these threads then
        competed for the same send lock. While the send lock
        blocked the additional send threads, work threads could
        be passed to other nodes.
        This issue is fixed by ensuring that new send threads are
        not activated while there is already an active send
        thread assigned to the same node. In addition, a node
        already having an active send thread assigned to it is no
        longer visible to other, already active, send threads;
        that is, such a node is no longer added to the node list
        when a send thread is currently assigned to it. (Bug
        #20954804, Bug #76821)

      * Queueing of pending operations when the redo log was
        overloaded (DefaultOperationRedoProblemAction API node
        configuration parameter) could lead to timeouts when data
        nodes ran out of redo log space (P_TAIL_PROBLEM errors).
        Now when the redo log is full, the node aborts requests
        instead of queuing them. (Bug #20782580)
        References: See also Bug #20481140.

      * NDB statistics queries could be delayed by the error
        delay set for ndb_index_stat_option (default 60 seconds)
        when the index that was queried had been marked with
        internal error. The same underlying issue could also
        cause ANALYZE TABLE to hang when executed against an NDB
        table having multiple indexes where an internal error
        occurred on one or more but not all indexes.
        Now in such cases, any existing statistics are returned
        immediately, without waiting for any additional statistics
        to be discovered. (Bug #20553313, Bug #20707694, Bug
        #76325)

      * The multi-threaded scheduler sends to remote nodes either
        directly from each worker thread or from dedicated send
        threads, depending on the cluster's configuration. This
        send might transmit all, part, or none of the available
        data from the send buffers. While there remained pending
        send data, the worker or send threads continued trying to
        send in a loop. The actual size of the data sent in the
        most recent attempt to perform a send is now tracked, and
        used to detect lack of send progress by the send or
        worker threads. When no progress has been made, and there
        is no other work outstanding, the scheduler takes a 1
        millisecond pause to free up the CPU for use by other
        threads. (Bug #18390321)
        References: See also Bug #20929176, Bug #20954804.

      * In some cases, the DBDICT block failed to handle repeated
        GET_TABINFOREQ signals after the first one, leading to
        possible node failures and restarts. This could be
        observed after setting a sufficiently high value for
        MaxNoOfExecutionThreads and low value for
        LcpScanProgressTimeout. (Bug #77433, Bug #21297221)

      * Client lookup for delivery of API signals to the correct
        client by the internal
        TransporterFacade::deliver_signal() function had no mutex
        protection, which could cause issues such as timeouts
        encountered during testing, when other clients connected
        to the same TransporterFacade. (Bug #77225, Bug
        #21185585)

      * It was possible to end up with a lock on the send buffer
        mutex when send buffers became a limiting resource, due
        either to insufficient send buffer resource
        configuration, problems with slow or failing
        communications such that all send buffers became
        exhausted, or slow receivers failing to consume what was
        sent. In this situation worker threads failed to allocate
        send buffer memory for signals, and attempted to force a
        send in order to free up space, while at the same time
        the send thread was busy trying to send to the same node
        or nodes. All of these threads competed for taking the
        send buffer mutex, which resulted in the lock already
        described, reported by the watchdog as Stuck in Send.
        This fix is made in two parts, listed here:

          1. The send thread no longer holds the global send
             thread mutex while getting the send buffer mutex; it
             now releases the global mutex prior to locking the
             send buffer mutex. This keeps worker threads from
             getting stuck in send in such cases.

          2. Locking of the send buffer mutex done by the send
             threads now uses a try-lock. If the try-lock fails,
             the node to which the send was to be made is
             reinserted at the end of the list of send nodes so
             that it can be retried later. This removes the Stuck
             in Send condition for the send threads.
        (Bug #77081, Bug #21109605)
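
        The following generic sketch illustrates the try-lock
        strategy described in point 2 above; it is an illustration
        only, not the actual NDB kernel code. A node whose send
        buffer lock cannot be taken immediately is requeued at the
        tail of the pending list and retried later, so the send
        thread never blocks on the mutex.

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.concurrent.locks.ReentrantLock;

        class SendNode {
            final int nodeId;
            final ReentrantLock sendBufferLock = new ReentrantLock();
            SendNode(int nodeId) { this.nodeId = nodeId; }
            void flushSendBuffer() { /* transmit this node's pending data */ }
        }

        class SendThreadSketch {
            private final Deque<SendNode> pending = new ArrayDeque<>();

            void add(SendNode node) { pending.addLast(node); }

            void processPending() {
                // Visit each queued node at most once per pass.
                int remaining = pending.size();
                while (remaining-- > 0) {
                    SendNode node = pending.pollFirst();
                    if (node == null) break;
                    if (node.sendBufferLock.tryLock()) {  // never block on the mutex
                        try {
                            node.flushSendBuffer();
                        } finally {
                            node.sendBufferLock.unlock();
                        }
                    } else {
                        pending.addLast(node);            // retry this node later
                    }
                }
            }
        }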

      * Cluster API: Creation and destruction of
        Ndb_cluster_connection objects by multiple threads could
        make use of the same application lock, which in some
        cases led to failures in the global dictionary cache. To
        alleviate this problem, the creation and destruction of
        several internal NDB API objects have been serialized.
        (Bug #20636124)

      * Cluster API: A number of timeouts were not handled
        correctly in the NDB API.
        (Bug #20617891)

      * Cluster API: When an Ndb object created prior to a
        failure of the cluster was reused, the event queue of
        this object could still contain data node events
        originating from before the failure. These events could
        reference "old" epochs (from before the failure
        occurred), which in turn could violate the assumption
        made by the nextEvent() method that epoch numbers always
        increase. This issue is addressed by explicitly clearing
        the event queue in such cases. (Bug #18411034)

      * ClusterJ: When used with Java 1.7 or higher, ClusterJ could
        cause the Java VM to crash when querying tables with BLOB
        columns, because NdbDictionary::createRecord calculated the
        wrong size needed for the record. Subsequently, when ClusterJ
        called NdbScanOperation::nextRecordCopyOut, the data
        overran the allocated buffer space. With this fix, ClusterJ
        checks the size calculated by NdbDictionary::createRecord
        and uses that value for the buffer size if it is larger
        than the value ClusterJ itself calculates. (Bug #20695155)
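
        For illustration only, a ClusterJ domain interface mapping
        a table with a BLOB column, the kind of table affected by
        this fix, might look as follows; the table, column, and
        interface names here are hypothetical.

        import com.mysql.clusterj.annotation.Column;
        import com.mysql.clusterj.annotation.PersistenceCapable;
        import com.mysql.clusterj.annotation.PrimaryKey;

        @PersistenceCapable(table = "documents")      // hypothetical table
        public interface Document {
            @PrimaryKey
            int getId();
            void setId(int id);

            @Column(name = "payload")                 // BLOB column, read as byte[]
            byte[] getPayload();
            void setPayload(byte[] payload);
        }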

On behalf of Oracle/MySQL RE Team
Lars Tangvald

