Bugs Fixed in GemFire 6.6.4

Last updated: 12/06/2012

Ticket Created Title Description Workaround for earlier versions
#43466 05/25/11 Transaction commit may hang In a very busy system, transaction commit may starve when there are multiple threads trying to begin transactions on regions with expiration. This has been fixed.
#44158 11/11/11 Redundancy zones for HA are random in presence of Teredo link-local address fe80:0:0:0:0:100:7f:fffe on Windows The presence of non-unique, link-local IPv6 addresses may cause GemFire to conclude that two different hosts are actually the same machine. One such case is Windows machines that use Teredo, which may assign the link-local address fe80:0:0:0:0:100:7f:fffe on every machine. If GemFire decides that two hosts are actually the same machine, it may refuse to create redundant copies of buckets on those hosts if enforce-unique-hosts is set to true. With enforce-unique-hosts set to false, GemFire may be unable to distinguish between two JVMs that actually are on the same host and two JVMs that are on different hosts, and therefore may end up placing two redundant copies of a bucket on the same host. This is fixed in GemFire 7.0 and 6.6.4; upgrade to one of those versions. The alternative is to disable the Teredo service on every Windows machine hosting GemFire servers using this command: "netsh interface teredo set state disabled"
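Once Teredo is disabled and each host again presents a unique address, bucket placement can safely rely on host detection. A minimal gemfire.properties sketch (the property spelling below is taken from this item's description; verify it against your release's documentation):

```properties
# gemfire.properties (sketch): refuse to place redundant bucket copies
# on what GemFire identifies as the same physical host.
enforce-unique-hosts=true
```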
#44727 03/22/12 Deadlock closing the cache while recovering a persistent partitioned region In rare cases, calling cache.close, or using shutdown all, can cause members to hang during shutdown if the members were in the process of recovering a persistent partitioned region from disk.
#45077 05/24/12 Spurious DistributedSystemDisconnectedException with cause IllegalStateException A cache operation may, in rare circumstances, throw a DistributedSystemDisconnectedException when the system is not disconnected. This may occur when a peer process has disconnected or crashed. The cause of the exception will be an IllegalStateException in this form:
{{{
Caused by: java.lang.IllegalStateException: Task already scheduled or cancelled
    at java.util.Timer.sched(Timer.java:401)
    at java.util.Timer.scheduleAtFixedRate(Timer.java:328)
    at com.gemstone.gemfire.internal.SystemTimer.scheduleAtFixedRate(SystemTimer.java:386)
    at com.gemstone.gemfire.internal.tcp.ConnectionTable.scheduleIdleTimeout(ConnectionTable.java:555)
    ... 31 more
}}}
#45142 06/08/12 SystemConnectExceptions thrown by new servers after shutting down locator It is possible for members to become confused and not elect a new membership coordinator when the old one is shut down. This results in new server-side processes throwing a SystemConnectException when attempting to start up.
#45151 06/11/12 When thread-local connections are configured with single hop, connections are lost and performance is degraded at the end of the load-conditioning interval The performance drop has been fixed.
#45343 07/05/12 Updates not distributed to WAN sites When the WAN concurrency level is greater than one, some updates from putAll operations may not be distributed to remote sites.
#45368 07/09/12 Hang using fixed partitioning with disk persistence In rare cases, a persistent partitioned region using fixed partitioning and colocated regions can hang during primary bucket failover.
#45842 08/21/12 Indexes don't work with REPLICATE_PERSISTENT_OVERFLOW In GemFire 6.6.x, an index defined in cache.xml on a persistent-overflow region isn't created when the cache is stopped and restarted using the persistence file.
#45931 08/23/12 Offline compaction loses instantiators if no classpath is specified If the classpath for the customer's instantiators is not specified, offline compaction (including the conversion) will lose the instantiators. This bug is inherited from 6.5.
#46004 08/30/12 NullPointerException while querying partitioned regions with numThreads system property When a query was executed on a partitioned region with the "gemfire.PRQueryProcessor.numThreads" system property set, the query threw a NullPointerException, which appeared in the log messages. This has been fixed.
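As a sketch of how the system property in this item is typically supplied (the value 4 and the class name are illustrative assumptions; the property must be in place before the query engine initializes):

```java
// Sketch: configure the number of threads partitioned-region query
// processing may use. The property name comes from the item above;
// the value 4 is only an example.
public class PrQueryThreadsExample {
    public static void main(String[] args) {
        // Equivalent to passing -Dgemfire.PRQueryProcessor.numThreads=4
        // on the server's command line; set it before the cache and
        // query service are created.
        System.setProperty("gemfire.PRQueryProcessor.numThreads", "4");
        System.out.println(System.getProperty("gemfire.PRQueryProcessor.numThreads"));
    }
}
```

Setting the flag on the launch command line rather than in code avoids any risk of the query engine reading the property before it is set.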
#46216 09/14/12 7.0 servers are not compatible with 6.6.3 clients If you use a 7.0 server with 6.6.3 clients, you may see warnings in your server log about a NumberFormatException, and the call stack will show that it came from InternalDistributedMember.readEssentialData. Clients older than 6.6.3 will work with 7.0, and a fix for 6.6.3 clients is available in 6.6.4.
#46351 09/25/12 After network failure, locator shuts down but other members do not If there is a network failure such that a locator and one or more other processes become separated from the distributed system and these processes do not have a quorum, it is possible that the locator will shut down but the other members will not.
#46355 09/25/12 Using an instantiator might hang when registering or deserializing Deadlocks could occur while registering an instantiator or deserializing with one. This problem is inherited from 6.5.
#46456 10/02/12 Processes are slow to shut down after network failure A process that becomes isolated with one or more other processes due to a network failure may be slow to shut down or not shut down at all. This has been observed when one of the isolated processes was a locator. The locator shut down normally, but other processes either did not shut down at all or did not do so until the network was fixed.
#46671 10/25/12 Fixed Partitioning quickstart example fails In the Fixed Partitioning quickstart example, quickstart/xml/FixedPartitionPeer1.xml and quickstart/xml/FixedPartitionPeer2.xml reference cache6_5.dtd when they should reference cache6_6.dtd. In both files, change the DTD reference to cache6_6.dtd.
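The corrected declaration at the top of each quickstart file would look like the following sketch (the public and system identifiers shown follow the usual GemFire cache.xml DOCTYPE convention; verify them against another 6.6 quickstart file):

```xml
<!-- In FixedPartitionPeer1.xml and FixedPartitionPeer2.xml, replace the
     cache6_5.dtd reference with the 6.6 DTD: -->
<!DOCTYPE cache PUBLIC
    "-//GemStone Systems, Inc.//GemFire Declarative Caching 6.6//EN"
    "http://www.gemstone.com/dtd/cache6_6.dtd">
```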
#46768 11/07/12 Index update during node recovery Concurrent index updates during node recovery were failing for compact indexes on regions configured with the overflow-to-disk eviction strategy. The affected code was not thread-safe; this has been fixed.