Friday, January 29, 2021

Integrating Apache Geode with the Rapid cluster membership system

In 2020 I set a goal of integrating an alternative Membership Service with Apache Geode.  A team of engineers (including myself) broke that service out into a separate module (link) in 2019 and I wanted to see if it was possible to use a different implementation.


All distributed systems are built on a Membership Service: Geode, Cassandra, Hazelcast, Coherence and others all depend on one to know which processes (nodes) are part of the cluster and to detect when there are failures.  There needs to be a way to introduce new nodes into the cluster and a way to remove current nodes when the need arises.


Apache Geode's Membership Service (which I'll call GMS) grew out of a heavily modified fork of the JGroups project.  Though it's been rewritten to not depend on JGroups, Geode still uses the same concept of a membership coordinator (usually the oldest node in the cluster) that accepts join/remove requests and decides whether a node should be kicked out of the cluster.  It also has the other components you'd expect in a Membership Service like a message-sender/receiver and a failure detector.


Halfway through the year I read a paper on the Rapid membership service claiming it could spin up a large cluster in record time and detect failures with a high probability of success.  There was also a Java implementation of the service (as well as a Go implementation, which is interesting from a Kubernetes perspective), which made it a nice fit for an integration project.  Perfect!  So I cloned the Rapid repo and started integrating it with Geode.


As an initial goal I wanted to demonstrate that the integration would at least pass a non-trivial integration test and an even more complicated distributed unit test, which orchestrates a cluster of JVMs to test distributed algorithms.  I also wanted to try out a regression test that kills off processes and asserts that the membership service detects and resolves the node failures.


Another goal was to see what dependencies Geode has on its membership implementation that make it difficult or impossible to use a different implementation. I've highlighted these findings in the sections below.


The modularization effort that I mentioned created a nice API for Geode's membership service but hardcoded the GMS implementation into its Builder.  I didn't bother with creating a pluggable SPI during the integration; instead I modified the Builder to create a Membership based on Rapid.  The Rapid project includes a sample Netty-based messaging component and a ping-pong based failure detector that are similar enough to what Geode has that I decided to use them in the integration effort.
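To make that concrete, here is a hedged sketch of the Builder change.  The RapidMembership class name and the constructor arguments follow my branch only loosely and are illustrative rather than exact:

// Illustrative sketch only: the Builder's build step returns a Rapid-backed
// Membership instead of the default GMS-based one.
public Membership<ID> create() {
  // previously something like: return new GMSMembership<>(...)
  return new RapidMembership<>(membershipListener, messageListener,
      membershipConfig, serializer, lifecycleListener);
}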


Node Identifiers


In a distributed system nodes have identifiers that let you know how to contact them.  Rapid uses UUIDs and these map to physical addresses that contain the Netty host and port.  Other systems like JGroups use a similar approach, but not Geode.  It has an overgrown membership identifier that includes a lot of information about each node.  That was the first problem I ran into.  How could I disseminate the metadata about each node to other members of the cluster?  That metadata included things like the address used for non-membership communications, the type of node (locator, server) and the name assigned to it.  Fortunately Rapid's Cluster implementation allows you to provide this data when joining the cluster and makes it available to other nodes in its callback API.  I merely serialized the Geode identifier and set that as the metadata for the process joining the cluster.  In the callbacks I could then deserialize the ID and pass that up the line.


--> This points out that Geode requires a way to transmit metadata about a node in membership views and node identifiers.  The metadata is static and is known when joining the cluster.


// Serialize Geode's member identifier and register it as Rapid node metadata
Map<String, ByteString> metadata = new HashMap<>();
metadata.put(GEODE_ID, objectToByteString(localAddress, serializer));

clusterBuilder = new Cluster.Builder(listenAddress)
    .setMessagingClientAndServer(messenger, messenger)
    .setMetadata(metadata);

// Subscribe to Rapid's membership events
clusterBuilder.addSubscription(com.vrg.rapid.ClusterEvents.VIEW_CHANGE_PROPOSAL,
    this::onViewChangeProposal);
clusterBuilder.addSubscription(com.vrg.rapid.ClusterEvents.VIEW_CHANGE,
    this::onViewChange);
clusterBuilder.addSubscription(com.vrg.rapid.ClusterEvents.KICKED,
    this::onKicked);
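For completeness, here is a minimal sketch of the serialization helper assumed in the snippet above, plus the inverse used when a callback delivers another node's metadata.  The Serializer type and the helper names are stand-ins for Geode's own serialization service, so treat this as illustrative rather than the exact code in the branch:

// Sketch only: convert a Geode member identifier to and from the ByteString
// that Rapid carries as node metadata.
static ByteString objectToByteString(Object geodeId, Serializer serializer) throws IOException {
  ByteArrayOutputStream bytes = new ByteArrayOutputStream();
  serializer.writeObject(geodeId, new DataOutputStream(bytes));   // hypothetical serializer call
  return ByteString.copyFrom(bytes.toByteArray());
}

static MemberIdentifier byteStringToGeodeId(ByteString bytes, Serializer serializer)
    throws IOException, ClassNotFoundException {
  // turn the metadata entry back into the rich Geode identifier
  return (MemberIdentifier) serializer.readObject(
      new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
}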

 

Joining the cluster


Next I needed to reconcile the discovery mechanisms of Rapid and Geode.


Like many other systems, Rapid relies on Seed Nodes that new processes can contact to request to join the cluster.  Any process that's already in the cluster can act as a Seed Node; you just need to know its host and port.


Geode's membership service, on the other hand, doesn't use Seed Nodes. Instead it uses a Locator service to find the node that is currently the membership coordinator and uses that node to join the cluster.  All of the Geode unit tests that create a cluster expect this behavior, so if I wanted to use those tests as-is I couldn't use Seed Nodes.


The Locator service is a simple request/reply server that keeps track of who is in the cluster and gives new processes the address of the membership coordinator.  I decided to write an alternative to Geode's GMSLocator class that would allow multiple Rapid nodes to start up concurrently.


A new process contacts the new RapidSeedLocator and asks for the address of a seed node.  Normally one is available and the new process joins the Rapid cluster using that address.


// Ask the locator(s) for an existing member to use as the Rapid seed node
seedAddress = findSeedAddressFromLocators();
if (!seedAddress.equals(listenAddress)) {
  cluster = clusterBuilder.join(seedAddress);
}


If there are no nodes in the cluster the new process loops a few times to see if one will show up. If none does it starts a new cluster. It's a little more complicated than that because there may be multiple processes starting up at the same time and they need to work out which one will bootstrap the cluster.
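To make the tie-breaking concrete, here's one way the "best choice" used in the next snippet could be computed: every concurrently-starting process registers with the locator and each one deterministically picks the same winner.  The selection rule shown here is an assumption for illustration, not necessarily what the branch does.

// Illustrative tie-breaker: all registrants sort the candidate addresses the
// same way, so they all agree on which process bootstraps the cluster.
static InetSocketAddress pickLeadRegistrant(Collection<InetSocketAddress> registrants) {
  return registrants.stream()
      .min(Comparator.comparing(InetSocketAddress::toString))
      .orElseThrow(() -> new IllegalStateException("no registrants"));
}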


if (bestChoice.equals(localAddress)) {
  logger.info("I am the lead registrant - starting new cluster");
  cluster = clusterBuilder.start();
} else {
  logger.info("Joining with lead registrant");
  cluster = clusterBuilder.join(seedAddress);
}


This Locator behavior came from the JGroups project and has been used in Geode since its inception.  Any seed-node-based system could be fit into the same pattern.


Other than that, all I needed to do was implement leaving the cluster, the callbacks announcing new/left/crashed nodes and a callback that handles being kicked out of the cluster.  That can happen if a node becomes unresponsive and the other nodes (observers) declare it dead.


--> Geode uses a Locator service to find the node that's the membership coordinator.

--> Geode needs to handle concurrent startup of the initial nodes of a cluster.


Integration Testing


There was an existing unit test for Geode's Membership class that I used to shake out problems with the integration, MembershipIntegrationTest.  The simplest test in that class just boots up a cluster of one node and verifies that the cluster started okay.  It passed, so I went on to the next test, one that boots up a two-node cluster.  That one failed because the view identifiers in Rapid are not monotonically increasing integers as they are in Geode's GMS; instead they are a hash of the node IDs and endpoints in the membership view.  Geode will refuse to accept a membership view with a view ID lower than the current view's and, since it's just a hash, Rapid's identifiers jump all over the place.
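A simplified illustration of the check that trips this up (just the shape of it, not Geode's actual code):

// Simplified: with monotonically increasing ids this only rejects stale views,
// but with Rapid's hash-based configuration ids it also rejects legitimate new views.
boolean shouldInstallView(long currentViewId, long newViewId) {
  return newViewId > currentViewId;
}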


The only way to get around that problem was to comment out all of the checks in Geode that pay attention to view identifiers.  After doing that the 2 node test passed.


Other tests in MembershipIntegrationTest were variants of that second test and passed.  That was encouraging.


--> Geode expects monotonically increasing membership view identifiers.


Distributed Unit Testing


I moved on to a multi-JVM test using a Cache, DistributedAckRegionDUnitTest.  For this kind of test a Locator and four satellite JVMs are started by the test framework and then all of the test methods are executed in the unit test JVM.  The unit tests assign tasks to the satellite JVMs via RMI.  For instance, a test might assign tasks to create a Cache, populate it and then verify that all of the nodes have the correct data.  At the end of each test method the framework cleans up any Caches created in the JVMs to make them ready for the next test method.


First I focused on getting one test to pass.  The test first assigned two of the satellite JVMs tasks to create a Cache and then had them create incompatible cache Regions.  This test hung during startup because, doh!, the nodes had membership views with identifiers in a different order.  One had [nodeA, nodeB, Locator] and the other had [nodeB, nodeA, Locator].  The problem with this is that Geode's distributed lock service depends on a stable ordering of the identifiers and uses the left-most node as the Elder in its distributed algorithms.  The two nodes disagreed on which node should be the Elder and that screwed up the algorithms, causing the Caches to hang.


In order for Geode to use a membership service that doesn't provide a stable ordering of the node identifiers some other mechanism would be needed to determine the Elder for the distributed lock service.  For my project I didn't want to get into that so I stored the millisecond and nanosecond clock values in the metadata when initializing each node and used that to sort the identifiers.  That's good enough for unit testing but it's not something that could be used IRL.
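Roughly, the workaround looks like the following sketch.  MemberInfo and its accessors are stand-in names for illustration; the real branch stores the clock values in the Rapid metadata described earlier.

// Illustrative only: sort view members by the start time recorded in their
// metadata so the "oldest" member is always left-most, mimicking GMS ordering.
members.sort(Comparator
    .comparingLong((MemberInfo m) -> m.getStartTimeMillis())
    .thenComparingLong(m -> m.getStartTimeNanos()));
MemberInfo elder = members.get(0);   // the distributed lock service's Elder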


--> Geode requires stable ordering of node IDs in a membership view or it needs an alternative way of choosing an Elder for the distributed lock service.


With that change the distributed test passed, as did other tests that I selected at random in DistributedAckRegionDUnitTest.  However, when I fired off all of the tests to run sequentially things started to go wrong.  A run would get part way through the tests and then hang.


Looking at the logs I could see the first satellite JVM try to join with the Locator node and some other node that the test hadn't started.


   [vm0] [info 2021/01/13 11:52:29.455 PST <RMI TCP Connection(1)-192.168.1.218> tid=0x14] mac-a01:63410 is sending a join-p2 to mac-a01:63200 (Locator) for config 4530718538436871917


  [vm0] [info 2021/01/13 11:52:29.455 PST <RMI TCP Connection(1)-192.168.1.218> tid=0x14] mac-a01:63410 is sending a join-p2 to mac-a01:63383 (?????) for config 4530718538436871917


and then Rapid's Paxos phase2 would fail:


  [vm0] [error 2021/01/13 11:52:34.455 PST <RMI TCP Connection(1)-192.168.1.218> tid=0x14] Join message to seed mac-a01:63200 (Locator) returned an exception: com.vrg.rapid.Cluster$JoinPhaseTwoException


but there was no logging for this mac-a01:63383 node.


I finally looked at the previous test's logs and found that this node was used in that test and had shut down but the remaining node (the Locator) didn't install a new view removing it.  A quick talk with the Rapid developer confirmed that Rapid requires at least two nodes to make any decisions, so I modified the test to use two Locators instead of one.  That way there are always two nodes to come to agreement on removal of lost nodes.

--> Geode assumes the cluster can devolve to one node.


When running all 101 tests in DistributedAckRegionDUnitTest I found that shutting down a Rapid node doesn't seem to inform all of the other nodes in a small cluster, so they're left to figure out on their own that the node is gone.  This caused new nodes to take 20 seconds or more to join the cluster while the seed nodes worked out the loss of old nodes.  Attempts to join would fail and then there would be a retry.  Decreasing the timeout in Rapid's NettyClientServer to 2 seconds helped with that, but joining the cluster usually took a full 2 seconds or more compared to 50 to 250ms in Geode's GMS.  A better failure detector and a UDP-based heartbeat might give tighter response times.


--> Geode has tight SLAs concerning failure detection and needs to know whether a node has crashed or left gracefully.


Regression Testing


VMware has a large collection of regression tests that it uses to harden its own releases of Geode.  These are similar to the Distributed Unit Tests I described above but use a different framework, can run on multiple machines and typically run for much longer periods of time.  I chose one of the tests from the Membership regression to test the Rapid integration's handling of lost & new members.  After the poor performance of the default ping-pong failure detector with Netty in the Distributed Unit Tests I didn't expect this to go well, and it didn't.  Though there were only 10 nodes in the test, all of them failed in the first few minutes with processes unable to join the cluster.


Looking through the logs I could see some nodes in the cluster accepting a new node while others didn't and the join attempt failing with a JoinPhaseTwoException.  The accepting nodes would then remove the new node.


--> Like Geode's Distributed Unit Tests, VMware's regression tests require a membership service that can handle frequent shutdown and startup of nodes.


Scalability Testing


I ran a small scalability test with Geode's membership implementation and then with the Rapid implementation.  The test doesn't spin up a Geode Cache, just membership instances.  For these tests I used my 16GB Mac laptop and ran the tests under IntelliJ.  Sometime I'll grab a bunch of machines in AWS or GCP and do a larger test.


Rapid was unable to create a cluster of more than 71 nodes before running out of file descriptors in this single-machine test, probably due to the use of Netty instead of the connectionless JGroups UDP messaging used by Geode's membership implementation.


Using Rapid, nodes were able to join faster than when using Geode's membership system but the time to join was more erratic. The time required to shut down the cluster was much higher with Rapid.


Conclusions


Integration of Rapid and Geode was fairly straightforward but pointed out dependencies that Geode has on its existing membership system that go beyond what you might expect.  Untangling these so that the membership module is truly pluggable will take work.


These are the dependencies that I noticed:

  • Geode requires a way to transmit metadata about a node in membership views and node identifiers.  The metadata is static and is known when joining the cluster.
  • Geode uses a Locator service to find the node that's the membership coordinator.
  • Geode needs to handle concurrent startup of the initial nodes of a cluster.
  • Geode expects monotonically increasing membership view identifiers.
  • Geode requires stable ordering of node IDs in a membership view or it needs an alternative way of choosing an Elder for the distributed lock service.
  • Geode assumes the cluster can devolve to one node.
  • Geode has tight SLAs concerning failure detection and needs to know whether a node has crashed or left gracefully.
  • Like Geode's Distributed Unit Tests, VMware's regression tests require a membership service that can handle frequent shutdown and startup of nodes.

Integration code can be found here:
https://github.com/bschuchardt/geode/tree/feature/rapid_integration
https://github.com/bschuchardt/rapid/tree/geode_integration


Friday, August 9, 2013

Shifting madness


My team was chasing an odd bug for a couple of weeks. In the GemFire distributed cache we store a 64-bit timestamp representing the last-modified-time for a cache entry and use it for entry expiration and inter-site consistency checks.  This value was all of a sudden going back in time a day and a half, causing early expiration and inter-site inconsistencies.

Well, what do you do?  You review recent changes to the product, for one thing.  Not long ago two people worked on the code for handling this timestamp.  The significant change seemed to be in using the top 8 bits of this field to store boolean flags and using a java.util.concurrent.atomic.AtomicLongFieldUpdater to access the field.

  private static final long LAST_MODIFIED_MASK = 0x00FFFFFFFFFFFFFFL;

    long storedValue;
    long newValue;
    do {
      storedValue = lastModifiedUpdater.get(this);
      // keep the flag bits in the top byte, then install the new timestamp
      newValue = storedValue & ~LAST_MODIFIED_MASK;
      newValue |= lastModifiedTime;
    } while (!lastModifiedUpdater.compareAndSet(this, storedValue, newValue));


This code looks okay unless the relatively new AtomicLongFieldUpdater is messing up.  The other change was the introduction of a new bit to store in the top 8 bits of the field:

  private static final long VALUE_RESULT_OF_SEARCH   = 0x01L << 56;
  private static final long UPDATE_IN_PROGRESS       = 0x02L << 56;
  private static final long TOMBSTONE_SCHEDULED      = 0x04L << 56;
  private static final long LISTENER_INVOCATION_IN_PROGRESS = 0x08 << 56;


There's a problem with this last line.  We're shifting a 32-bit integer 56 places to the left. A good C compiler will complain about this but the Java compiler seems okay with it.  Here's a C program:


#include "stdio.h"

int main(int argc, char *argv[]) {
    long l = 1 << 30;
    printf("1 << 30=%ld",l);

    l = 1 << 32;
    printf("1 << 32=%ld",l);
    l = 3 << 32;
    printf("3 << 32=%ld",l);
}
~> cc -o testit test.c
test.c: In function 'main':
test.c:6: warning: left shift count >= width of type
test.c:8: warning: left shift count >= width of type


And the equivalent Java program:


import java.io.*;

public class test {
  public static void main(String[] args) {
    long l = 1 << 30;
    System.out.println("1 << 30="+l);

    l = 1 << 32;
    System.out.println("1 << 32="+l);

    l = 3 << 32;
    System.out.println("3 << 32="+l);

  }
}


~> javac test.java



Running these two programs gives different results:


~> ./testit
1 << 30=1073741824

1 << 32=0
3 << 32=0

~> java test
1 << 30=1073741824

1 << 32=1
3 << 32=3

So the << operator works differently in Java than in C.  Shifting 0x08 fifty-six bits to the left in C results in 0 but Java turns it into 0x8000000!  The Java Language Specification says:

If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & (§15.22.1) with the mask value 0x1f (0b11111). The shift distance actually used is therefore always in the range 0 to 31, inclusive.
So 56 (binary 111000) is silently turned into 24 (binary 011000) by the compiler!  The new constant, 0x8000000, is 8 << 24, not 8 << 56 as intended!
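You can verify the masking rule directly:

System.out.println(Long.toHexString(0x08 << 56));   // prints 8000000  (int shift, distance 56 & 0x1f = 24)
System.out.println(Long.toHexString(0x08L << 56));  // prints 800000000000000  (long shift, distance 56)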

This bit was being set and cleared in the timestamps when the new flag was used.  For instance, 2013/07/25 15:55:10.906 PDT is 1374792910906 on the millisecond clock.  Clearing bit 0x8000000 turns the clock back to 1374658693178, which is 2013/07/24 02:38:13.178 PDT.  That's roughly 37 hours earlier than the unmolested timestamp.  No wonder entries were being considered "old" before their time.

Changing the constant to have "L" like the others fixed the problem.
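That is, the constant becomes:

  // promote the left operand to long so all 64 bits participate in the shift
  private static final long LISTENER_INVOCATION_IN_PROGRESS = 0x08L << 56;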

Tuesday, February 5, 2013

I was reviewing some code for a coworker and saw something that I didn't know was possible...


Integer counter = 0;
.
.
.
synchronized(counter) {
  counter++;
  // do other work under sync
}


I thought that the compiler might be accepting this as a valid use of autoboxing, as in

Integer counter = 0;

synchronized(counter) {
  int i = counter.intValue();
  i++;
  // do other work under sync
}

in other words the value is pulled out of the Integer and held in a temporary variable where it is incremented and then thrown away.

This turned out not to be the case at all.  The compiler actually creates code that will increment the value like this but it affects counter just as if this were an "int" field!  So my coworker was right - this "++" is incrementing the counter like he wanted it to do.
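To spell out what the compiler effectively generates for that "++" (paraphrased as source, not actual bytecode):

// counter++ on an Integer unboxes, increments, and re-boxes into a new object
counter = Integer.valueOf(counter.intValue() + 1);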

But now there is something else wrong with the code!  Java Integers are immutable, and that "++" is assigning a new Integer to the counter variable.  This makes his synchronized(counter) statement useless in protecting anything but the counter++.  Once that's finished there is a new object in counter.  If one thread synchronized on Integer(0), the counter++ would change it to Integer(1).  Another thread could then enter the synchronized block holding a lock on Integer(1) while the original thread continued to hold the lock on Integer(0).

t1: synchronized(Integer(0)) {
t1: counter++ // in other words, counter=Integer(1)
t2: synchronized(Integer(1)) {
t1 & t2: // do other work under sync


What else is wrong with this?  What about the objects we're synchronizing on?  I wrote a short program to look at the Integers generated in code such as this.


public class incInt {

  public static void main(String args[]) {
    Integer i = 0;
    System.out.println("i="+i + " hash="+System.identityHashCode(i));
    i++;
    System.out.println("i="+i + " hash="+System.identityHashCode(i));
    i++;
    System.out.println("i="+i + " hash="+System.identityHashCode(i));

    System.out.println("resetting to zero");

    i = 0;
    System.out.println("i="+i + " hash="+System.identityHashCode(i));
    i++;
    System.out.println("i="+i + " hash="+System.identityHashCode(i));
    i++;
    System.out.println("i="+i + " hash="+System.identityHashCode(i));
  }

}

The result of running this with Oracle's JRE 1.7.0_5 shows that there are canonical values for autoboxed zero, one and two.

> java incInt
i=0 hash=4991049
i=1 hash=32043680
i=2 hash=9499036
resetting to zero
i=0 hash=4991049
i=1 hash=32043680
i=2 hash=9499036

Here's a blog post that claims that [-128,127] are cached by the JVM and used in autoboxing.  It turns out that the post is right.  I modified the test to also print the hashes of Integer.valueOf(0), Integer.valueOf(1) and Integer.valueOf(2), and they are the same objects as the autoboxed values:

> java incInt
i=0 hash=31879808
i=1 hash=6770745
i=2 hash=12835244
resetting to zero
i=0 hash=31879808
i=1 hash=6770745
i=2 hash=12835244
Integer.valueOf(0)=31879808
Integer.valueOf(1)=6770745
Integer.valueOf(2)=12835244

Getting back to the code under review, this means that the synchronization is at least sometimes using a canonical object used by the whole JVM.  Anything could sync on Integer.valueOf(0), causing this code to be affected by code running in other threads.  All synchronization should be done on private state or by using well-tested concurrency utilities to avoid accidental conflicts and meddling.
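For comparison, here's a minimal rewrite along those lines, using a private lock object and an AtomicInteger.  The names are mine, not from the code under review:

private final Object lock = new Object();                   // private lock, never reassigned
private final AtomicInteger counter = new AtomicInteger();  // mutable count with a stable identity

void increment() {
  synchronized (lock) {
    counter.incrementAndGet();
    // do other work under sync
  }
}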



Thursday, September 8, 2011

patent granted

This is a follow-up to a post I made last year.  About four years ago I applied for a patent on a method of replicating data from one process to another without blocking operations on the data.  It's used in the GemFire data fabric product to create backup copies of data buckets, and is called "state flush operation".  In a way it provides a temporal point of virtual synchrony that assures the new replica bucket sees all of the changes to the data that the original bucket sees.


I got the idea for this work after reading a paper by Chandy and Lamport published back in 1985, Distributed Snapshots: Determining Global States of Distributed Systems.


Basically what you do is create a sort of catcher's mitt that is set up to record operations on the data during the transfer, then you announce to everyone that the transfer is going to happen.  At that point any operations performed on the original bucket will also be sent to the new replica bucket.


Then you send a message to each member of the distributed system that holds the bucket telling them to apply in-process operations to the bucket and then "flush" those changes to the member holding the original bucket.  A message gets sent from each of these members to the original bucket holder that tells it which messages have been sent.  An observer is created that watches for all of the changes to arrive and be applied to the original.  It then sends a notice to the new replica that the operation has completed.


At this point the data may be copied from the original bucket to the replica bucket, taking care not to overwrite any items that have shown up in the catcher's mitt.  Because of the flush we know that the copied data holds any changes that were made prior to creating the catcher's mitt, but the catcher's mitt may hold operations that are newer than what is reflected in the copied data.



Tuesday, August 30, 2011

Chaos Monkey

If you haven't heard of the Netflix Chaos Monkey, read Jeff Atwood's blog. This "monkey" roams around their cloud app killing processes to ensure that the system is resilient. IMO the MTBF for Java VMs isn't all that long unless a great deal of testing has been done, so this is a great way to keep the system healthy.  Jeff asserts that having the monkey in their system was at least part of the reason that Netflix survived the Amazon Web Services (AWS) crash.


When we test GemFire we run many High Availability (HA) tests that randomly kill server processes and then test to ensure that the product continues to run and maintains consistency.  That guarantees that the product reacts to failures correctly in short (10-60 minute) tests, but what about long running distributed systems?  It would be nice to build an optional Chaos Monkey into the product that randomly killed off server-side processes (can't kill the clients!).   The system-monitoring infrastructure would have to be able to recognize the Monkey's work so that alarms aren't raised, but how hard could that be?


A smart Monkey could examine metadata about the system and, perhaps, give weight to older processes now and then when selecting a process to kill.  That would tend to shake things up a little more in the distributed system and test things like lock grantor fail-over.


The Monkey would need to have a collection of blind spots built into it so that customers could protect VMs that they don't want the Monkey to, er, monkey with.  GemFire might be well tested and be able to withstand a Chaos Monkey, but that doesn't mean the systems built with it could survive degradation of their own essential services.

Friday, May 6, 2011

Moving to the Cloud

For half a year I've been doing some of my development work on a virtual computer hosted in a data center that I've never seen.  It works remarkably well and is like using RDP to connect to a desktop at work when you're telecommuting.  I fire up VMware View and connect to the computer, giving it one of the Dexpot screens on my laptop.  I can even connect to it on my iPod Touch using WYSE Pocket Cloud and zaTelnet.


The downside has been that I use other machines to run tests and those machines were seven network hops away from my cloud-based development machine.   Any network interaction with those machines was painfully slow.  So slow that I stopped using the virtual computer for much of anything.

Recently that situation changed.  Most of the rack-mounted machines that we own were moved to the same data center, so that now it's seven network hops from my desk to all of the machines I use.  But it's now only one hop from my cloud-based virtual computer to them.  The situation is reversed and the virtual computer is a life saver.  I log in, pop up VMware View and the rest of my computing day is spent in the cloud.


Wednesday, November 3, 2010

Performance of Message Patterns

I've been faced with a dilemma in distributed processing.  I have a "client" process that needs to send a message to several servers.  The servers contain a versioning service that stamps a version on each message for concurrency management, but the client doesn't have this service.  Each of the servers must end up having the same version number for the message.  And it must be blindingly fast.

Either the servers are going to have to talk to each other to agree on a version for the message or some other trick is going to have to be used.

The simplest approach is to send a request to one of the servers to get a version number and then send the version out with the message to all of the servers in parallel.  This is nice because the message can be serialized to the network in parallel and there are mechanisms in place that will do this very quickly.

Here's a diagram of the first scenario.
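To make the flow concrete, here's a hedged sketch of that first scenario from the client's side.  The types and calls (versionService, messenger, sendAsync, Ack) are hypothetical stand-ins, not GemFire APIs:

// Scenario 1 sketch: one round trip to fetch a version, then a parallel send
// to every server, waiting for all acknowledgements before returning.
long version = versionService.requestVersion(server1);     // blocking round trip
message.setVersion(version);
List<CompletableFuture<Ack>> acks = servers.stream()
    .map(server -> messenger.sendAsync(server, message))   // serialize & send in parallel
    .collect(Collectors.toList());
acks.forEach(CompletableFuture::join);                      // wait for every server's ack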


Instead of sending a request for a version to the first server, the client could send the message and expect a version number in response.  It could then add the version to the message and send it to the other servers.  This would require serializing the message to on-wire format more than once, but might be faster for small messages.


Another approach is to delegate the sending of the message to one of the servers ("one hop messaging").  The client sends the message to one of the servers and it, in turn, sends the message to the other servers after adding a version stamp to it.  This also requires some level of extra serialization since the client must write the message to the network and then the selected server must forward the message to the other servers over the network.

We can simplify the acknowledgment scheme by having each server send an ack directly back to the client instead of piping them through Server1.  This would cause context switching in the client, but might be better than funneling the acks through Server1.


Yet another approach is to have the client send the message to all of the servers and include a tag that selects one of the servers to supply the version tag.  After the selected server (Server1 in the diagram below) receives the message it stamps it with a version and then sends that version to the other servers.  The other servers wait for the version number before accepting the message.  This, too, requires extra context switching because the other servers have to wait for a signal that the version number has been received.



I wrote a program to simulate these different approaches to see how they compared.  Like the diagrams, I used a client and three servers running on a fairly large and fast Linux computer.


The payload column shows the size of the message that was used in each test run, and each column shows how many seconds it took the approach to handle 1 million messages.  The "no versioning" column is a base-line that shows the performance of simple send-with-acknowledgement messaging.


The results were a little surprising to me.  I had expected the last approach, where the client sends the message to all servers and they wait for one to send a version number, to have the best performance.  Instead, all but one of them converge to the same performance level when messages reach 10,000 bytes in size.  At that level they are only moving about 70MB of data through the system per second, so they aren't being throttled by the network, but with the extra synchronization points and context switches CPU was becoming a limiting factor.

The "version request" and "message returns version" scenarios (the first two diagrams above) are clear losers because server2 and server3 do not even see the message until a complete send and response cycle is performed with server1.

The "one hop" scenario had a poor showing because of the long acknowledgement chain, with both server2 and server3 sending their acks to server1.  Server1 has to wait for both acks before it finally sends its own ack back to the client.

The clear winner is the "one hop, ack client" algorithm with servers sending acknowledgements directly to the client.  It even converged with and then passed the base-line "no versioning" scenario at about 3000 bytes/message.