The ClusterBarrier works as follows. Each rendering node runs both an MPI process and a rendering process. The MPI process's inner loop reaches a point where it must wait before handing the rendering process new device data. Once the rendering process on a single node is ready, we want to make sure that all other rendering nodes are also ready; thus each node waits until all nodes are ready.
So both the MPI Client and the rendering process play a fundamental role in this synchronization. The details of how the barriers are paired are up to the implementation. To ensure that both sides use the same strategy, we do not expose the constructors; instead, we provide static methods for instantiation. To change what type(s) of barriers can be created, modify these static methods.
On the MPI side, the ClusterBarrier must take care not to block indefinitely if the rendering process has already exited.