Parallelization (Active-Active Cluster of Up to 32 Nodes)
Parallelization is the TRscaler technique that provides system redundancy: a group of hosts on the same network segment (a redundancy group) shares an IP address, so that if one machine goes down, another host in the redundancy group takes over its tasks. Parallelization also allows a degree of load sharing between systems. A redundancy group can be extended to up to 32 hosts.
Parallelization works by utilizing the network itself to distribute incoming traffic to all TRscaler nodes in the cluster. Each packet is filtered on the incoming TRscaler interface so that only one node in the cluster accepts the packet. All the other nodes will just silently drop it. The filtering function uses a hash over the source and destination address of the IPv4 or IPv6 packet and compares the result against the state of the node.
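The per-node filtering decision can be sketched as follows. This is a minimal Python illustration, not TRscaler's actual code: the hash function, bucket computation, and the `accepts` helper are all assumptions chosen to show the idea that every node computes the same hash and exactly one node's index matches.

```python
import hashlib

NUM_NODES = 2   # cluster size (up to 32 in practice)
NODE_INDEX = 0  # this node's position in the redundancy group

def accepts(src_ip: str, dst_ip: str, node_index: int = NODE_INDEX,
            num_nodes: int = NUM_NODES) -> bool:
    """Return True if this node should accept the packet.

    Every node in the cluster computes the same hash over the source
    and destination addresses, so exactly one node's index matches the
    resulting bucket; that node accepts the packet and all the others
    silently drop it.
    """
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % num_nodes
    return bucket == node_index
```

Because the bucket depends only on the address pair, all packets of a given flow land on the same node, which keeps per-flow state consistent without any coordination between nodes.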
In this mode, Virtual IP uses a multicast MAC address, so that the switch forwards incoming traffic to all nodes.
However, a few operating systems and routers do not accept a multicast MAC address mapped to a unicast IP address. This can be resolved with one of the following unicast options. In scenarios where a hub is used, a multicast MAC is unnecessary and it is safe to use the ip-unicast mode. Manageable switches can usually be tricked into forwarding unicast traffic to all cluster node ports by configuring them into some sort of monitoring mode. If this is not possible, the ip-stealth mode is another option, which should work on most switches. In this mode TRscaler never sends packets with its virtual MAC address as the source. Stealth mode prevents the switch from learning the virtual MAC address, so it has to flood the traffic to all of its ports.
Load sharing probably won't achieve a perfect 50/50 distribution between the two machines, since Virtual IP uses a hash of the source and destination IP addresses to determine which system should accept a packet, not the actual load. If one client generates 60% of the overall traffic and the other client generates 40%, the load sharing will be 60% (first node) to 40% (second node).
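This skew can be demonstrated with a small self-contained sketch. The hash function, IP addresses, and traffic figures below are illustrative assumptions, not measurements from TRscaler: each client's traffic is assigned to whichever node its address pair hashes to, so the split follows the clients' traffic shares rather than balancing the actual load.

```python
import hashlib

def bucket(src_ip: str, dst_ip: str, num_nodes: int) -> int:
    """Hypothetical flow hash: map an address pair to a node index."""
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_nodes

# Two clients talking to the same server; client A generates 60% of the bytes.
traffic = {"198.51.100.10": 600, "198.51.100.20": 400}  # MB per client
server = "192.0.2.1"

load = [0, 0]  # MB handled by each of the two nodes
for client, megabytes in traffic.items():
    load[bucket(client, server, 2)] += megabytes
print(load)
```

All of a client's traffic lands on a single node, so with only two flows the cluster can only ever split 60/40, or put everything on one node if both clients happen to hash to the same bucket; a near-even split emerges only with many flows.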