Cluster Architecture
Site Clusters
To support a distributed population of users, you will typically need to set up site clusters. A cluster is a group of GridGuard™ servers configured to replicate the nonce & GridPass data (PIN, corner, etc.) amongst themselves. A cluster is exposed to external appliances through load balancers that distribute the load amongst the nodes in the cluster. The load balancers also ensure that, if a node fails, all traffic is automatically re-directed to the other active nodes in the cluster.
Each node in a cluster will be set up to replicate to every other node in every other cluster. This ensures that all nodes across all sites remain synchronized. It also ensures that in the event of a disaster, where an entire data center is unavailable, the other data centers are available to handle the load. The nodes in the clusters will have to be sized appropriately to handle the loads under HA (High Availability) & DR (Disaster Recovery) conditions, as the sizing sketch below illustrates.
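As a rough illustration of sizing under HA and DR conditions, the short Python sketch below estimates the per-node load in each scenario. All figures (peak load, number of sites, and nodes per site) are hypothetical placeholders, not recommendations; substitute the values from your own deployment.

```python
# Hypothetical sizing sketch; every figure below is a placeholder, not a recommendation.
PEAK_AUTHS_PER_SEC = 200   # total peak authentication load across all sites
SITES = 2                  # number of site clusters
NODES_PER_SITE = 2         # GridGuard nodes in each cluster

# Normal operation: the load balancers spread the load across all nodes in all sites.
normal = PEAK_AUTHS_PER_SEC / (SITES * NODES_PER_SITE)

# HA (worst case): one node is down in each site, so the remaining nodes absorb its share.
ha = PEAK_AUTHS_PER_SEC / (SITES * (NODES_PER_SITE - 1))

# DR: an entire site is unavailable, so the surviving sites handle the full load.
dr = PEAK_AUTHS_PER_SEC / ((SITES - 1) * NODES_PER_SITE)

print(f"Per-node load, normal operation : {normal:.0f} auth/s")
print(f"Per-node load, HA worst case    : {ha:.0f} auth/s")
print(f"Per-node load, DR (site lost)   : {dr:.0f} auth/s")
```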
Cluster Configuration
The diagram shows the basic configuration of nodes in a cluster. The main points to note:
- GridGuard™ Servers are configured to replicate data over two separate ports. The diagram shows two nodes; however, any number of nodes can be configured similarly.
a) Ports 6268 or 6269 - These ports are used to replicate nonces across nodes. Nonces are transient, one-time-use tokens created during the login process and destroyed at the end of each authentication. Nonces are replicated only within a cluster, never across clusters. Port 6268 is used for non-encrypted replication; 6269 is used when encryption is turned on.
b) Ports 389 or 636 - These ports are used to replicate GridPass data (the PIN & position data). This data is replicated across all nodes in all clusters. Port 389 is used for non-encrypted replication; 636 for encrypted replication.
Whether to use encrypted or non-encrypted replication depends on the implementation, primarily on the location of the GridGuard™ servers. If the servers are located in the DMZ, encrypted replication is normally preferred. If the servers are located on the LAN, non-encrypted replication usually meets the requirements.
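During setup it can be useful to confirm that the replication ports are reachable between nodes before enabling replication. The following Python sketch is purely illustrative: the hostnames are placeholders, and in a given deployment only the ports actually in use (encrypted or non-encrypted) will be open.

```python
import socket

# Hypothetical node hostnames; substitute the addresses of your own GridGuard nodes.
PEER_NODES = ["gridguard-node1.example.com", "gridguard-node2.example.com"]

# Replication ports described above: nonce replication (6268 plain / 6269 encrypted)
# and GridPass replication (389 plain / 636 encrypted).
REPLICATION_PORTS = {
    "nonce (plain)": 6268,
    "nonce (encrypted)": 6269,
    "GridPass (plain)": 389,
    "GridPass (encrypted)": 636,
}

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for node in PEER_NODES:
    for label, port in REPLICATION_PORTS.items():
        status = "open" if check_port(node, port) else "unreachable"
        print(f"{node}:{port} [{label}] -> {status}")
```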
- LDAP Load Balancer Port 6268 / 6269 - This load balancer distributes nonce-based bind traffic among the GridGuard™ servers. Its virtual IP address (VIP) is the address used by the appliance or service being secured for token authentication schemes (see the bind sketch after this list). Port 6268 is used for LDAP binds; 6269 for LDAPS binds.
- LDAP Load Balancer Port 389 / 636 - This load balancer distributes user-lookup traffic among the GridGuard™ servers. Port 389 is used for LDAP lookups; 636 for LDAPS lookups.
- HTTPS Load Balancer Port 443 - This load balancer is used to balance the HTTPS load among the GridGuard™ servers.
Note: Load balancers are not provided as part of the GridGuard™ Virtual Appliance. Load balancers will need to be procured and configured separately.
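To show how a secured appliance interacts with the nonce VIP, here is a minimal bind sketch using the Python ldap3 library. The VIP hostname, bind DN, and credential value are placeholders, and the use of ldap3 is an assumption made for illustration; in practice the bind is issued by the appliance or service being secured rather than by custom code.

```python
from ldap3 import Server, Connection

# Hypothetical values: substitute your own VIP address and bind DN.
NONCE_VIP = "gridguard-ldap-vip.example.com"          # VIP of the 6268/6269 load balancer
BIND_DN = "uid=jdoe,ou=users,dc=example,dc=com"
ONE_TIME_CREDENTIAL = "nonce-or-grid-derived-value"   # supplied by the login flow

# Port 6269 with SSL corresponds to the LDAPS (encrypted) nonce bind described above;
# use port 6268 with use_ssl=False for a plain LDAP bind.
server = Server(NONCE_VIP, port=6269, use_ssl=True)
conn = Connection(server, user=BIND_DN, password=ONE_TIME_CREDENTIAL)

if conn.bind():
    print("Bind succeeded: the load balancer routed the request to a healthy node.")
else:
    print(f"Bind failed: {conn.result}")
conn.unbind()
```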
Cross-Site Replication
The diagram shows a high-level overview of the components that are replicated in cross-site replication. Every node in every cluster is aware of every other node across all clusters. The only difference is that nodes within a cluster replicate two kinds of data, whereas nodes across clusters replicate just one.
- Nodes within a site cluster replicate both Nonce & GridPass information. This ensures that in-flight authentications can be processed by any server in the cluster.
- Nodes across clusters replicate only GridPass information. This ensures that, in case of a site disaster, the user credentials have already been replicated to other clusters, which can then process logins for users whose primary site is no longer available (see the sketch following this list).
All of the replication can be performed over encrypted or non-encrypted channels.
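These replication rules can be summarized in a few lines of code. The sketch below assumes a hypothetical topology of two site clusters with two nodes each and simply reports which data types flow between each pair of nodes; it is an illustration of the rules above, not part of the product.

```python
# Hypothetical topology: two site clusters with two GridGuard nodes each.
CLUSTERS = {
    "site-a": ["gg-a1", "gg-a2"],
    "site-b": ["gg-b1", "gg-b2"],
}

def replicated_data(site_a: str, site_b: str) -> set[str]:
    """Return the data types replicated between two nodes based on their sites."""
    if site_a == site_b:
        # Within a site cluster: both nonce and GridPass data are replicated.
        return {"nonce", "gridpass"}
    # Across clusters: only GridPass data is replicated.
    return {"gridpass"}

for site_a, nodes_a in CLUSTERS.items():
    for site_b, nodes_b in CLUSTERS.items():
        for a in nodes_a:
            for b in nodes_b:
                if a != b:
                    data = sorted(replicated_data(site_a, site_b))
                    print(f"{a} ({site_a}) -> {b} ({site_b}): {data}")
```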