# NATS
NATS is a messaging system used by Cloud Control components to communicate with each other. It supports several use cases, primarily:
- Cloud Control API: Uses NATS to instruct components to perform activities, such as cache flushing & database restores
- Dynamic Filtering: Uses NATS to facilitate communication between dynamic filtering components in Userplanes & Controlplanes
## Deployment topology
Depending on the functionality enabled via Cloud Control, NATS can run in one of these topologies:
- Confined to a single namespace (Userplane or Controlplane)
- As a mesh, spanning across Userplanes & Controlplanes
### Confined to a namespace
Within a single namespace, a NATS deployment looks as follows:

The 3 NATS nodes form a cluster and use a NATS-internal mechanism to stay in sync with each other. A Service object named `nats-local` is created, and any NATS-compatible Cloud Control components within the namespace are automatically configured to connect to it.
If you deploy a Userplane with dnsdist, Recursor and the Cloud Control API (all with 1 replica for the sake of simplicity of the diagram), your NATS mesh would look as follows:

In the above diagram, the Cloud Control Agents in the dnsdist & Recursor pods are connected to the NATS mesh, as is the Cloud Control API.
### Userplane & Controlplane mesh
When NATS is deployed to a Controlplane, it automatically enables an extra mode, allowing it to serve as a `hub`. When it acts as `hub`, Userplane NATS deployments can join the Controlplane's NATS mesh as a `leaf`. For example, when you have a Controlplane and 2 Userplanes, with the Userplane NATS configured to connect to the Controlplane NATS:

To add a layer of security to the NATS mesh, Controlplane NATS clusters acting as `hub` will automatically create a user named `hubuser`. Userplane NATS clusters attempting to join the mesh will need to authenticate as this `hubuser` user before they are allowed to join the mesh.
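Joining a Userplane to a Controlplane hub can be sketched with the top-level NATS overrides below. This is a sketch only: the address and Secret name are illustrative placeholders, and the full list of `hub*` parameters is documented in the configuration reference.

```yaml
name: "userplane_berlin"
nats:
  enabled: true
  # Address of the Controlplane's nats-hub Service; the exact value depends
  # on how that Service is exposed (placeholder shown here)
  hubAddress: "nats-hub.controlplane.example.com"
  # Pre-provisioned Secret holding the hubuser password (placeholder name)
  hubPasswordSecretName: "my-hub-password-secret"
```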
### Cluster name
When NATS clusters join a mesh, each needs a unique name by which it can be identified. You can provide a `name` (top-level configuration item) in your Userplane or Controlplane values overrides using the following:
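```yaml
name: "userplane_berlin"
```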
This ensures the NATS cluster has the unique identifier `"userplane_berlin"` within the NATS mesh. If `name` is not supplied, the name of the Kubernetes namespace is used instead, which may not be unique if you have deployments across different Kubernetes clusters.
### Services exposed to the mesh
Cloud Control only exposes NATS services to the mesh if there is a need to do so. For example:
- A Postgres database in a Controlplane can only be reached by the Cloud Control API inside that Controlplane deployment.
- A dynamic filtering simulator in a Userplane is advertised to the Controlplane to ensure it can be interacted with via the dynamic filtering admin GUI/API
## Configuration Reference
NATS clusters can be configured as top-level items in the Userplane & Controlplane Helm Charts. Most use cases which depend on NATS enable it automatically, but it can also be enabled manually:
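```yaml
name: "my_deployment"
nats:
  enabled: true
```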
The above leads to a simple 3-node standalone NATS cluster, identifiable as a cluster named `"my_deployment"` if it is configured to be part of a larger mesh.
The full list of parameters which can be used to configure NATS:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `affinity` | k8s: Affinity | | Pod affinity (Kubernetes docs: Affinity and anti-affinity). If unset, a default anti-affinity is applied using `antiAffinityPreset` to spread pods across nodes |
| `antiAffinityPreset` | string | `"preferred"` | Pod anti-affinity preset. Available options: `"preferred"` `"required"` |
| `agentLogLevel` | string | `"info"` | Verbosity of logging for the agent container. Available options: `"debug"` `"info"` `"warn"` `"error"` |
| `containerSecurityContext` | k8s: SecurityContext | | SecurityContext applied to each container |
| `hostNetwork` | boolean | `false` | Use host networking for pods |
| `hub` | Hub | `{}` | Configuration of access to the NATS cluster from outside the namespace. Only applicable for Controlplane NATS clusters which will act as hub in a larger mesh |
| `hubAddress` | string | `""` | Address of the NATS hub to connect to when attempting to join a larger mesh. Only applicable to NATS clusters in a Userplane |
| `hubCA` | string | `""` | CA in PEM format to use for validation when connecting to a TLS-enabled NATS hub. Only applicable to NATS clusters in a Userplane |
| `hubCASecretName` | string | `""` | Name of a pre-existing Kubernetes Secret with a data item named `ca.crt` containing the CA in PEM format to use for validation when connecting to a TLS-enabled NATS hub. Only applicable to NATS clusters in a Userplane |
| `hubInsecure` | boolean | `false` | If `true`, skip certificate validation when connecting to a TLS-enabled NATS hub. Only applicable to NATS clusters in a Userplane |
| `hubPasswordSecretName` | string | `""` | Name of the pre-provisioned Secret containing the password for the `hubuser` user, used to authenticate when trying to join a larger NATS mesh. Only applicable to NATS clusters in a Userplane |
| `hubPasswordSecretKey` | string | `"nats_hubuserpassword"` | Name of the item in the Secret specified by `hubPasswordSecretName` which contains the desired password of the `hubuser` user. Only applicable to NATS clusters in a Userplane |
| `local` | Local | `{}` | Configuration of access to the NATS cluster from within the namespace |
| `logDebug` | boolean | `false` | If `true`, enable debug logging |
| `logTrace` | boolean | `false` | If `true`, enable trace logging |
| `nodeSelector` | k8s: NodeSelector | `{}` | Kubernetes pod nodeSelector |
| `passwordSecretName` | string | | Name of a pre-existing Kubernetes Secret containing a password to be set as NATS password for this cluster. If this is omitted, a random password is generated and stored in a Secret |
| `passwordSecretKey` | string | `"password"` | Name of the item in the `passwordSecretName` Secret holding the password |
| `podAnnotations` | k8s: Annotations | `{}` | Annotations to be added to each pod |
| `podLabels` | k8s: Labels | `{}` | Labels to be added to each pod |
| `podSecurityContext` | k8s: PodSecurityContext | | SecurityContext applied to each pod |
| `replicas` | integer | `3` | Number of replicas in the StatefulSet |
| `resources` | k8s: Resources | | Resources allocated to the nats container if `resourceDefaults` (global) is `true` |
| `tls` | TLS | | TLS configuration for inbound NATS traffic |
| `tolerations` | List of k8s: Tolerations | `[]` | Kubernetes pod tolerations |
| `topologySpreadConstraints` | List of k8s: TopologySpreadConstraint | `[]` | Kubernetes pod topology spread constraints |
### Hub
Only applicable when configuring a Controlplane NATS cluster which acts as `hub`. For example:
```yaml
name: "my_deployment"
nats:
  enabled: true
  hub:
    secretName: "my-hub-password-secret"
    secretPasswordKey: "password"
    tls:
      enabled: true
      certManager: true
```
The above enables TLS on the `nats-hub` Service with a certificate managed by Certmanager. In addition, the default `hubuser` user will be given the password stored in the pre-provisioned Secret named `my-hub-password-secret`.
The full list of parameters which can be used to configure hub access:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `secretName` | string | `""` | Name of the pre-provisioned Secret containing the password for the `hubuser` user, which Userplanes use to connect to this NATS cluster to form a mesh. If this is not supplied, a random password is generated for the `hubuser` |
| `secretPasswordKey` | string | `"nats_hubuserpassword"` | Name of the item in the Secret specified by `secretName` which contains the desired password of the `hubuser` user |
| `service` | Service | | Configuration of the `nats-hub` Service |
| `tls` | TLS | | TLS configuration for inbound traffic to the `nats-hub` Service |
### Local
Configuration of access to the NATS cluster from within the namespace. For example, to enable TLS on the local communication:
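```yaml
name: "my_deployment"
nats:
  enabled: true
  local:
    tls:
      enabled: true
      certManager: true
```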
| Parameter | Type | Default | Description |
|---|---|---|---|
| `service` | Service | | Configuration of the local Service |
| `tls` | TLS | | TLS configuration for inbound traffic to the local Service |
### Service
Parameters to configure the Service objects. For example:
```yaml
name: "my_deployment"
nats:
  enabled: true
  <scope>:
    service:
      type: LoadBalancer
      annotations:
        metallb.universe.tf/address-pool: name_of_pool
```
Where `<scope>` is either `local` or `hub`.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `allocateLoadBalancerNodePorts` | boolean | `true` | If `true`, Services with type `LoadBalancer` automatically assign NodePorts. Can be set to `false` if the LoadBalancer provider does not rely on NodePorts |
| `annotations` | k8s: Annotations | `{}` | Annotations for the Service |
| `clusterIP` | string | | Static cluster IP; must be in the cluster's range of cluster IPs and not in use. Randomly assigned when not specified |
| `clusterIPs` | List of string | | List of static cluster IPs; must be in the cluster's range of cluster IPs and not in use |
| `externalIPs` | List of string | | List of IP addresses for which nodes in the cluster will also accept traffic for this Service. These IPs are not managed by Kubernetes and must be user-defined on the cluster's nodes |
| `externalTrafficPolicy` | string | `Cluster` | Can be set to `Local` to let nodes distribute traffic received on one of the externally-facing addresses (`NodePort` and `LoadBalancer`) solely to endpoints on the node itself |
| `healthCheckNodePort` | integer | | For Services with type `LoadBalancer` and `externalTrafficPolicy: Local`, you can configure this value to choose a static port for the NodePort which external systems (mainly the LoadBalancer provider) can use to determine which node holds endpoints for this Service |
| `internalTrafficPolicy` | string | `Cluster` | Can be set to `Local` to let nodes distribute traffic received on the ClusterIP solely to endpoints on the node itself |
| `ipv4` | boolean | `false` | If `true`, force the Service to include support for IPv4, ignoring globally configured IP family settings and/or cluster defaults. If `ipv4` is `true` and `ipv6` remains `false`, the result is an IPv4-only SingleStack Service. If both are `false`, global settings and/or cluster defaults are used. If both are `true`, a PreferDualStack Service is created |
| `ipv6` | boolean | `false` | If `true`, force the Service to include support for IPv6, ignoring globally configured IP family settings and/or cluster defaults. If `ipv6` is `true` and `ipv4` remains `false`, the result is an IPv6-only SingleStack Service. If both are `false`, global settings and/or cluster defaults are used. If both are `true`, a PreferDualStack Service is created |
| `labels` | k8s: Labels | `{}` | Labels to be added to the Service |
| `loadBalancerIP` | string | | Deprecated Kubernetes feature, available for backwards compatibility: IP address to attempt to claim for use by this LoadBalancer. Replaced by annotations specific to each LoadBalancer provider |
| `loadBalancerSourceRanges` | List of string | | If supported by the LoadBalancer provider, restrict traffic to this LoadBalancer to these ranges |
| `loadBalancerClass` | string | | Used to select a non-default type of LoadBalancer class, to ensure the appropriate LoadBalancer provisioner attempts to manage this LoadBalancer Service |
| `publishNotReadyAddresses` | boolean | `false` | If `true`, the Service is populated with endpoints regardless of readiness state |
| `sessionAffinity` | string | `None` | Can be set to `ClientIP` to attempt to maintain session affinity |
| `sessionAffinityConfig` | k8s: SessionAffinityConfig | `{}` | Configuration of session affinity |
| `type` | string | `ClusterIP` | Type of Service. Available options: `"ClusterIP"` `"LoadBalancer"` `"NodePort"` |
### TLS
Parameters to configure TLS for inbound traffic. An example:
```yaml
name: "my_deployment"
nats:
  enabled: true
  <scope>:
    tls:
      enabled: true
      certSecretName: my-cluster-certificate
```
Where `<scope>` is either `local` or `hub`. In the above example, NATS will attempt to use the certificate present in the Secret `my-cluster-certificate` to start a TLS-enabled listener.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `certSecretName` | string | | Name of a Secret object containing a certificate (must contain the `tls.key` and `tls.crt` items) |
| `certManager` | boolean | `false` | Toggle to have a request created for Certmanager to provision a certificate. By default, this requests a Certificate covering the following. For the local Service: `nats-local`, `nats-local.[Namespace]`, `nats-local.[Namespace].svc`. For the hub Service in a Controlplane: `nats-hub`, `nats-hub.[Namespace]`, `nats-hub.[Namespace].svc`. Additional entries can be configured using `extraDNSNames` |
| `enabled` | boolean | `false` | Toggle to enable TLS. If set to `true`, a `certSecretName` must be set or `certManager` must be set to `true` to ensure a valid certificate is available |
| `extraDNSNames` | List of string | `[]` | List of additional entries to be added to the Certificate requested from Certmanager |
| `issuerGroup` | string | `"cert-manager.io"` | Group to which the issuer specified under `issuerKind` belongs. Default value is inherited from the global certManager configuration |
| `issuerKind` | string | `"ClusterIssuer"` | Type of Certmanager issuer to request a Certificate from. Default value is inherited from the global certManager configuration |
| `issuerName` | string | `""` | Name of the issuer from which to request a Certificate. Default value is inherited from the global certManager configuration |
| `certSpecExtra` | CertificateSpec | `{}` | Extra configuration to be injected into the Certmanager Certificate object's `spec` field. Disallowed options: `"secretName"` `"commonName"` `"dnsNames"` `"issuerRef"` (these are configured automatically and/or via other options) |
| `certLabels` | k8s: Labels | `{}` | Extra labels for the Certmanager Certificate object |
| `certAnnotations` | k8s: Annotations | `{}` | Extra annotations for the Certmanager Certificate object |