SPOG Quickstart Guide

Welcome to SPOG! This quickstart will have you exploring a fully functional multi-cluster DNS management platform in about 15 minutes. You'll get hands-on experience with the core concepts that make SPOG powerful, all in a lightweight demo environment.

What You'll Get

By the end of this quickstart, you'll have:

  • A working demo environment with three DNS clusters
  • The Glass UI for exploring unified cluster management
  • Real monitoring agents collecting actual metrics from each cluster
  • A taste of label-based organization showing how SPOG adapts to different organizational structures
  • Everything you need to evaluate SPOG for your use case

Overview

This quickstart deploys a complete SPOG demo environment:

Components

  • Controlplane with NATS - The central messaging hub that connects all components
  • 3 PowerDNS Clusters - DNS infrastructure representing different regions
  • Glass Instrumentation - Monitoring agents that collect metrics and logs from each cluster
  • Glass UI - The web interface for managing and monitoring all clusters

Prerequisites

  • Kubernetes cluster (v1.24+)
  • Helm 3.8 or later
  • kubectl configured to access your cluster
  • Access to registry.open-xchange.com (CloudControl registry)
  • Ingress controller installed (e.g., nginx-ingress)
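
Before starting, you can quickly confirm the client tooling is in place and that kubectl can reach your cluster:

Bash
# Verify Helm and kubectl versions, and cluster connectivity
helm version --short
kubectl version
kubectl get nodes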

How Will You Access Glass UI?

Decide now how your browser will reach Glass UI: via an Ingress, a kubectl port-forward, or a NodePort service. Several later steps differ depending on which method you choose.

Planning to use OIDC?

OIDC authentication requires HTTPS with SSL termination, which means you'll need an Ingress controller. If you plan to set up OIDC later (see OAuth & LDAP Setup), choose Ingress instead.


Registry Authentication

To access the charts and container images, you'll need to configure both Helm registry access and Kubernetes image pull secrets.

Configure Helm Registry Access

First, set your registry credentials as environment variables:

Bash
export REGISTRY_USERNAME="your-username"
export REGISTRY_PASSWORD="your-password"

Next, set any variables required by your chosen access method. If you chose Ingress, set GLASS_HOSTNAME -- the hostname your browser will use to reach Glass UI, without protocol or port. The NATS WebSocket URL is computed from it at deploy time and must match your browser's address bar, otherwise the UI loads but shows no data.

Bash
export GLASS_HOSTNAME="console.spog.local"

Hostname for OIDC

If you plan to configure OIDC authentication later (see OAuth & LDAP Setup), use console.spog.local as your hostname. The OIDC redirect URIs in the provider configuration must match this hostname exactly.

With port-forward, the UI will be accessible at http://localhost:8080. No GLASS_HOSTNAME is needed -- the NATS WebSocket URL is overridden to ws://localhost:8222.

With NodePort, the UI will be accessible at http://<node-ip>:31080. Set NODE_IP to your Kubernetes node's IP address:

Bash
export NODE_IP="<your-node-ip>"

Then log in to the OCI registry:

Bash
helm registry login registry.open-xchange.com \
  --username "$REGISTRY_USERNAME" \
  --password "$REGISTRY_PASSWORD"

This allows Helm to pull chart packages from the registry.
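
Before continuing, you can verify that the login worked by pulling chart metadata (this only reads from the registry):

Bash
# Should print the chart's name, version, and description if access is working
helm show chart oci://registry.open-xchange.com/cloudcontrol/powerdns --version 3.1.13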

Create Image Pull Secrets

For Kubernetes to pull container images from the private registry, create an image pull secret in each namespace where you'll deploy charts:

Bash
# Create the controlplane namespace and its pull secret
kubectl create namespace controlplane
kubectl create secret docker-registry registry-credentials \
  --docker-server=registry.open-xchange.com \
  --docker-username="$REGISTRY_USERNAME" \
  --docker-password="$REGISTRY_PASSWORD" \
  --namespace=controlplane

# Create namespaces and secrets for each cluster
kubectl create namespace quickstart-1
kubectl create secret docker-registry registry-credentials \
  --docker-server=registry.open-xchange.com \
  --docker-username="$REGISTRY_USERNAME" \
  --docker-password="$REGISTRY_PASSWORD" \
  --namespace=quickstart-1

kubectl create namespace quickstart-2
kubectl create secret docker-registry registry-credentials \
  --docker-server=registry.open-xchange.com \
  --docker-username="$REGISTRY_USERNAME" \
  --docker-password="$REGISTRY_PASSWORD" \
  --namespace=quickstart-2

kubectl create namespace quickstart-3
kubectl create secret docker-registry registry-credentials \
  --docker-server=registry.open-xchange.com \
  --docker-username="$REGISTRY_USERNAME" \
  --docker-password="$REGISTRY_PASSWORD" \
  --namespace=quickstart-3

When deploying charts in the following steps, you'll add --set global.imagePullSecretsList[0]=registry-credentials to each helm install command to use these secrets. This parameter is supported by all SPOG/Glass charts and CloudControl charts.
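
If you prefer a more compact form, the same four namespaces and secrets can be created in a single loop, equivalent to the commands above:

Bash
# One namespace plus one pull secret per deployment target
for ns in controlplane quickstart-1 quickstart-2 quickstart-3; do
  kubectl create namespace "$ns"
  kubectl create secret docker-registry registry-credentials \
    --docker-server=registry.open-xchange.com \
    --docker-username="$REGISTRY_USERNAME" \
    --docker-password="$REGISTRY_PASSWORD" \
    --namespace="$ns"
done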

Deployment Steps

Let's deploy your SPOG demo environment. We'll explain what each component does as we go.

1. Install PowerDNS CRDs

First, install the Custom Resource Definitions that allow Kubernetes to manage PowerDNS resources:

Bash
helm install powerdns-crds \
  oci://registry.open-xchange.com/cloudcontrol/powerdns-crds \
  --version "3.1.13"

This extends Kubernetes to manage PowerDNS service components like Authoritative servers, Recursors, and DNSDist load balancers as native resources.
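
You can confirm the definitions were registered. The exact CRD names depend on the chart version, so the grep pattern below is only a starting point:

Bash
# List PowerDNS-related CRDs (names may vary by chart version)
kubectl get crds | grep -i -e dns -e powerdns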

2. Deploy Controlplane

The controlplane in this quickstart is a minimal deployment with NATS messaging hub and the operator:

Bash
helm install quickstart-controlplane \
  oci://registry.open-xchange.com/cloudcontrol/controlplane \
  --version "3.1.13" \
  --namespace "controlplane" \
  --create-namespace \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/controlplane/examples/quickstart.yaml"

Note: This quickstart uses a simplified controlplane with only NATS and the operator enabled. In production deployments, you would typically also have dynamic filtering, PostgreSQL, and Redis components for additional features.
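
Before moving on, check that the controlplane pods (NATS and the operator) reach Running state:

Bash
# Pod names are generated by the chart; look for the NATS and operator pods
kubectl get pods -n controlplane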

3. Deploy PowerDNS Clusters

Deploy three PowerDNS instances. Each deployment includes the complete DNS infrastructure stack:

Cluster 1: quickstart-1

Bash
helm install quickstart-1 \
  oci://registry.open-xchange.com/cloudcontrol/powerdns \
  --version "3.1.13" \
  --namespace "quickstart-1" \
  --create-namespace \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/powerdns/examples/quickstart.yaml"

Cluster 2: quickstart-2

Bash
helm install quickstart-2 \
  oci://registry.open-xchange.com/cloudcontrol/powerdns \
  --version "3.1.13" \
  --namespace "quickstart-2" \
  --create-namespace \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/powerdns/examples/quickstart.yaml"

Cluster 3: quickstart-3

Bash
helm install quickstart-3 \
  oci://registry.open-xchange.com/cloudcontrol/powerdns \
  --version "3.1.13" \
  --namespace "quickstart-3" \
  --create-namespace \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/powerdns/examples/quickstart.yaml"

What gets deployed in each cluster:

  • Recursor: Recursive DNS resolver for looking up external domains
  • DNSDist: Load balancer and DNS traffic router
  • Operator: Manages the PowerDNS components lifecycle
  • CC-API: CloudControl API that SPOG queries for real-time status and configuration
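
You can watch the components listed above come up in each namespace:

Bash
# Each namespace should eventually show recursor, dnsdist, operator, and cc-api pods
for ns in quickstart-1 quickstart-2 quickstart-3; do
  kubectl get pods -n "$ns"
done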

4. Deploy Glass Instrumentation

Glass Instrumentation agents connect each cluster to the SPOG platform, enabling visibility into server status and health. Each deployment also sets the cluster labels that define its identity and organizational role.

For quickstart-1:

Bash
helm install glass-instrumentation-1 \
  oci://registry.open-xchange.com/cc-glass/glass-instrumentation \
  --version "1.0.0" \
  --namespace "quickstart-1" \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-instrumentation/examples/quickstart-1.yaml"
Labels: region: us-east, environment: production, tier: critical, team: [platform, security]

For quickstart-2:

Bash
helm install glass-instrumentation-2 \
  oci://registry.open-xchange.com/cc-glass/glass-instrumentation \
  --version "1.0.0" \
  --namespace "quickstart-2" \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-instrumentation/examples/quickstart-2.yaml"
Labels: region: us-west, environment: production, tier: standard, team: platform

For quickstart-3:

Bash
helm install glass-instrumentation-3 \
  oci://registry.open-xchange.com/cc-glass/glass-instrumentation \
  --version "1.0.0" \
  --namespace "quickstart-3" \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-instrumentation/examples/quickstart-3.yaml"
Labels: region: eu-west, environment: development, tier: standard, team: infrastructure

These labels are what enable SPOG's powerful organizational capabilities, allowing you to filter and manage clusters based on multiple dimensions.

What Glass Instrumentation provides:

  • Server discovery and health status
  • Cluster identification through labels
  • Connection to the SPOG control plane
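
To double-check the labels a given agent was deployed with, inspect the values applied to its Helm release (these come from the example files referenced above):

Bash
# Shows the user-supplied values for the release, including its cluster labels
helm get values glass-instrumentation-1 -n quickstart-1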

5. Deploy Glass UI

Finally, deploy the Glass UI to access your unified management interface. Use the variant below that matches the access method you chose earlier.

For Ingress:

Bash
helm install glass-ui \
  oci://registry.open-xchange.com/cc-glass/glass-ui \
  --version "1.0.0" \
  --namespace "controlplane" \
  --set ui.ingress.host="$GLASS_HOSTNAME" \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/quickstart.yaml"

For port-forward:

Bash
helm install glass-ui \
  oci://registry.open-xchange.com/cc-glass/glass-ui \
  --version "1.0.0" \
  --namespace "controlplane" \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/quickstart.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/quickstart-port-forward.yaml"

For NodePort, the --set ui.config.nats.serverUrl override below is required because NodePort bypasses the ingress -- the browser connects directly to the node IP, so the NATS WebSocket URL must also point to the node IP.

Bash
helm install glass-ui \
  oci://registry.open-xchange.com/cc-glass/glass-ui \
  --version "1.0.0" \
  --namespace "controlplane" \
  --set global.imagePullSecretsList[0]=registry-credentials \
  --set "ui.config.nats.serverUrl=ws://${NODE_IP}:31222" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/quickstart.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/quickstart-nodeport.yaml"

Glass UI provides:

  • Unified dashboard showing all clusters
  • Label-based filtering and organization
  • Real-time metrics and health status
  • DNS query testing capabilities
  • Cluster configuration management

Accessing Glass UI

With Ingress, access Glass UI at the hostname you configured once deployment completes; the ingress routes traffic to the Glass UI service automatically.
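
If the hostname does not resolve in your environment (console.spog.local is not a real DNS name), you may need to map it to your ingress controller's external IP. One common approach for a local test setup:

Bash
# Assumption: replace <ingress-ip> with your ingress controller's external IP
echo "<ingress-ip>  console.spog.local" | sudo tee -a /etc/hosts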

With port-forward, start two port-forwards so your browser can reach both the UI and the NATS WebSocket:

Bash
kubectl port-forward svc/glass-ui 8080:80 -n controlplane &
kubectl port-forward svc/glass-nats 8222:8080 -n controlplane &

Then open http://localhost:8080 in your browser.

OIDC Not Supported

Port-forward mode does not support OIDC authentication. If you plan to set up OIDC later, use Ingress instead.

With NodePort, access Glass UI at http://${NODE_IP}:31080.

Troubleshooting: UI loads but shows no data

If the Glass UI loads but clusters never appear, the NATS WebSocket URL may not match the hostname in your browser's address bar. Check the generated ConfigMap:

Bash
kubectl get configmap glass-ui-config -n controlplane -o jsonpath='{.data.nats\.json}'

The serverUrl hostname must be reachable from your browser. For example, if you access Glass UI at http://localhost:8080, the NATS URL should use localhost too (which is what the port-forward overlay sets).
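
If the URL is wrong, redeploy Glass UI with the corrected hostname. For the Ingress variant that looks like:

Bash
# Re-run with the hostname your browser actually uses
helm upgrade glass-ui \
  oci://registry.open-xchange.com/cc-glass/glass-ui \
  --version "1.0.0" \
  --namespace "controlplane" \
  --set ui.ingress.host="$GLASS_HOSTNAME" \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/quickstart.yaml"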

Clusters may take a few minutes to appear

After deploying, it can take 2-3 minutes for all clusters to fully appear in the UI. The Glass Instrumentation agents need time to connect to the control plane and announce their clusters. If you don't see all three clusters immediately, just wait a moment.

Default Credentials

  • Username: admin
  • Password: quickstart

Features to Explore

Once logged in, try these key features:

  1. Multi-Cluster Dashboard: View all three clusters with their current health and status
  2. Label Filtering: Use the filter controls to view subsets (e.g., only "production" or "critical" clusters)
  3. Log Viewing: See log output from DNS services inside the clusters
  4. DNS Testing: Execute test queries against specific clusters to validate configurations (a command-line variant is sketched below)
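
If you also want to test DNS from the command line, you can port-forward to a cluster's DNSDist service and query it with dig. The service name below is an assumption -- check kubectl get svc -n quickstart-1 for the actual name. Note that kubectl port-forward only carries TCP, hence +tcp:

Bash
# Hypothetical service name; verify with: kubectl get svc -n quickstart-1
kubectl port-forward svc/quickstart-1-dnsdist 5353:53 -n quickstart-1 &
dig +tcp @127.0.0.1 -p 5353 example.com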

Understanding the Label System

The quickstart demonstrates how labels organize clusters. Each cluster has labels set by its Glass Instrumentation configuration:

  • quickstart-1: region: us-east, environment: production, tier: critical, team: [platform, security]
  • quickstart-2: region: us-west, environment: production, tier: standard, team: platform
  • quickstart-3: region: eu-west, environment: development, tier: standard, team: infrastructure

Note that quickstart-1 has a multi-valued team label, meaning it belongs to both the platform and security teams. This demonstrates how clusters with shared ownership can match filters for either team.

What Labels Enable

Try these in the Glass UI:

  • Filter views: Show only production clusters or critical tier systems
  • Team organization: Platform team sees their clusters, infrastructure team sees theirs
  • Regional grouping: View all US clusters or just EU clusters
  • Access control: REGO policies can grant permissions based on labels

This simple labeling system scales from this 3-cluster demo to hundreds of clusters in production. You can also use these labels to set up preconfigured views in the UI that filter and group clusters in a way that's useful to you.

Things to Try

Once you're in the Glass UI, experiment with these filtering and grouping capabilities:

Filter Examples

  • Production Only: Use filter environment = "production" to see only clusters 1 and 2
  • Regional Wildcards: Try region like "us-*" to view all US-based clusters
  • Critical Systems: Filter by tier = "critical" to focus on high-priority infrastructure
  • Security Team: Filter by team = "security" to see quickstart-1 (it has team: [platform, security])

Grouping Examples

  • Group by Tier: Use group by tier to organize clusters by criticality level
  • Group by Region: Try group by region to see geographic distribution
  • Group by Team: Use group by team to see ownership boundaries

Combined Queries

  • Production Platform Clusters: environment = "production" and team = "platform" (shows quickstart-1 and quickstart-2)
  • Non-critical Development: environment = "development" and tier != "critical"
  • Security Team Production: team = "security" and environment = "production" (shows only quickstart-1)

These queries demonstrate the power of label-based organization -- the same patterns work whether you're managing 3 clusters or 300.

Cleanup

To remove the quickstart deployment:

Bash
# Uninstall Glass UI and Controlplane
helm uninstall glass-ui quickstart-controlplane -n controlplane

# Uninstall US East cluster
helm uninstall quickstart-1 glass-instrumentation-1 -n quickstart-1

# Uninstall US West cluster
helm uninstall quickstart-2 glass-instrumentation-2 -n quickstart-2

# Uninstall EU West cluster
helm uninstall quickstart-3 glass-instrumentation-3 -n quickstart-3

# Delete all namespaces
kubectl delete namespace controlplane quickstart-1 quickstart-2 quickstart-3

# Remove PowerDNS CRDs (this removes the DNS knowledge from Kubernetes)
helm uninstall powerdns-crds
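
After uninstalling, you can verify that nothing from the quickstart remains:

Bash
# Both lists should be free of quickstart releases and namespaces
helm list --all-namespaces
kubectl get namespaces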

Summary

You've successfully deployed a working SPOG demo environment with three DNS clusters.

This quickstart demonstrated:

  • How SPOG provides unified visibility across multiple clusters
  • Label-based organization for filtering and grouping
  • Centralized access through the Glass UI dashboard
  • Basic health monitoring and log viewing capabilities

The patterns shown here -- using labels to organize clusters and accessing them through a single interface -- work the same way whether you're managing this 3-cluster demo or a production environment with many more clusters.

Next Steps

Now that you have a working SPOG environment, dive deeper into understanding how it all works:

Learn the Architecture

Architecture Overview - Understand the core concepts that make SPOG powerful:

  • Hub-and-Spoke Architecture: How the control plane connects to distributed user planes
  • Label-Based Taxonomy: Design flexible cluster organization schemes
  • Filter Query Language: Master advanced filtering and grouping
  • REGO-Based Authorization: Implement fine-grained access control
  • Dashboards & Playlists: Create custom views for different teams
  • Navigation: Structure your UI for intuitive access