
Managing Multi-Cluster Kubernetes with One Tool

Managing multiple Kubernetes clusters from one dashboard

If you run more than one Kubernetes cluster, you know the pain. Production in one region, staging in another, maybe a dev cluster on a local machine. Each cluster has its own context, its own dashboard, its own set of bookmarked URLs. You spend half your debugging time running kubectl config use-context and the other half wondering if you're looking at the right cluster. CPI-Control eliminates this problem entirely by letting you upload multiple kubeconfigs and view all clusters in a single, unified dashboard.

The Problem: Context Switching Kills Productivity

Most teams running multiple Kubernetes clusters end up with a fragmented monitoring setup. Production might have Datadog or Grafana, staging might have a basic Prometheus installation, and the dev cluster often has nothing at all. When an issue spans environments — a bug that appears in staging but not in dev, a config difference between staging and production — you're jumping between tools, tabs, and terminal windows to piece together what's happening.

Even if you use the same monitoring stack everywhere, you still need separate dashboards per cluster. You can't see staging and production pods side by side. You can't compare resource usage across environments. You can't tail logs from a staging service and a production service simultaneously. This fragmentation slows down incident response and makes routine operations more error-prone than they need to be.

The Solution: Multiple Kubeconfigs, One Dashboard

In CPI-Control, navigate to Settings → Integrations → Kubernetes and click Add Cluster for each of your clusters. Upload the kubeconfig for your production cluster and name it something descriptive like "production-eu-west". Then do the same for staging and dev. Each kubeconfig gets its own Kubernetes adapter internally, with its own connection, its own sync schedule, and its own set of discovered services.
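If you're unsure what to upload, a kubeconfig is a small YAML file that bundles the cluster endpoint, credentials, and a context tying them together. The sketch below shows the standard shape; the server URL, names, and credential values are placeholders, not anything specific to CPI-Control.

```yaml
# Minimal single-cluster kubeconfig (placeholder values throughout).
apiVersion: v1
kind: Config
clusters:
  - name: production-eu-west
    cluster:
      server: https://prod-eu-west.example.com:6443
      certificate-authority-data: <base64-encoded CA certificate>
users:
  - name: prod-admin
    user:
      client-certificate-data: <base64-encoded client certificate>
      client-key-data: <base64-encoded client key>
contexts:
  - name: production-eu-west
    context:
      cluster: production-eu-west
      user: prod-admin
current-context: production-eu-west
```

One file per cluster works well here: since each upload becomes its own adapter, there's no need to merge all contexts into a single kubeconfig first.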

The key insight is that all services from all clusters appear in the same service list on your dashboard. A service running in your production cluster shows up right next to the same service running in staging. CPI-Control uses the cluster name you provided as a prefix, so you can immediately tell which environment you're looking at. The service list is filterable by cluster, so you can either see everything at once or focus on a specific environment.
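To make the prefixing and filtering concrete, here is a minimal sketch of how a unified, filterable service list could be assembled from per-cluster discovery results. This is illustrative pseudologic, not CPI-Control's actual code; the function and data shapes are assumptions.

```python
def merged_service_list(clusters, only_cluster=None):
    """Merge per-cluster discoveries into one list, prefixed by cluster name.

    `clusters` maps cluster name -> list of discovered service names.
    `only_cluster` optionally narrows the view to one environment,
    mirroring the dashboard's cluster filter.
    """
    services = []
    for cluster, names in clusters.items():
        if only_cluster and cluster != only_cluster:
            continue
        # The cluster-name prefix makes the environment obvious at a glance.
        services.extend(f"{cluster}/{name}" for name in sorted(names))
    return sorted(services)

clusters = {
    "production-eu-west": ["api", "frontend"],
    "staging-us-east": ["api", "frontend", "experimental-worker"],
}
everything = merged_service_list(clusters)
staging_only = merged_service_list(clusters, only_cluster="staging-us-east")
```

The same service name appears once per cluster it runs in, so "api" in production and "api" in staging sit side by side rather than colliding.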

How It Works Under the Hood

When you add a kubeconfig, CPI-Control creates a dedicated adapter instance for that cluster. Each adapter runs its own SyncScheduler, which independently scans the cluster's namespaces for Deployments, Services, and Ingresses. The discovery results are merged into a single service database, with each service tagged with its cluster origin. Infrastructure bindings use the format cluster-name/namespace/deployment-name to ensure uniqueness across clusters.
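The binding format is simple enough to sketch directly. The helper below is hypothetical, but it shows why the slash-joined triple is unique: namespace plus deployment name is already unique within one cluster, so adding the cluster name keeps keys from colliding across clusters.

```python
def binding_key(cluster: str, namespace: str, deployment: str) -> str:
    # cluster-name/namespace/deployment-name, as described above.
    return f"{cluster}/{namespace}/{deployment}"

# The same Deployment in two clusters yields two distinct bindings:
prod = binding_key("production-eu-west", "backend", "api")
stage = binding_key("staging-us-east", "backend", "api")
```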

Health checks, pod metrics, and event collection run independently per cluster. This means a network issue with your staging cluster won't affect monitoring of production. Each adapter maintains its own connection pool and retry logic. If a cluster becomes unreachable, its services are marked as "unknown" status rather than "down" — CPI-Control distinguishes between a service that's actually failing and a monitoring connection that's interrupted.
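The "unknown" versus "down" distinction boils down to a small piece of logic. The function below is an assumed simplification of that behavior, not the real implementation: reachability is checked before health, so a broken monitoring connection never masquerades as a failing service.

```python
def service_status(cluster_reachable: bool, health_check_passed: bool) -> str:
    if not cluster_reachable:
        # We cannot observe the service at all -- don't claim it failed.
        return "unknown"
    return "up" if health_check_passed else "down"
```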

Live Logs Across Clusters

One of the most powerful features in a multi-cluster setup is aggregated log viewing. CPI-Control runs stern processes for each cluster independently, collecting logs into separate ring buffers. But when you open the log viewer, you can select services from any cluster and see their logs interleaved in a single, chronologically sorted view.
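Since each per-cluster buffer is already in time order, producing the combined view is a k-way merge. A minimal sketch, assuming log entries are (timestamp, cluster, service, message) tuples; the actual buffer format is internal to CPI-Control.

```python
import heapq

def interleave(*buffers):
    """Merge time-sorted per-cluster log buffers into one ordered stream."""
    return list(heapq.merge(*buffers, key=lambda entry: entry[0]))

prod_logs = [
    (1.0, "production-eu-west", "frontend", "GET /checkout 200"),
    (3.0, "production-eu-west", "api", "order 42 accepted"),
]
worker_logs = [
    (2.0, "processing", "worker", "picked up job for order 42"),
]
stream = interleave(prod_logs, worker_logs)
```

Because the inputs are each sorted, the merge is linear in the number of lines, which is what makes live interleaving cheap even across several clusters.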

Imagine debugging a request that flows from a frontend deployed on your production cluster to an API on the same cluster to a background worker on a separate processing cluster. Instead of opening three terminal windows with three different kubectl contexts, you select all three services in CPI-Control's log viewer and see the entire request flow in one stream. Each log line is color-coded by service and labeled with the cluster name, so you always know where each line originated.

The multi-service log viewer supports filtering by log level across all selected services simultaneously. Filter for ERROR level and you'll see errors from production and staging side by side — useful for confirming whether a fix deployed to staging actually resolved the error you're seeing in production.
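Cross-cluster level filtering is a straightforward pass over the combined stream. A sketch, assuming entries carry an explicit level field; the tuple shape here is an illustration, not the real record format.

```python
def filter_level(entries, level="ERROR"):
    """Keep only entries at the given level, regardless of origin cluster."""
    return [e for e in entries if e[3] == level]

entries = [
    (1.0, "staging-us-east", "api", "INFO", "request ok"),
    (2.0, "production-eu-west", "api", "ERROR", "timeout calling payments"),
    (3.0, "staging-us-east", "api", "ERROR", "timeout calling payments"),
]
errors = filter_level(entries)
```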

Deployments from All Clusters in One Timeline

The deployment timeline aggregates rollout events from every connected cluster. When a new image is deployed to your staging cluster, it shows up in the timeline. When the same image is promoted to production an hour later, that deployment appears right below it. You can filter the timeline by cluster to see only production deployments, or view everything chronologically to understand the full promotion path of a change.
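The promotion path described above falls out of grouping rollout events by image and sorting by time. This is an assumed reconstruction of the idea, with a made-up event shape of (timestamp, cluster, image):

```python
from collections import defaultdict

def promotion_path(events):
    """Group rollout events by image tag, ordered by time within each group."""
    by_image = defaultdict(list)
    for ts, cluster, image in sorted(events):
        by_image[image].append((ts, cluster))
    return dict(by_image)

events = [
    (70, "production-eu-west", "registry.example.com/app:v1.4.2"),
    (10, "staging-us-east", "registry.example.com/app:v1.4.2"),
]
path = promotion_path(events)
```

Reading one group top to bottom answers the promotion question directly: the image landed in staging first, then production an hour later.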

If you've also connected GitHub as a provider, CPI-Control correlates deployments with commits and pull requests. You can trace a production deployment back to the PR that introduced the change, see the CI status on that PR, and check whether the same commit was deployed to staging first. This deployment lineage is built automatically from the metadata available in your Kubernetes annotations and GitHub webhook data.

Practical Tips: Naming Conventions and Organization

To get the most out of multi-cluster management, establish a consistent naming convention for your clusters. We recommend the format environment-region, for example "production-eu-west", "staging-us-east", or "dev-local". This makes it immediately clear what you're looking at in the service list and in log output. Avoid generic names like "cluster-1" or "main" that don't convey the environment or purpose.
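If you want to enforce the convention in tooling or CI, a small check works; the pattern below is one possible encoding of the environment-region format suggested here, with the environment set chosen as an example.

```python
import re

# environment-region, e.g. "production-eu-west"; the allowed environments
# are an assumption -- adjust to your own taxonomy.
NAME_PATTERN = re.compile(r"^(production|staging|dev)-[a-z0-9-]+$")

def is_well_named(cluster: str) -> bool:
    return NAME_PATTERN.match(cluster) is not None
```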

Namespace organization matters too. If your clusters use consistent namespace naming — the same service deployed to the "backend" namespace in both production and staging — CPI-Control can better correlate services across environments. This isn't required, but it makes the dashboard more intuitive. Services with matching names in matching namespaces across different clusters are visually grouped, making it easy to compare their health and resource usage.
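The visual grouping amounts to keying services by (namespace, name) across clusters. A minimal sketch with an assumed record shape, just to show why consistent namespace naming pays off:

```python
from collections import defaultdict

def group_services(records):
    """Group (cluster, namespace, name) records by (namespace, name)."""
    groups = defaultdict(list)
    for cluster, namespace, name in records:
        groups[(namespace, name)].append(cluster)
    return groups

records = [
    ("production-eu-west", "backend", "api"),
    ("staging-us-east", "backend", "api"),
    ("staging-us-east", "backend", "report-job"),
]
groups = group_services(records)
```

If staging deployed the same service to a differently named namespace, it would land in its own group and the side-by-side comparison would be lost.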

For teams that use namespace-per-feature-branch patterns, CPI-Control's auto-discovery will pick up new namespaces as they're created and remove services when namespaces are deleted. The SyncScheduler includes ghost service detection that checks whether a Kubernetes resource still exists before marking it as down, preventing false alerts when ephemeral environments are torn down.
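Ghost detection can be sketched as a classification step that runs before any alerting: a service whose backing resource no longer exists is removed rather than reported down. The shapes below are assumptions for illustration.

```python
def classify(service, live_resources):
    """Classify a tracked service against the cluster's current resources.

    `live_resources` is the set of "namespace/name" keys that still exist.
    """
    key = f"{service['namespace']}/{service['name']}"
    if key not in live_resources:
        # The namespace was torn down -- drop the service, don't alert.
        return "removed"
    return "up" if service["healthy"] else "down"

live = {"backend/api"}
torn_down = classify({"namespace": "pr-123", "name": "api", "healthy": False}, live)
still_there = classify({"namespace": "backend", "name": "api", "healthy": True}, live)
```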

Managing CronJobs and Deployments Across Clusters

CronJobs are often the forgotten workloads in multi-cluster setups. They run in the background, and when they fail, nobody notices until a customer reports missing data. CPI-Control discovers CronJobs alongside Deployments and tracks their execution history. You can see the last run time, whether it succeeded, and how long it took — across all clusters.
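Summarizing a CronJob's execution history down to "last run, success, duration" is a small reduction. A sketch with an assumed run-record shape (start and end as Unix timestamps, history ordered oldest-first):

```python
def last_run_summary(history):
    """Summarize the most recent run of a CronJob, or None if it never ran."""
    if not history:
        return None
    last = history[-1]
    return {
        "last_run": last["start"],
        "succeeded": last["success"],
        "duration_s": last["end"] - last["start"],
    }

history = [
    {"start": 0, "end": 12, "success": True},
    {"start": 3600, "end": 3640, "success": False},
]
summary = last_run_summary(history)
```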

For Deployments specifically, CPI-Control tracks rollout status in real time. If a deployment in your production cluster is stuck in a rolling update — maybe the new pods are crash-looping — you'll see the rollout status directly on the service card. Combined with live logs from the failing pods, you can diagnose the issue without leaving the dashboard. And because you can see the same service's status in staging simultaneously, you can quickly verify whether the same image works there, narrowing down the issue to environment-specific configuration.
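A stuck rollout is detectable from the replica counts a Deployment's status already exposes: updated and available lagging behind desired past some deadline. The thresholds and field names below are illustrative, loosely mirroring how Kubernetes itself reports rollout progress.

```python
def rollout_state(desired, updated, available, seconds_elapsed, deadline=600):
    """Classify a rolling update from replica counts and elapsed time."""
    if updated == desired and available == desired:
        return "complete"
    if seconds_elapsed > deadline:
        # e.g. new pods crash-looping well past the progress deadline
        return "stuck"
    return "progressing"
```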

The multi-cluster setup in CPI-Control requires no additional infrastructure, no cross-cluster networking, and no shared monitoring backend. Each cluster is accessed independently through its kubeconfig, and all aggregation happens locally on your machine. This means there's zero operational overhead — no monitoring cluster to maintain, no central Prometheus to scale, no Thanos or Cortex to configure. Just upload your kubeconfigs and start seeing everything in one place.

Try CPI-Control Free

Monitor up to 50 services with zero cloud dependency.

Download for macOS