Mastering the Dynamic Client to Watch All Your CRDs
The Kubernetes ecosystem has revolutionized how we deploy, manage, and scale applications in a cloud-native world. At its heart lies a powerful, extensible API that allows users to define not just standard resources like Pods, Deployments, and Services, but also their own custom resources tailored to specific application domains. These Custom Resource Definitions (CRDs) are the backbone of Operators, bringing application-specific logic and state management into the Kubernetes control plane. However, interacting with these custom resources programmatically, especially when their structure isn't known beforehand, presents a unique challenge. This is where the Kubernetes Dynamic Client steps in, offering an unparalleled level of flexibility and power for observing and manipulating any resource in the cluster, including your bespoke CRDs.
This comprehensive guide delves deep into the world of Kubernetes Custom Resources and the indispensable role of the Dynamic Client. We will explore the theoretical underpinnings, practical implications, and detailed code examples that empower developers to build robust, generic controllers and tools capable of watching and reacting to changes across a multitude of CRDs, without needing compile-time knowledge of their Go structs. By the end of this journey, you will possess a profound understanding of how to leverage the Dynamic Client to gain full observability and control over your Kubernetes environment, fundamentally transforming your approach to cloud-native development and operations.
The Foundation: Understanding Kubernetes Custom Resource Definitions (CRDs)
Before we can master the art of watching CRDs, it's crucial to have a solid grasp of what they are and why they exist. Kubernetes, by design, provides a rich set of built-in resources to manage stateless and stateful applications, networking, storage, and configuration. These core resources, however, cannot possibly cover every single application-specific requirement or operational pattern that users might encounter. To address this inherent limitation, Kubernetes introduced the concept of Custom Resource Definitions (CRDs) as a powerful extension mechanism.
A CRD allows you to define a new type of resource that is managed by the Kubernetes API. Think of it as telling Kubernetes, "Hey, I'm going to introduce a new kind of object, and here's what it looks like and how it behaves." Once a CRD is created and registered with the Kubernetes API server, users can then create instances of that custom resource, just like they would create a Pod or a Deployment. These instances are called Custom Resources (CRs). The beauty of CRDs lies in their ability to seamlessly integrate new concepts into the Kubernetes API, making them first-class citizens of the cluster.
Why CRDs are Indispensable
The introduction of CRDs marked a pivotal moment in the evolution of Kubernetes, fundamentally changing how complex applications and services are managed. Here are some key reasons for their indispensability:
- Extending the Kubernetes API: CRDs allow you to extend the Kubernetes API with your own application-specific objects. This means you can define objects that represent your application's components, configurations, or operational states, and manage them using the same declarative API and tools (like kubectl) you use for built-in resources. For example, if you're running a database, you might define a DatabaseInstance CRD to represent a database deployment, complete with version, storage class, and backup policies.
- Building Operators: The most prominent use case for CRDs is in conjunction with the Operator pattern. An Operator is a method of packaging, deploying, and managing a Kubernetes application. Operators extend the Kubernetes API with custom resources, which act as abstractions for complex application configurations. A human operator encodes their operational knowledge into software that can manage instances of these custom resources. For example, the Prometheus Operator uses CRDs like Prometheus and ServiceMonitor to manage Prometheus instances and their scraping configurations. This automates tasks that would traditionally require manual intervention, improving reliability and reducing operational burden.
- Simplifying Application Management: For complex applications comprising multiple microservices, databases, caches, and networking components, managing all these individual Kubernetes resources can become overwhelming. CRDs provide a higher-level abstraction, allowing developers to define a single custom resource that encapsulates the entire application or a significant part of it. This simplifies deployment, scaling, and lifecycle management, presenting a unified interface to complex systems.
- Declarative Configuration: Like all Kubernetes resources, CRDs and their instances (CRs) are managed declaratively. You define the desired state of your custom resources in YAML manifest files, and Kubernetes works to achieve that state. This aligns perfectly with the GitOps philosophy, where your entire application and infrastructure state is version-controlled and deployed through automated pipelines.
- Empowering the Ecosystem: CRDs have fueled an explosion of innovation within the Kubernetes ecosystem. From service meshes like Istio (which uses CRDs extensively for traffic management rules) to serverless platforms like Knative, CRDs provide the foundational mechanism for extending Kubernetes capabilities in countless domains. They allow vendors and open-source projects to integrate their solutions deeply and natively with Kubernetes.
Anatomy of a CRD
A CRD itself is a Kubernetes resource that defines the schema and behavior of a new custom resource. When you create a CRD, you're essentially registering a new type with the Kubernetes API server. Let's break down its key components:
- apiVersion, kind, metadata: Standard Kubernetes fields identifying the object as a CustomResourceDefinition.
- spec.group: A logical grouping for your custom resources, typically a domain name in reverse (e.g., stable.example.com). This helps avoid naming collisions and organizes your custom resources.
- spec.names: Defines the various names for your custom resource:
  - plural: The plural name used in API endpoints (e.g., databases).
  - singular: The singular name used for object names (e.g., database).
  - kind: The kind field used in custom resource manifests (e.g., Database).
  - shortNames: Optional, shorter names for kubectl (e.g., db).
- spec.scope: Specifies whether the custom resource is Namespaced (like Pods) or Cluster scoped (like Nodes). Most application-specific resources are namespaced.
- spec.versions: A list of supported API versions for your custom resource. Each version can have its own schema and features:
  - name: The version string (e.g., v1alpha1, v1).
  - served: Whether this version is enabled via the API.
  - storage: Whether this version is the primary storage version in etcd. There must be exactly one storage version.
  - schema.openAPIV3Schema: This is perhaps the most critical part. It defines the validation schema for your custom resources using the OpenAPI v3 specification. This schema ensures that any custom resource created conforms to the defined structure, preventing malformed objects from being stored in etcd. It enforces data types, required fields, patterns, and other constraints. This powerful validation mechanism enhances the reliability and predictability of your custom resources, acting as a crucial gatekeeper for data integrity.
- spec.conversion (Optional): Defines how objects are converted between different API versions if multiple versions are supported. This is essential for evolving your CRD over time without breaking existing deployments.
- spec.subresources (Optional): Allows enabling the /status and /scale subresources, providing standard Kubernetes patterns for status reporting and scaling operations.
CRDs empower Kubernetes users to extend the platform's capabilities in virtually limitless ways, making it adaptable to an ever-wider array of workloads and operational paradigms. However, this flexibility introduces complexities when it comes to programmatic interaction, especially if you need to build generic tools that can handle any CRD without prior knowledge of its specific structure. This is precisely the challenge that the Dynamic Client is designed to overcome.
The Challenge of Dynamically Watching CRDs
While CRDs bring immense flexibility, they also introduce a significant challenge for developers building generic tools or controllers. When you're working with built-in Kubernetes resources (like Pods or Deployments), you typically use the client-go library's typed clients. These clients provide type-safe Go structs for each resource, allowing for compile-time checking and a familiar Go programming experience. For example, you can directly access pod.Spec.Containers or deployment.Status.Replicas.
However, what happens when you need to interact with a CRD whose Go struct you haven't defined or don't even know at compile time? Imagine building a monitoring tool that needs to watch all custom resources of a specific group and version, regardless of their kind. Or perhaps you're creating a generic compliance checker that scans for specific annotations or labels across all custom resources in a cluster. In these scenarios, relying on type-safe clients becomes impossible, as you would need to define a Go struct for every single potential CRD, which is neither scalable nor practical.
The core of the challenge lies in the dynamic nature of CRDs:
- Unknown Types at Compile Time: A custom resource's kind, group, and schema are only known at runtime, once the CRD is deployed to the cluster. A generic watcher cannot hardcode specific Go structs.
- Evolving Schemas: CRD schemas can evolve over time, with new versions introducing new fields or changing existing ones. A typed client built against an older schema would break when encountering newer versions, or simply miss new information.
- Resource Discovery: How do you even know which CRDs exist in a cluster without manually inspecting them? A generic tool needs to discover CRDs programmatically.
- Generic Operations: Performing common operations (listing, watching, getting, creating, updating, deleting) on resources with unknown structures requires a mechanism that doesn't rely on Go struct marshaling/unmarshaling to predefined types.
Traditional client-go typed clients are excellent for specific, known resource types. For instance, if you're building a controller that manages Foo resources (defined by a Foo CRD), you'd generate a Foo type, an informer, and a lister for it. This provides strong type safety and IDE autocompletion, which is highly beneficial for dedicated controllers. However, this approach falls short when generality is paramount.
Consider a scenario where you want to implement a custom admission webhook that validates any incoming CR based on a set of generic rules, or a backup solution that needs to snapshot all custom resources. Writing separate code paths for each potential CRD is not feasible. This is where the Kubernetes Dynamic Client becomes an indispensable tool, offering a solution that embraces the dynamic nature of CRDs rather than fighting against it. It provides a powerful interface for interacting with any Kubernetes resource using untyped unstructured.Unstructured objects, allowing you to defer type resolution and schema understanding to runtime.
Introducing the Kubernetes Dynamic Client
The Kubernetes Dynamic Client, found within the k8s.io/client-go/dynamic package, is a powerful and flexible interface designed for interacting with Kubernetes API resources when their exact Go type isn't known at compile time. Unlike the typed clients (which operate on specific Go structs like v1.Pod or appsv1.Deployment), the Dynamic Client operates on generic unstructured.Unstructured objects. This makes it ideal for building generic tools, controllers, and operators that need to work with arbitrary CRDs or even built-in resources in a type-agnostic manner.
What is dynamic.Interface?
The dynamic.Interface is the entry point to the Dynamic Client. It provides methods to obtain a ResourceInterface for a specific API group, version, and resource (a GroupVersionResource, or GVR). This ResourceInterface then allows you to perform standard CRUD (Create, Read, Update, Delete) and Watch operations on those resources.
The key difference from typed clients is that all data exchange happens via unstructured.Unstructured objects. An unstructured.Unstructured object is essentially a wrapper around a map[string]interface{}, allowing you to access and modify arbitrary JSON-like data using string keys, without needing a predefined Go struct. This mirrors the flexible nature of JSON and YAML manifests that define Kubernetes resources.
How it Differs from Typed Clients
Let's highlight the fundamental differences:
| Feature | Typed Client (e.g., corev1.PodInterface) | Dynamic Client (dynamic.Interface) |
|---|---|---|
| Type Safety | High (compile-time checking with Go structs). | Low (runtime type assertions on map[string]interface{}). |
| Resource Knowledge | Requires compile-time knowledge of the resource's Go struct. | Operates on unstructured.Unstructured objects; resource structure unknown at compile time. |
| Primary Use Case | Building specific controllers for known resource types, application logic. | Building generic tools, inspectors, admission webhooks, controllers for unknown CRDs. |
| API Calls | Direct methods like Pods().Get(), Deployments().Watch(). | Resource(gvr).Namespace(ns).Get(), Resource(gvr).Watch(). |
| Data Representation | Go structs (e.g., v1.Pod). | unstructured.Unstructured (wrapper around map[string]interface{}). |
| Code Verbosity | Generally less verbose for known types (direct field access). | More verbose due to map access and error handling for conversions. |
| Flexibility | Limited to known types; requires code generation for new CRDs. | Highly flexible; can interact with any resource that conforms to Kubernetes API conventions. |
Example of Typed vs. Dynamic Access:
// Typed Client Example (for a Pod)
pod, err := clientset.CoreV1().Pods("default").Get(ctx, "my-pod", metav1.GetOptions{})
if err == nil {
fmt.Printf("Pod name: %s, Image: %s\n", pod.Name, pod.Spec.Containers[0].Image)
}
// Dynamic Client Example (for any resource, let's assume a Pod initially)
// group-version-resource for Pods
gvr := schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"}
unstructuredObj, err := dynamicClient.Resource(gvr).Namespace("default").Get(ctx, "my-pod", metav1.GetOptions{})
if err == nil {
// Accessing fields dynamically requires type assertions
name := unstructuredObj.GetName()
spec, ok := unstructuredObj.Object["spec"].(map[string]interface{})
if ok {
containers, ok := spec["containers"].([]interface{})
if ok && len(containers) > 0 {
if container, ok := containers[0].(map[string]interface{}); ok {
// comma-ok assertions avoid a panic on an unexpected field type
image, _ := container["image"].(string)
fmt.Printf("Pod name: %s, Image: %s\n", name, image)
}
}
}
}
As you can see, accessing fields with the Dynamic Client is more involved, requiring explicit type assertions and checks. However, this verbosity is the price of ultimate flexibility.
Use Cases for Dynamic Clients
The Dynamic Client shines in scenarios where type-safety is a hindrance rather than a help:
- Generic Controllers/Operators: Building controllers that can manage any resource matching certain criteria (e.g., all resources with a specific label) without needing specific Go types for them. This is particularly useful for "meta-operators" or cross-cutting concern operators.
- Kubernetes Resource Inspectors/Auditors: Tools that scan the cluster for various resources, regardless of whether they are built-in or custom. For example, a tool to list all resources that have a certain annotation.
- Admission Webhooks: Webhooks that validate or mutate resources before they are stored by the API server. If the webhook needs to operate on various CRDs without knowing their types at compile time, the Dynamic Client is invaluable.
- Backup and Restore Solutions: Tools that need to enumerate and backup all resources, including all CRDs and their instances, across a cluster.
- CLI Tools/kubectl Plugins: Building custom kubectl plugins that interact with arbitrary CRDs.
- Multi-CRD Managers: When an application uses multiple custom resources and you need to manage their interactions in a generic way (e.g., ensuring a Database CR has a corresponding User CR).
- API Gateways and Management Platforms: In a broader context, a platform such as an API gateway might need to understand and route requests to services defined by various Kubernetes resources, including CRDs. While the gateway itself might not directly use the Dynamic Client for routing, the control plane managing the gateway configurations could leverage it to dynamically discover and adapt to new custom service definitions within Kubernetes. This provides a powerful abstraction layer, allowing operations teams to manage API exposure with granular control.
The Dynamic Client is a sophisticated tool that offers immense power but comes with the responsibility of careful error handling and runtime type checking. When leveraged correctly, it unlocks a new dimension of programmatic interaction with Kubernetes, making it possible to build truly generic and resilient cloud-native applications and infrastructure tools.
Setting Up Your Go Environment for Kubernetes Client-Go
To begin our journey of mastering the Dynamic Client, we first need to set up a proper Go development environment and understand how to configure client-go to connect to a Kubernetes cluster.
Basic Go Project Setup
Assuming you have Go installed, create a new project directory:
mkdir crd-watcher
cd crd-watcher
go mod init crd-watcher
This initializes a Go module, which is essential for managing dependencies.
Importing client-go
Next, you need to add client-go as a dependency. client-go is the official Go client library for Kubernetes and contains all the necessary packages for interacting with the Kubernetes API, including the Dynamic Client.
go get k8s.io/client-go@latest
This command fetches the latest version of client-go and updates your go.mod file. You might also see other k8s.io/* dependencies added, as client-go itself has dependencies on other Kubernetes core libraries.
Kubernetes Configuration: Connecting to Your Cluster
The client-go library needs to know how to connect to your Kubernetes cluster. There are two primary ways to configure this:
Inside the Cluster (e.g., within a Pod): When your Go application runs inside a Kubernetes Pod, it can automatically leverage the service account token and API server address injected into the Pod's environment. client-go provides rest.InClusterConfig() for this purpose. This is the standard way to configure controllers, operators, and other in-cluster applications.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Creates the in-cluster config
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}

	// Creates the clientset
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}

	// Example: List all Pods across all namespaces (the empty string means "all").
	// Note: You'll need appropriate RBAC permissions for the Pod's service account.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
}
```

When using InClusterConfig(), remember to configure appropriate Role-Based Access Control (RBAC) permissions for the Service Account that your Pod uses. Without the correct ClusterRole and ClusterRoleBinding (or Role and RoleBinding for namespaced resources), your application won't have the necessary permissions to interact with the Kubernetes API.
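As an illustration, here is a sketch of a ClusterRole and ClusterRoleBinding that would grant a watcher's service account read and watch access to CRDs themselves and to the example custom resources used later in this guide (all names here are placeholders to adapt to your own setup):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crd-watcher
rules:
  # Allow discovering CRDs themselves
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
  # Allow watching the custom resources of our example group
  - apiGroups: ["stable.example.com"]
    resources: ["myresources"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crd-watcher
subjects:
  - kind: ServiceAccount
    name: crd-watcher
    namespace: default
roleRef:
  kind: ClusterRole
  name: crd-watcher
  apiGroup: rbac.authorization.k8s.io
```

A generic watcher that must observe arbitrary CRs across the whole cluster typically needs broader rules, but start with the narrowest grant that covers the resources you actually watch.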
Outside the Cluster (e.g., from your local machine): This is typical for development, testing, and running administrative tools. client-go will look for a kubeconfig file, usually located at ~/.kube/config. The rest.InClusterConfig() function will not work here; instead, you'll use clientcmd.BuildConfigFromFlags() or clientcmd.NewNonInteractiveDeferredLoadingClientConfig() to load the configuration.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	var kubeconfig *string
	if home := homedir.HomeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	// Use the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err.Error())
	}

	// Create the clientset
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}

	// Example: List all Pods in the default namespace
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("There are %d pods in the default namespace\n", len(pods.Items))
}
```

When run, this code automatically uses your ~/.kube/config file (or the one specified by the -kubeconfig flag) to connect to the cluster.
Obtaining the Dynamic Client
Once you have a *rest.Config, you can create an instance of the Dynamic Client.
package main
import (
	"flag"
	"fmt"
	"path/filepath"

	"k8s.io/client-go/dynamic" // Import for the dynamic client
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)
func main() {
// ... (kubeconfig loading as shown above) ...
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// Create the standard clientset (optional, but often useful)
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
// Create the Dynamic Client!
dynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
panic(err.Error())
}
// Now you have both clients:
// clientset for typed interactions (Pods, Deployments, etc.)
// dynamicClient for untyped, dynamic interactions with any resource, especially CRDs.
// The blank assignments keep the compiler happy until the clients
// are actually used in the following sections.
_ = clientset
_ = dynamicClient
fmt.Println("Successfully connected to Kubernetes cluster and initialized clients.")
// You can now proceed to use dynamicClient to interact with CRDs.
// (Example code for interacting with CRDs will follow in the next sections.)
}
This setup forms the bedrock for any application interacting with Kubernetes, and specifically for those that need the power and flexibility of the Dynamic Client to handle custom resources. With these client objects in hand, we are now ready to explore how to interact with CRDs in a truly dynamic fashion.
Interacting with CRDs Using the Dynamic Client
Now that our environment is set up and we have an instance of the dynamic.Interface, we can dive into the core functionality: interacting with Custom Resource Definitions and their instances. The Dynamic Client allows us to perform typical CRUD (Create, Read, Update, Delete) operations, but its true power shines when we need to watch for changes across potentially unknown CRDs.
Discovering CRDs
Before you can watch or manipulate a custom resource, you often need to know which CRDs exist and what their GroupVersionResource (GVR) is. The GVR is crucial for the Dynamic Client, as it identifies the specific API endpoint for a resource.
You can list all CRDs in the cluster using the standard clientset (which provides a typed client for CustomResourceDefinition):
package main
import (
	"context"
	"flag"
	"fmt"
	"path/filepath"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)
// getKubeConfig builds and returns a Kubernetes rest.Config
func getKubeConfig() (*rest.Config, error) {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// Use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
return nil, fmt.Errorf("error building kubeconfig: %w", err)
}
return config, nil
}
func main() {
config, err := getKubeConfig()
if err != nil {
panic(err.Error())
}
// CRDs live in the apiextensions.k8s.io API group, which is served by its
// own typed clientset; the core kubernetes clientset does not include it.
apiextClient, err := apiextensionsclientset.NewForConfig(config)
if err != nil {
panic(err.Error())
}
// List all CRDs in the cluster
fmt.Println("Discovering CRDs...")
crdList, err := apiextClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(fmt.Errorf("error listing CRDs: %w", err))
}
fmt.Printf("Found %d CRDs:\n", len(crdList.Items))
for _, crd := range crdList.Items {
fmt.Printf(" - Kind: %s, Group: %s, Versions: %v\n", crd.Spec.Names.Kind, crd.Spec.Group, getVersionNames(crd.Spec.Versions))
}
// In a real application, you'd filter this list to find the CRDs you care about
// and then construct their GVRs.
}
func getVersionNames(versions []apiextensionsv1.CustomResourceDefinitionVersion) []string {
names := make([]string, len(versions))
for i, v := range versions {
names[i] = v.Name
}
return names
}
This code snippet demonstrates how to programmatically list all CustomResourceDefinition resources. For each CRD, you can extract its spec.group, spec.names.plural, and spec.versions to construct the GroupVersionResource (GVR) needed by the Dynamic Client. The most commonly used GVR for a custom resource will be schema.GroupVersionResource{Group: crd.Spec.Group, Version: storageVersion, Resource: crd.Spec.Names.Plural} where storageVersion is the storage: true version.
Watching Custom Resources (CRs)
The core functionality we aim to master is watching CRs. The Dynamic Client's Watch method allows you to receive notifications whenever a custom resource matching your criteria is added, updated, or deleted. This is fundamental for building reactive systems, such as custom controllers or monitoring agents.
The process typically involves:

1. Defining the GVR: You need the schema.GroupVersionResource (GVR) for the specific custom resource you want to watch.
2. Getting the ResourceInterface: Use dynamicClient.Resource(gvr) to get a ResourceInterface.
3. Initiating the Watch: Call the Watch method on the ResourceInterface with appropriate metav1.ListOptions.
4. Processing Events: Iterate over the ResultChan() of the returned watch.Interface to receive and process events.
Let's illustrate this with a full example. For this, we'll assume a dummy CRD exists in your cluster. If you don't have one, you can create a simple one like this:
# my-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: myresources.stable.example.com
spec:
group: stable.example.com
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
message:
type: string
description: A custom message
replicaCount:
type: integer
description: Number of replicas
minimum: 1
required: ["message"]
scope: Namespaced
names:
plural: myresources
singular: myresource
kind: MyResource
shortNames:
- mr
---
# my-resource-instance.yaml
apiVersion: stable.example.com/v1
kind: MyResource
metadata:
name: my-first-resource
namespace: default
spec:
message: "Hello from my custom resource!"
replicaCount: 3
Apply these to your cluster: kubectl apply -f my-crd.yaml and kubectl apply -f my-resource-instance.yaml.
Now, let's write the Go code to watch instances of MyResource:
package main
import (
"context"
"fmt"
"os"
"os/signal"
"syscall"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
)
func main() {
config, err := getKubeConfig() // Re-use the function from above
if err != nil {
panic(err.Error())
}
dynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
panic(err.Error())
}
// Define the GVR for MyResource
myResourceGVR := schema.GroupVersionResource{
Group: "stable.example.com",
Version: "v1",
Resource: "myresources", // Plural name from CRD
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Set up signal handling to gracefully stop watching
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
fmt.Printf("Starting watch for %s/%s...\n", myResourceGVR.Group, myResourceGVR.Resource)
watchFunc := func(ctx context.Context) {
// Start watching for MyResource objects in the "default" namespace
// Use metav1.NamespaceAll to watch across all namespaces
watcher, err := dynamicClient.Resource(myResourceGVR).Namespace("default").Watch(ctx, metav1.ListOptions{})
if err != nil {
fmt.Fprintf(os.Stderr, "Error starting watch: %v\n", err)
return
}
defer watcher.Stop() // Ensure the watcher is stopped when this function exits
for {
select {
case event, ok := <-watcher.ResultChan():
if !ok {
fmt.Println("Watcher channel closed, reconnecting...")
return // Exit to allow re-initialization of watch
}
// The event.Object delivered by the dynamic client is an
// *unstructured.Unstructured; assert its type before use.
unstructuredObj, ok := event.Object.(*unstructured.Unstructured)
if !ok {
fmt.Fprintf(os.Stderr, "unexpected object type: %T\n", event.Object)
continue
}
switch event.Type {
case "ADDED":
fmt.Printf("[ADDED] %s/%s (UID: %s)\n", unstructuredObj.GetNamespace(), unstructuredObj.GetName(), unstructuredObj.GetUID())
// Access spec fields via the unstructured helpers, e.g.:
// message, _, _ := unstructured.NestedString(unstructuredObj.Object, "spec", "message")
// fmt.Printf("  Message: %s\n", message)
case "MODIFIED":
fmt.Printf("[MODIFIED] %s/%s (UID: %s)\n", unstructuredObj.GetNamespace(), unstructuredObj.GetName(), unstructuredObj.GetUID())
// You'd typically get the full object to compare old/new state
case "DELETED":
fmt.Printf("[DELETED] %s/%s (UID: %s)\n", unstructuredObj.GetNamespace(), unstructuredObj.GetName(), unstructuredObj.GetUID())
case "ERROR":
fmt.Fprintf(os.Stderr, "[ERROR] Watch error: %v\n", event.Object)
default:
fmt.Printf("[UNKNOWN EVENT] Type: %s, Object: %+v\n", event.Type, unstructuredObj.GetName())
}
case <-ctx.Done():
fmt.Println("Context cancelled, stopping watch.")
return
}
}
}
// Run watch in a goroutine and handle potential reconnections
go func() {
for {
select {
case <-ctx.Done():
return
default:
watchFunc(ctx)
// If watchFunc returns (due to channel closure/error),
// wait a bit before attempting to reconnect.
time.Sleep(5 * time.Second)
}
}
}()
<-sigChan // Wait for an interrupt signal
fmt.Println("Shutting down...")
cancel() // Cancel the context to stop the goroutine
}
When you run this code, it will connect to your cluster and start watching for MyResource objects. Try creating, updating, and deleting MyResource instances using kubectl, and you will see the watcher logging these events.
# Example actions while the Go program is running:
kubectl apply -f my-resource-instance.yaml # If not already applied
kubectl get mr
kubectl patch mr my-first-resource -p '{"spec":{"replicaCount":5}}' --type=merge
kubectl delete mr my-first-resource
This dynamic watching capability is the cornerstone of building advanced Kubernetes tooling. It allows you to create generic systems that react to any custom resource without being tied to specific Go types, making your applications more resilient and adaptable to evolving cluster environments.
Building a Generic CRD Watcher
The previous section demonstrated watching a single, pre-defined CRD. However, the true power of the Dynamic Client lies in its ability to build generic watchers that can monitor any CRD without hardcoding their GroupVersionResource (GVR). This is essential for tools that need to observe the entire Kubernetes landscape or adapt to dynamically deployed custom resources.
A generic CRD watcher needs to:
1. Discover all CRDs: list all CustomResourceDefinition objects in the cluster.
2. Extract GVRs: for each discovered CRD, determine the correct GVR (group, version, plural resource name) to use with the Dynamic Client.
3. Start concurrent watches: initiate a separate watch process for each relevant GVR.
4. Handle events generically: process events from all watches, typically logging each object's metadata or full content.
5. Manage lifecycle: gracefully start, stop, and if necessary reconnect watches.
Let's construct a comprehensive example of such a generic watcher.
Step-by-Step Code Example for a Generic CRD Watcher
This example will integrate CRD discovery, GVR construction, and concurrent dynamic watching.
package main
import (
"context"
"flag"
"fmt"
"log"
"os"
"os/signal"
"path/filepath"
"sync"
"syscall"
"time"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)
// getKubeConfig builds and returns a Kubernetes rest.Config
func getKubeConfig() (*rest.Config, error) {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
return nil, fmt.Errorf("error building kubeconfig: %w", err)
}
return config, nil
}
// WatchConfig defines the configuration for a single CRD watcher
type WatchConfig struct {
GVR schema.GroupVersionResource
Namespace string
}
// startDynamicWatch starts a watch for a specific GVR and processes events
func startDynamicWatch(ctx context.Context, wg *sync.WaitGroup, dynamicClient dynamic.Interface, config WatchConfig) {
defer wg.Done()
log.Printf("Starting watch for GVR: %s/%s (Namespace: %s)\n", config.GVR.Group, config.GVR.Resource, config.Namespace)
for {
select {
case <-ctx.Done():
log.Printf("Stopping watch for GVR: %s/%s (Namespace: %s) due to context cancellation.\n", config.GVR.Group, config.GVR.Resource, config.Namespace)
return
default:
// Ensure the watch connection is resilient
watcher, err := dynamicClient.Resource(config.GVR).Namespace(config.Namespace).Watch(ctx, metav1.ListOptions{})
if err != nil {
log.Printf("Error starting watch for %s/%s: %v. Retrying in 5 seconds...\n", config.GVR.Group, config.GVR.Resource, err)
time.Sleep(5 * time.Second)
continue
}
log.Printf("Watch established for %s/%s.\n", config.GVR.Group, config.GVR.Resource)
for event := range watcher.ResultChan() {
// Check if context is cancelled while processing events
select {
case <-ctx.Done():
watcher.Stop() // Stop the underlying watch
return
default:
// Continue processing
}
if event.Object == nil {
log.Printf("Received nil object for event type %s on %s/%s. Skipping.\n", event.Type, config.GVR.Group, config.GVR.Resource)
continue
}
// The event.Object is an unstructured.Unstructured object
obj, ok := event.Object.(*unstructured.Unstructured)
if !ok {
log.Printf("Failed to cast event object to *unstructured.Unstructured for %s/%s. Type: %T, Value: %+v\n", config.GVR.Group, config.GVR.Resource, event.Object, event.Object)
continue
}
// Log basic metadata for the event
log.Printf("[%s] %s %s/%s (Kind: %s, UID: %s)\n",
event.Type,
config.GVR.Group,
obj.GetNamespace(),
obj.GetName(),
obj.GetKind(),
obj.GetUID(),
)
// Example of accessing a field from the unstructured object:
// If you know a field path (e.g., spec.message for MyResource), you can retrieve it.
// message, found, err := unstructured.NestedString(obj.Object, "spec", "message")
// if found && err == nil {
// log.Printf(" Message: %s\n", message)
// }
// You can add more detailed processing or send this event to a channel for a central event handler.
}
watcher.Stop() // Watcher channel closed, try to reconnect
log.Printf("Watch channel for %s/%s closed, attempting to reconnect...\n", config.GVR.Group, config.GVR.Resource)
time.Sleep(2 * time.Second) // Brief pause before reconnecting
}
}
}
func main() {
config, err := getKubeConfig()
if err != nil {
log.Fatalf("Failed to get Kubernetes config: %v", err)
}
// CRDs are served by the apiextensions.k8s.io group, which has its own
// clientset; the core kubernetes.Clientset does not expose ApiextensionsV1().
apiextClient, err := apiextensionsclientset.NewForConfig(config)
if err != nil {
log.Fatalf("Failed to create apiextensions clientset: %v", err)
}
dynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
log.Fatalf("Failed to create Dynamic Client: %v", err)
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// --- 1. Set up signal handling for graceful shutdown ---
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-sigChan
log.Println("Received shutdown signal, initiating graceful shutdown...")
cancel() // Cancel the main context
}()
// --- 2. Discover all CRDs in the cluster ---
log.Println("Discovering CustomResourceDefinitions...")
crdList, err := apiextClient.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
if err != nil {
log.Fatalf("Failed to list CRDs: %v", err)
}
var watchConfigs []WatchConfig
for _, crd := range crdList.Items {
// Only consider CRDs that are established and have served/stored versions
if !isCRDEstablished(&crd) {
log.Printf("Skipping CRD %s: not established or no served/stored versions.\n", crd.Name)
continue
}
// Iterate through versions to find the storage version (or the latest served if no storage)
var targetVersion *apiextensionsv1.CustomResourceDefinitionVersion
for i := range crd.Spec.Versions {
version := &crd.Spec.Versions[i]
if version.Storage { // Prefer storage version
targetVersion = version
break
}
if version.Served { // Fallback to any served version if no storage
targetVersion = version
}
}
if targetVersion == nil {
log.Printf("CRD %s has no served or stored versions, skipping.\n", crd.Name)
continue
}
gvr := schema.GroupVersionResource{
Group: crd.Spec.Group,
Version: targetVersion.Name,
Resource: crd.Spec.Names.Plural,
}
// Determine scope. Note that metav1.NamespaceAll is the empty string, so both
// branches pass "" to .Namespace(): the dynamic client interprets that as
// "all namespaces" for namespaced resources and simply omits the namespace
// path segment for cluster-scoped ones.
namespace := metav1.NamespaceAll // Watch all namespaces for namespaced CRDs
if crd.Spec.Scope == apiextensionsv1.ClusterScoped {
namespace = "" // For cluster-scoped resources, no namespace is specified
}
watchConfigs = append(watchConfigs, WatchConfig{GVR: gvr, Namespace: namespace})
log.Printf("Prepared to watch %s/%s (Kind: %s, Scope: %s, Version: %s)\n",
gvr.Group, gvr.Resource, crd.Spec.Names.Kind, crd.Spec.Scope, gvr.Version)
}
if len(watchConfigs) == 0 {
log.Println("No suitable CRDs found to watch.")
return
}
// --- 3. Start concurrent watchers for each discovered CRD ---
var wg sync.WaitGroup
for _, cfg := range watchConfigs {
wg.Add(1)
go startDynamicWatch(ctx, &wg, dynamicClient, cfg)
}
log.Printf("Started %d concurrent CRD watchers.\n", len(watchConfigs))
wg.Wait() // Wait for all goroutines to finish (after context cancellation)
log.Println("All CRD watchers have stopped. Exiting.")
}
// isCRDEstablished checks if a CRD is ready to be used
func isCRDEstablished(crd *apiextensionsv1.CustomResourceDefinition) bool {
foundEstablished := false
for _, cond := range crd.Status.Conditions {
if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
foundEstablished = true
break
}
}
if !foundEstablished {
return false
}
// Also check if there's at least one served version
hasServedVersion := false
for _, v := range crd.Spec.Versions {
if v.Served {
hasServedVersion = true
break
}
}
return hasServedVersion
}
This comprehensive example demonstrates:
- Robust Configuration Loading: Using getKubeConfig for a consistent cluster connection.
- CRD Discovery: It lists all CustomResourceDefinition resources through the apiextensions.k8s.io/v1 API.
- GVR Construction: For each discovered CRD, it constructs the appropriate schema.GroupVersionResource, preferring the storage version and falling back to any served version. It also determines whether the resource is namespaced or cluster-scoped to set the correct Namespace parameter for the Dynamic Client.
- Concurrent Watching: It launches a separate goroutine (startDynamicWatch) for each CRD, allowing parallel monitoring. This is crucial for performance and responsiveness in a large cluster with many CRDs.
- Resilient Watching: Each startDynamicWatch goroutine includes a loop that automatically re-establishes the watch if the connection drops (e.g., due to an API server restart or a network issue), making the watcher robust.
- Generic Event Handling: It processes ADDED, MODIFIED, and DELETED events and logs the metadata of each unstructured.Unstructured object, showing how to interact with dynamically retrieved resources.
- Graceful Shutdown: It uses a context.Context and os.Signal handling to ensure all watcher goroutines shut down cleanly when the program receives an interrupt signal. A sync.WaitGroup ensures main waits for all goroutines to complete.
- CRD Status Check: The isCRDEstablished helper ensures we only try to watch CRDs that the API server has actually established.
To test this:
1. Run the Go program.
2. In a separate terminal, apply my-crd.yaml and my-resource-instance.yaml (if you haven't already).
3. Create, update, and delete instances of MyResource, or of any other CRDs in your cluster (such as those from Istio or the Prometheus Operator). The Go program will log events for all of them.
This generic CRD watcher serves as a powerful foundation for building advanced Kubernetes automation, monitoring, and management tools, demonstrating the immense flexibility and utility of the Dynamic Client.
The Role of Schemas and Validation in CRDs
While the Dynamic Client allows us to interact with custom resources without compile-time knowledge of their structure, the integrity and predictability of these resources largely depend on the schemas defined within their CRDs. Understanding the OpenAPI v3 schema and its role in validation is crucial for both defining robust CRDs and for processing the unstructured.Unstructured objects received by the Dynamic Client.
OpenAPI v3 Schemas in CRDs
The spec.versions[].schema.openAPIV3Schema field within a CRD is where you define the validation rules for your custom resource using the OpenAPI v3 specification (specifically, a subset of JSON Schema Draft 7). This schema is paramount for ensuring data consistency and correctness.
When a user or an application attempts to create or update a custom resource, the Kubernetes API server intercepts the request. Before storing the resource in etcd, it validates the incoming object against the OpenAPI v3 schema defined in the CRD. If the object does not conform to the schema (e.g., a required field is missing, a string field receives an integer, or a value is outside a specified range), the API server rejects the request with a validation error.
Key benefits of OpenAPI v3 schemas in CRDs:
- Data Integrity: Prevents invalid or malformed data from entering the cluster state.
- API Consistency: Enforces a consistent structure for custom resources, making them predictable for consuming applications and controllers.
- Documentation: The schema effectively serves as documentation for your custom resource's structure, which can be automatically consumed by tools.
- kubectl explain: kubectl explain <CRD_KIND> leverages this schema to provide helpful documentation directly from the command line, just like it does for built-in resources.
- IDE Support: Advanced IDEs can use these schemas to provide autocompletion and validation for YAML files.
Example Schema Snippets:
# ... inside spec.versions[].schema.openAPIV3Schema
type: object
properties:
spec:
type: object
properties:
message:
type: string
description: A custom message
maxLength: 256
replicaCount:
type: integer
description: Number of replicas
minimum: 1
maximum: 10
configMapRef:
type: string
pattern: "^[a-z0-9]([-a-z0-9]*[a-z0-9])?$" # Kubernetes name validation pattern
required: ["message", "replicaCount"] # Mark fields as mandatory
status: # Define the status subresource schema
type: object
properties:
observedReplicas:
type: integer
phase:
type: string
enum: ["Pending", "Running", "Failed"] # Enumerate possible values
This schema defines data types, descriptions, length constraints (maxLength), numerical ranges (minimum, maximum), string patterns (pattern), and required fields.
How Dynamic Clients Interact with Schemas
The Dynamic Client itself does not directly perform schema validation. Its purpose is to provide a raw, untyped interface to the Kubernetes API. When you receive an unstructured.Unstructured object from a Dynamic Client watch or get operation, that object has already passed through the API server's validation layer (if the CRD defined a schema). Therefore, you can generally trust that the structure of the data conforms to the CRD's OpenAPI schema.
However, this doesn't mean schemas are irrelevant when using the Dynamic Client. On the contrary, they are crucial for interpreting the unstructured.Unstructured objects:
- Field Access: When you need to access specific fields within an unstructured.Unstructured object (e.g., spec.message or status.phase), you rely on your understanding of the CRD's schema to know the correct field paths and their expected data types. The unstructured.NestedString, unstructured.NestedInt64, unstructured.NestedSlice, etc., helper functions are used for this, but you supply the schema path.
- Type Assertions: Since unstructured.Unstructured essentially stores data as map[string]interface{}, retrieving values often involves type assertions (e.g., value.(string), value.(int64) — JSON numbers decode to int64 or float64, never plain int). The schema tells you what type to expect.
- Creating/Updating CRs: If your Dynamic Client code needs to create or update custom resources, you must construct the unstructured.Unstructured object according to the CRD's schema. Failing to do so will result in validation errors from the API server.
Importance of Validation for Custom Resources
Validation is not just a "nice to have" feature; it's fundamental for building robust, self-healing, and predictable Kubernetes systems.
- Preventing Errors: It catches user errors or programmatic mistakes early, preventing incorrect configurations from propagating through the system.
- Controller Stability: Controllers that rely on specific fields being present or having certain values can operate reliably, knowing that the underlying data conforms to expectations. Without validation, controllers would need to implement extensive internal validation logic, leading to duplicated effort and potential inconsistencies.
- Clear Contracts: Schemas define a clear contract between the CRD definition and anyone creating or consuming instances of that CRD. This promotes interoperability and easier integration.
- Security: By restricting the types and values of fields, schemas can help prevent certain classes of misconfiguration or malicious input that might exploit weaknesses in a controller. For more advanced security validation, admission webhooks can complement OpenAPI schemas with custom, dynamic logic.
In essence, while the Dynamic Client provides the "how" to interact with CRDs generically, the OpenAPI v3 schema provides the "what" – the blueprint that defines the structure and constraints of your custom resources. Both are indispensable components in a well-architected Kubernetes extension strategy. When designing your CRDs, invest time in a comprehensive and accurate schema; it will pay dividends in stability and ease of use for your custom resources.
Advanced Topics and Best Practices for Dynamic Client Usage
Mastering the Dynamic Client goes beyond basic watching. To build truly robust and efficient applications, it's essential to understand advanced concepts and follow best practices.
Resource Versions and Their Importance in Watching
Every Kubernetes object has a metadata.resourceVersion field. This string identifies a specific version of the resource within the Kubernetes API server's etcd store. Resource versions are crucial for:
- Watch Consistency: When you start a watch operation, you can specify ResourceVersion in metav1.ListOptions. If provided, the API server delivers events that occur after that specific resource version. If omitted, the watch starts from the most recent state and streams only subsequent changes; with "0", the server may start from any point in its watch cache and can deliver synthetic ADDED events for objects that already exist.
- Preventing Stale Reads: The ResourceVersion from a list response can be used to initiate a subsequent watch, ensuring that you don't miss any events that occur between the list and the start of the watch. This is the foundation of the "list and watch" pattern.
- Optimistic Concurrency: For update operations, you can include the ResourceVersion of the object you fetched. If another update happens concurrently and changes the ResourceVersion, your update will fail, preventing you from overwriting newer changes.
When building a generic watcher, handling ResourceVersion is vital for reliability. If a watch stream breaks (e.g., due to network issues or API server restart), you should try to re-establish the watch using the ResourceVersion of the last successfully processed event. This ensures that you don't miss any events during the brief disconnection period. The client-go informers handle this automatically, but with raw dynamic watches, you need to manage it yourself.
Resynchronization Periods
The Kubernetes API server watch mechanism is generally reliable, but watches can occasionally drop or experience issues that lead to missed events. To mitigate this, a common pattern in controllers is a "resynchronization period."
A resync period means that the controller will periodically re-list all relevant resources, even if no watch events have occurred. This allows the controller to reconcile its internal state with the actual cluster state, catching any discrepancies or missed events that might have slipped through the watch stream.
For a generic dynamic watcher, you might implement resync logic as follows:
1. Initial List: Perform a full list of resources for a given GVR.
2. Start Watch: Begin watching from the ResourceVersion obtained from the list.
3. Periodic Re-list: Every X minutes/hours, stop the current watch, perform a new full list, and restart the watch from the new ResourceVersion.
While this adds overhead, it significantly increases the robustness of your controller against subtle watch failures.
Informer Pattern vs. Raw Watch
For production-grade Kubernetes controllers, the client-go Informer pattern (found in k8s.io/client-go/informers) is almost always preferred over raw dynamicClient.Watch() calls.
| Feature | Raw Dynamic Watch (dynamicClient.Resource().Watch()) | Informer Pattern (dynamicinformer.NewFilteredDynamicSharedInformerFactory()) |
|---|---|---|
| Simplicity | Easier to grasp for simple, short-lived watches. | More complex initial setup. |
| Event Handling | Direct event channel; manual processing, error handling, and reconnects. | Event handlers (Add/Update/Delete) as callbacks; automatic error handling and reconnects. |
| Local Cache | No local cache; every Get/List goes to the API server. | Maintains a thread-safe, eventually consistent local cache (Lister). |
| API Server Load | Can hammer the API server with Get/List calls. | Reduces API server load by serving Get/List from cache. |
| Resilience | Requires manual handling of ResourceVersion and reconnections. | Handles ResourceVersion, initial listing, and reconnections automatically. |
| Resource Costs | Higher API server resource usage. | Lower API server usage; higher client-side memory for the cache. |
| Use Cases | Simple CLI tools, one-off scripts, specific debugging. | Production-grade controllers, operators, long-running applications. |
The Dynamic Informer (dynamicinformer.NewFilteredDynamicSharedInformerFactory) combines the power of dynamic interaction with the robustness and efficiency of the informer pattern. It provides a shared informer that can watch multiple GVRs, maintain a local cache, and notify registered event handlers.
If you are building a generic controller that needs to run continuously and react to events across many CRDs, migrating from a raw dynamic watch to a dynamic informer is a critical best practice for production readiness.
Performance Considerations When Watching Many CRDs
Watching a large number of CRDs, especially in a cluster with high churn, can have performance implications:
- API Server Load: Each watch stream consumes resources on the API server. While watches are efficient, hundreds or thousands of concurrent watches can put a strain on the API server. Using informers greatly mitigates this by allowing multiple components to share a single watch stream and local cache.
- Network Bandwidth: High event rates can consume significant network bandwidth between your watcher and the API server.
- Client-Side Memory/CPU: Processing many events and potentially maintaining a local cache (with informers) requires adequate client-side memory and CPU. Deserializing unstructured.Unstructured objects and iterating through them can also be CPU-intensive if not optimized.
Optimizations:
- Field Selectors and Label Selectors: Use metav1.ListOptions.FieldSelector and metav1.ListOptions.LabelSelector to filter events at the API server level. This reduces the amount of data transferred and processed; for example, labelSelector: "app=my-app" will only watch resources with that label.
- Informer Pattern: As discussed, this is the primary recommendation for performance and reliability.
- Rate Limiting: If your event processing logic is heavy, consider rate-limiting how quickly you process events or how often you reconcile.
- Efficient Processing: When dealing with unstructured.Unstructured objects, use helper functions like unstructured.NestedString for safe and efficient field access, avoiding excessive manual type assertions where they aren't strictly necessary.
Security Implications (RBAC for CRDs)
Any application interacting with the Kubernetes API, including those using the Dynamic Client, must have appropriate Role-Based Access Control (RBAC) permissions. For CRDs, this means:
- CustomResourceDefinition Permissions: To list CRDs themselves, your application needs get, list, and watch permissions on the customresourcedefinitions resource within the apiextensions.k8s.io API group.
- Custom Resource Permissions: To watch or manipulate instances of a specific CRD, your application needs get, list, watch, create, update, patch, and delete permissions (as required) on the custom resource itself. This means specifying the group and resource (plural name) of the CRD in your ClusterRole or Role.
Example ClusterRole for a generic CRD watcher:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: generic-crd-watcher
rules:
- apiGroups: ["apiextensions.k8s.io"] # To list CRDs themselves
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch"]
- apiGroups: ["*"] # To watch any custom resource
resources: ["*"]
verbs: ["get", "list", "watch"]
Important Note on apiGroups: ["*"], resources: ["*"]: This grants extremely broad permissions. In a production environment, you should strive for the principle of least privilege. If your watcher only cares about CRDs from a specific group (e.g., stable.example.com), limit apiGroups accordingly. If it only needs to watch specific resources, enumerate them. The * is suitable for truly generic, audit-like tools, but be cautious with its use.
Always create a ServiceAccount for your application, bind this ClusterRole to it using a ClusterRoleBinding, and then assign this ServiceAccount to the Pod running your dynamic client application.
Testing Your Dynamic Client Code
Testing code that interacts with the Kubernetes API can be challenging. For Dynamic Client code:
- Unit Tests: Test the business logic that processes unstructured.Unstructured objects in isolation by constructing mock unstructured.Unstructured objects or map[string]interface{} values.
- Integration Tests (using envtest): For more comprehensive testing without a full cluster, sigs.k8s.io/controller-runtime/pkg/envtest provides a lightweight Kubernetes API server and etcd instance that you can run locally. This lets you deploy your CRDs, create CR instances, and run your dynamic client watcher against the local API server, simulating a real cluster environment. It is invaluable for testing watch reconnections, schema validation, and event processing.
- End-to-End (E2E) Tests: Deploy your application to a real (or staging) Kubernetes cluster and verify its behavior with actual CRD deployments and resource modifications.
By carefully considering these advanced topics and integrating best practices, you can leverage the Dynamic Client to build highly reliable, performant, and secure Kubernetes applications that interact with the ever-evolving landscape of custom resources.
The Broader Ecosystem: API Gateways and Management
While dynamic clients enable us to programmatically interact with Kubernetes resources, including CRDs, at a low level, managing the exposure and lifecycle of the services these resources represent often requires a more comprehensive API management solution. This is where the concept of an API gateway becomes invaluable.
Kubernetes, through CRDs and custom controllers, allows us to define and manage application infrastructure in a declarative way. A custom resource like MyServiceInstance could, for example, represent a deployed microservice. However, simply having this resource defined in Kubernetes doesn't automatically mean it's securely exposed to external consumers, or that its traffic is managed, monitored, and discoverable. This gap is precisely what an API gateway bridges.
An API gateway acts as a single entry point for all API requests, sitting in front of a collection of backend services. It handles tasks such as:
- Traffic Management: Routing requests to appropriate backend services, load balancing, rate limiting, and circuit breaking.
- Security: Authentication, authorization, API key management, and sometimes even Web Application Firewall (WAF) capabilities.
- Policy Enforcement: Applying custom policies to API calls.
- Monitoring and Analytics: Collecting metrics, logging requests, and providing insights into API usage and performance.
- Protocol Transformation: Translating requests between different protocols (e.g., HTTP to gRPC).
- Developer Portal: Offering documentation, examples, and self-service access to APIs for consumers.
In the context of Kubernetes and CRDs, an API gateway can play a crucial role. Imagine your custom controllers manage the deployment of various microservices or even AI models, each potentially represented by its own CRD. To expose these services securely and efficiently, you'd integrate them with an API gateway. The gateway could configure itself dynamically based on the presence of these custom resources, perhaps via an admission webhook that mutates custom resources to register them with the gateway automatically, or via a dedicated controller that watches specific CRDs and updates the gateway configuration.
This creates a powerful synergy: Kubernetes CRDs define and orchestrate the backend services, while an API gateway provides the unified, managed, and secure frontend for consuming those services. This is particularly relevant when dealing with modern, composite applications that mix traditional REST services with newer paradigms like AI inference models.
For instance, an open-source AI gateway and API management platform like APIPark can streamline the integration and management of diverse APIs, whether they are traditional REST services or cutting-edge AI models. APIPark offers robust features for API lifecycle management, traffic control, and secure access, complementing the backend management of resources orchestrated by Kubernetes CRDs and custom controllers. It allows organizations to quickly integrate over 100 AI models and present them through a unified API format, simplifying AI invocation. Furthermore, it enables users to encapsulate prompts into REST APIs, effectively turning AI models into easily consumable services. This is a powerful abstraction layer, allowing operations teams to manage API exposure with granular control, regardless of whether the underlying implementation is a traditional microservice or a dynamically managed AI model.
The comprehensive capabilities of a platform like APIPark, including end-to-end API lifecycle management, team-based API sharing, independent tenant access permissions, and robust performance rivaling Nginx, demonstrate how a sophisticated gateway can transform raw Kubernetes services into enterprise-grade APIs. Its detailed API call logging and powerful data analysis features further enhance observability and proactive maintenance, ensuring stability and security. By deploying APIPark, which can be done with a single command, organizations can bridge the gap between their dynamic Kubernetes backends and the external consumption of their APIs, creating a cohesive and powerful API economy.
In this integrated model, the API gateway becomes the control point for external access, while dynamic clients and CRDs remain the internal control plane for managing the application's components within the Kubernetes cluster. This layered approach allows for granular control at each stage, ensuring both operational efficiency and secure, performant API delivery.
Comprehensive Example: Dynamic CRD Monitoring with Enhanced Logging
To solidify our understanding and provide a truly actionable artifact, let's combine all the concepts discussed into a single, comprehensive Go program. This example will build upon the generic watcher, adding more detailed logging, demonstrating how to extract specific fields from unstructured.Unstructured objects, and incorporating best practices for resilience and graceful shutdown.
For this example, we will continue to use the MyResource CRD defined earlier:
# my-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: myresources.stable.example.com
spec:
group: stable.example.com
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
message:
type: string
description: A custom message
replicaCount:
type: integer
description: Number of replicas
minimum: 1
required: ["message"]
scope: Namespaced
names:
plural: myresources
singular: myresource
kind: MyResource
shortNames:
- mr
And a sample instance:
# my-resource-instance.yaml
apiVersion: stable.example.com/v1
kind: MyResource
metadata:
  name: example-myresource
  namespace: default
  labels:
    app: example
spec:
  message: "Initial message for example-myresource."
  replicaCount: 1
Apply these to your cluster: `kubectl apply -f my-crd.yaml` and `kubectl apply -f my-resource-instance.yaml`.
Now, here's the full Go program:
package main

import (
    "context"
    "flag"
    "fmt"
    "log"
    "os"
    "os/signal"
    "path/filepath"
    "sync"
    "syscall"
    "time"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

// Global logger for consistent output.
var logger = log.New(os.Stdout, "[CRD-WATCHER] ", log.LstdFlags|log.Lshortfile)

// WatchConfig defines the configuration for a single CRD watcher.
type WatchConfig struct {
    GVR       schema.GroupVersionResource
    Namespace string
    Kind      string // Included for clearer logging.
}

// getKubeConfig builds and returns a Kubernetes rest.Config, preferring
// in-cluster configuration and falling back to a kubeconfig file.
func getKubeConfig() (*rest.Config, error) {
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // Try in-cluster config first, then fall back to kubeconfig.
    config, err := rest.InClusterConfig()
    if err == nil {
        logger.Println("Using in-cluster Kubernetes config.")
        return config, nil
    }
    logger.Printf("In-cluster config unavailable (%v), falling back to kubeconfig: %s\n", err, *kubeconfig)

    config, err = clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        return nil, fmt.Errorf("error building kubeconfig: %w", err)
    }
    logger.Println("Using external kubeconfig.")
    return config, nil
}

// isCRDEstablished reports whether a CRD has the Established condition set
// and serves at least one version, i.e. it is ready to be watched.
func isCRDEstablished(crd *apiextensionsv1.CustomResourceDefinition) bool {
    established := false
    for _, cond := range crd.Status.Conditions {
        if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
            established = true
            break
        }
    }
    if !established {
        return false
    }
    for _, v := range crd.Spec.Versions {
        if v.Served {
            return true
        }
    }
    return false
}

// startDynamicWatch starts a watch for a specific GVR and processes events,
// automatically re-establishing the watch if the stream drops.
func startDynamicWatch(ctx context.Context, wg *sync.WaitGroup, dynamicClient dynamic.Interface, config WatchConfig) {
    defer wg.Done()
    logger.Printf("Initiating watch for GVR: %s/%s (Kind: %s, Namespace: %s)\n", config.GVR.Group, config.GVR.Resource, config.Kind, config.Namespace)

    // Track the last resourceVersion so a broken watch can resume from a
    // consistent point instead of missing or replaying events.
    var lastResourceVersion string
    for {
        select {
        case <-ctx.Done():
            logger.Printf("Stopping watch for %s/%s due to context cancellation.\n", config.GVR.Group, config.GVR.Resource)
            return
        default:
        }

        if lastResourceVersion == "" {
            // For the initial connection, perform a List to capture the
            // current state and obtain a resourceVersion to watch from.
            list, err := dynamicClient.Resource(config.GVR).Namespace(config.Namespace).List(ctx, metav1.ListOptions{})
            if err != nil {
                logger.Printf("Error during initial list for %s/%s: %v. Retrying in 5s...\n", config.GVR.Group, config.GVR.Resource, err)
                time.Sleep(5 * time.Second)
                continue
            }
            lastResourceVersion = list.GetResourceVersion()
            logger.Printf("Initial list for %s/%s successful. Starting watch from resourceVersion: %s. Found %d existing resources.\n",
                config.GVR.Group, config.GVR.Resource, lastResourceVersion, len(list.Items))
        } else {
            logger.Printf("Attempting to resume watch for %s/%s from resourceVersion: %s\n", config.GVR.Group, config.GVR.Resource, lastResourceVersion)
        }

        // Note: resourceVersionMatch is only valid on list requests, so only
        // the resourceVersion itself is set here. Bookmarks let the server
        // periodically refresh our resourceVersion without sending objects.
        listOptions := metav1.ListOptions{
            ResourceVersion:     lastResourceVersion,
            AllowWatchBookmarks: true,
        }
        watcher, err := dynamicClient.Resource(config.GVR).Namespace(config.Namespace).Watch(ctx, listOptions)
        if err != nil {
            logger.Printf("Error starting watch for %s/%s: %v. Re-listing in 5 seconds...\n", config.GVR.Group, config.GVR.Resource, err)
            lastResourceVersion = "" // The stored version may be too old (410 Gone); fall back to a fresh list.
            time.Sleep(5 * time.Second)
            continue
        }
        logger.Printf("Watch established for %s/%s (Kind: %s).\n", config.GVR.Group, config.GVR.Resource, config.Kind)

        for event := range watcher.ResultChan() {
            // Stop promptly if the context is cancelled mid-stream.
            select {
            case <-ctx.Done():
                watcher.Stop()
                return
            default:
            }

            if event.Object == nil {
                logger.Printf("Received nil object for event type %s on %s/%s. Skipping.\n", event.Type, config.GVR.Group, config.GVR.Resource)
                continue
            }

            // The dynamic client delivers events as *unstructured.Unstructured.
            unstructuredObj, ok := event.Object.(*unstructured.Unstructured)
            if !ok {
                logger.Printf("Failed to cast event object to *unstructured.Unstructured for %s/%s. Type: %T\n", config.GVR.Group, config.GVR.Resource, event.Object)
                continue
            }
            lastResourceVersion = unstructuredObj.GetResourceVersion() // Remember where to resume from on reconnect.

            // Bookmark events carry only an updated resourceVersion.
            if event.Type == watch.Bookmark {
                continue
            }

            // Extract common metadata fields.
            name := unstructuredObj.GetName()
            namespace := unstructuredObj.GetNamespace()
            uid := unstructuredObj.GetUID()
            resourceVersion := unstructuredObj.GetResourceVersion()
            labels := unstructuredObj.GetLabels()

            logMsg := fmt.Sprintf("[%s] %s/%s (Kind: %s, GVR: %s/%s/%s, UID: %s, RV: %s, Labels: %v)",
                event.Type, namespace, name, config.Kind,
                config.GVR.Group, config.GVR.Version, config.GVR.Resource,
                uid, resourceVersion, labels)

            // For specific CRDs like MyResource, extract known spec fields.
            if config.Kind == "MyResource" {
                message, foundMessage, _ := unstructured.NestedString(unstructuredObj.Object, "spec", "message")
                replicaCount, foundReplicas, _ := unstructured.NestedInt64(unstructuredObj.Object, "spec", "replicaCount")
                if foundMessage && foundReplicas {
                    logMsg += fmt.Sprintf(", Spec.Message: '%s', Spec.ReplicaCount: %d", message, replicaCount)
                }
            }
            logger.Println(logMsg)

            // Optionally, log the full object for detailed inspection
            // (requires the encoding/json import):
            // if event.Type == watch.Added || event.Type == watch.Modified {
            //     if jsonBytes, err := json.MarshalIndent(unstructuredObj.Object, "", "  "); err == nil {
            //         logger.Printf("  Full Object (JSON):\n%s\n", string(jsonBytes))
            //     }
            // }
        }

        watcher.Stop() // Watch channel closed; pause briefly before reconnecting.
        logger.Printf("Watch channel for %s/%s closed, attempting to reconnect in 2s...\n", config.GVR.Group, config.GVR.Resource)
        time.Sleep(2 * time.Second)
    }
}

func main() {
    logger.Println("Starting Kubernetes Dynamic CRD Watcher...")

    config, err := getKubeConfig()
    if err != nil {
        logger.Fatalf("Failed to get Kubernetes config: %v", err)
    }
    // CRDs live in the apiextensions.k8s.io group, which is served by its own
    // typed clientset rather than the core kubernetes clientset.
    apiextClient, err := apiextensionsclientset.NewForConfig(config)
    if err != nil {
        logger.Fatalf("Failed to create apiextensions clientset: %v", err)
    }
    dynamicClient, err := dynamic.NewForConfig(config)
    if err != nil {
        logger.Fatalf("Failed to create Dynamic Client: %v", err)
    }

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    // --- 1. Set up signal handling for graceful shutdown ---
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
    go func() {
        <-sigChan
        logger.Println("Received shutdown signal, initiating graceful shutdown...")
        cancel() // Cancel the main context.
    }()

    // --- 2. Discover all CRDs in the cluster ---
    logger.Println("Discovering CustomResourceDefinitions...")
    crdList, err := apiextClient.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
    if err != nil {
        logger.Fatalf("Failed to list CRDs: %v", err)
    }

    var watchConfigs []WatchConfig
    for _, crd := range crdList.Items {
        if !isCRDEstablished(&crd) {
            logger.Printf("Skipping CRD %s: not established or no served versions.\n", crd.Name)
            continue
        }

        // Prefer the storage version; otherwise fall back to a served version.
        var targetVersion *apiextensionsv1.CustomResourceDefinitionVersion
        for i := range crd.Spec.Versions {
            version := &crd.Spec.Versions[i]
            if version.Storage {
                targetVersion = version
                break
            }
            if version.Served {
                targetVersion = version
            }
        }
        if targetVersion == nil {
            logger.Printf("CRD %s has no served or storage versions, skipping.\n", crd.Name)
            continue
        }

        gvr := schema.GroupVersionResource{
            Group:    crd.Spec.Group,
            Version:  targetVersion.Name,
            Resource: crd.Spec.Names.Plural,
        }
        // metav1.NamespaceAll is the empty string, which is also what the
        // dynamic client expects for cluster-scoped resources.
        namespace := metav1.NamespaceAll

        watchConfigs = append(watchConfigs, WatchConfig{GVR: gvr, Namespace: namespace, Kind: crd.Spec.Names.Kind})
        logger.Printf("Prepared to watch %s/%s (Kind: %s, Scope: %s, Version: %s)\n",
            gvr.Group, gvr.Resource, crd.Spec.Names.Kind, crd.Spec.Scope, gvr.Version)
    }

    if len(watchConfigs) == 0 {
        logger.Println("No suitable CRDs found to watch. Exiting.")
        return
    }

    // --- 3. Start concurrent watchers for each discovered CRD ---
    var wg sync.WaitGroup
    for _, cfg := range watchConfigs {
        wg.Add(1)
        go startDynamicWatch(ctx, &wg, dynamicClient, cfg)
    }
    logger.Printf("Started %d concurrent CRD watchers. Waiting for termination signal...\n", len(watchConfigs))

    wg.Wait() // Wait for all goroutines to finish (after context cancellation).
    logger.Println("All CRD watchers have stopped. Exiting gracefully.")
}
This extended example includes:
- Custom Logger: A global `logger` for more consistent and informative output, including file and line numbers.
- In-Cluster Fallback: The `getKubeConfig` function attempts to use in-cluster configuration first, which is standard for applications running inside Kubernetes, then falls back to a kubeconfig file.
- Persistent `lastResourceVersion`: Each `startDynamicWatch` goroutine maintains a `lastResourceVersion` so that if a watch connection breaks, it attempts to resume from the last known resource version, minimizing missed events.
- Initial List to Prime the Watch: On first connection, a `List` operation captures the current `ResourceVersion`, and the `Watch` then starts from that point, ensuring a comprehensive view from the beginning.
- Detailed Event Logging: The log message for each event is enriched with metadata (GVR, Kind, UID, Labels, ResourceVersion).
- Conditional Spec Field Extraction: For the `MyResource` kind, `unstructured.NestedString` and `unstructured.NestedInt64` safely extract the `spec.message` and `spec.replicaCount` fields, showing how you'd interact with the actual data within a custom resource.
- Robust Reconnection Logic: The `for` loop inside `startDynamicWatch`, together with `time.Sleep`, ensures that watches are automatically re-established if they drop.
To run this:
1. Save the Go code as `main.go`.
2. Run `go mod tidy` to ensure all dependencies are resolved.
3. Execute `go run . -kubeconfig=/path/to/your/kubeconfig` (omit `-kubeconfig` if running inside a cluster or if your kubeconfig is at `~/.kube/config`).
4. Create, update, and delete `MyResource` objects (or instances of any other CRDs in your cluster) and observe the detailed logs from your Go program.
This comprehensive example provides a powerful and resilient foundation for building applications that need to monitor the dynamic landscape of Custom Resource Definitions in Kubernetes, enabling deep integration and automation within your cloud-native environment.
Conclusion
The journey through Kubernetes Custom Resource Definitions and the Dynamic Client reveals a profound truth about the platform's power: its extensibility is boundless. CRDs empower us to tailor Kubernetes to the precise needs of our applications and operational workflows, transforming application-specific concerns into first-class citizens of the cluster API. This declarative approach, fundamental to Kubernetes, allows for the management of complex systems with unprecedented clarity and automation.
However, interacting with these custom extensions programmatically presents a unique set of challenges, especially when the specific structure of a custom resource isn't known at compile time. This is precisely where the Kubernetes Dynamic Client, from the client-go library, becomes an indispensable tool. By operating on generic unstructured.Unstructured objects, it liberates developers from the constraints of type-safe Go structs, offering the flexibility to observe and manipulate any Kubernetes resource—be it a built-in Pod or a bespoke CRD—with remarkable agility.
We've explored the foundational concepts of CRDs, their vital role in extending Kubernetes through Operators, and the crucial importance of OpenAPI v3 schemas for validation and data integrity. We then delved into the practicalities of setting up a Go environment, connecting to a Kubernetes cluster, and, most importantly, leveraging the Dynamic Client to discover CRDs, construct their GroupVersionResource (GVR), and initiate resilient, concurrent watch operations across multiple custom resource types.
From basic event processing to advanced considerations like resourceVersion management, watch reconnection strategies, and the performance implications of watching numerous CRDs, this guide has provided a holistic view. We've emphasized the robustness that comes from incorporating proper error handling, graceful shutdowns, and the strategic use of client-go informers for production-grade applications. Furthermore, we've positioned dynamic CRD interaction within the broader cloud-native ecosystem, highlighting how an API gateway and comprehensive API management platform—such as APIPark—can seamlessly extend the backend services managed by CRDs into securely exposed, governed, and discoverable APIs for external consumption. APIPark's ability to unify AI model integration, streamline API lifecycle management, and provide robust traffic control exemplifies how the control plane of Kubernetes (including CRDs) can be effectively translated into a powerful API economy.
Mastering the Dynamic Client is not merely about writing Go code; it's about unlocking a deeper level of control and observability within your Kubernetes clusters. It empowers you to build generic tools, powerful operators, and intelligent automation that can adapt to the ever-evolving landscape of your cloud-native environment. By embracing the dynamic nature of Kubernetes and leveraging these powerful client-go capabilities, you are well-equipped to architect more resilient, scalable, and intelligent systems that truly harness the full potential of the Kubernetes API. The future of cloud-native development hinges on such flexibility, and the Dynamic Client is a key enabler on this exciting frontier.
Frequently Asked Questions (FAQs)
1. What is the primary difference between a typed client and a dynamic client in Kubernetes client-go?
A typed client operates on specific Go structs (e.g., v1.Pod, appsv1.Deployment), offering compile-time type safety and direct field access. It's ideal when you know the exact structure of the Kubernetes resource at development time. A dynamic client, on the other hand, operates on unstructured.Unstructured objects, which are generic map[string]interface{} wrappers. It's used when the resource's type and structure are not known at compile time, making it perfect for interacting with arbitrary Custom Resource Definitions (CRDs) or building generic tools that inspect various resource types.
2. Why would I use a dynamic client instead of a typed client for watching CRDs?
You'd use a dynamic client for watching CRDs primarily when you need a generic solution. If you're building a tool that must watch any CRD deployed in a cluster, or if you don't want to generate Go structs for every possible CRD (which can be numerous), the dynamic client is the only practical approach. It allows your application to adapt to new or unknown custom resource types at runtime, providing unmatched flexibility for generic controllers, auditing tools, or backup solutions.
3. What is a GroupVersionResource (GVR) and why is it important for the dynamic client?
A GroupVersionResource (GVR) is a fundamental identifier for a specific resource type within the Kubernetes API. It combines the API Group (e.g., apps, stable.example.com), Version (e.g., v1, v1alpha1), and Resource (the plural name, e.g., deployments, myresources). The Dynamic Client requires a GVR to know which specific API endpoint to target when performing operations like List, Get, Watch, Create, Update, or Delete on resources. It's the dynamic client's way of addressing an API resource.
4. How do OpenAPI v3 schemas relate to the dynamic client and CRDs?
OpenAPI v3 schemas are embedded within CRD definitions to enforce validation rules for custom resources. When an object is created or updated, the Kubernetes API server validates it against this schema. While the dynamic client itself doesn't perform schema validation (it interacts with the API server after validation), the schema is crucial for interpreting the unstructured.Unstructured objects received. Developers use their knowledge of the CRD's OpenAPI schema to correctly navigate and extract specific fields from the generic unstructured.Unstructured data using helper functions like unstructured.NestedString, ensuring they access the correct data types.
5. When should I consider using the informer pattern (specifically `dynamicinformer`) instead of raw `Watch()` calls on the dynamic client?
For production-grade, long-running Kubernetes controllers, the informer pattern (via `dynamicinformer`) is almost always preferred over raw `Watch()` calls. Informers provide a robust, resilient, and efficient way to watch resources: they automatically handle reconnections and resource-version management, and they maintain a local, thread-safe cache of resources. This significantly reduces load on the Kubernetes API server and simplifies client-side logic for consistency and fault tolerance, making your controller more stable and scalable than one that manages raw watch streams by hand.
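As a minimal sketch (not a drop-in controller), a dynamic informer for the `MyResource` GVR from this guide might look like the following; the kubeconfig path and the 30-second resync interval are illustrative choices, and the factory can just as easily serve informers for every discovered CRD:

```go
package main

import (
	"context"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// One factory serves informers for any number of GVRs; 30s resync.
	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 30*time.Second)

	gvr := schema.GroupVersionResource{Group: "stable.example.com", Version: "v1", Resource: "myresources"}
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			u := obj.(*unstructured.Unstructured)
			log.Printf("ADDED %s/%s", u.GetNamespace(), u.GetName())
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			u := newObj.(*unstructured.Unstructured)
			log.Printf("MODIFIED %s/%s (rv %s)", u.GetNamespace(), u.GetName(), u.GetResourceVersion())
		},
		DeleteFunc: func(obj interface{}) {
			if u, ok := obj.(*unstructured.Unstructured); ok {
				log.Printf("DELETED %s/%s", u.GetNamespace(), u.GetName())
			}
		},
	})

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	factory.Start(ctx.Done())
	// Block until the initial List has populated the local cache.
	factory.WaitForCacheSync(ctx.Done())
	<-ctx.Done()
}
```

Compare this with the raw-watch program above: reconnection, resourceVersion bookkeeping, and caching all disappear into the informer machinery.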
🚀 You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.
Step 2: Call the OpenAI API.