How to Monitor Custom Resources in Go
The dynamic landscape of modern cloud-native applications often extends beyond the capabilities of standard Kubernetes resources. As organizations increasingly tailor their infrastructure to specific needs, Custom Resources (CRs) and Custom Resource Definitions (CRDs) have emerged as powerful paradigms for extending Kubernetes' native API. These custom extensions allow developers to define new types of objects within the Kubernetes ecosystem, bringing domain-specific logic directly into the declarative management plane. However, merely defining and deploying these custom resources is only the first step; ensuring their health, performance, and operational integrity necessitates robust monitoring. This deep dive will explore the intricacies of monitoring custom resources using Go, the lingua franca of Kubernetes, providing a comprehensive guide to building resilient and observable cloud-native systems.
The ability to monitor custom resources effectively is not merely a technical checkbox; it is a critical enabler for operational excellence. Without clear visibility into the state and behavior of these bespoke components, debugging becomes a labyrinthine challenge, proactive problem-solving turns into reactive firefighting, and the overall stability of complex deployments can be severely compromised. Our journey will traverse the foundational concepts of Kubernetes extensibility, delve into the Go-specific tools and patterns provided by client-go, and culminate in advanced strategies for comprehensive observability, ensuring that your custom resources are not just functional, but also transparent and manageable throughout their lifecycle.
Understanding Kubernetes Custom Resources (CRs) and Custom Resource Definitions (CRDs)
Before delving into the specifics of monitoring, it's essential to firmly grasp what Custom Resources and Custom Resource Definitions entail. Kubernetes, at its core, is a declarative system where users define the desired state of their applications and infrastructure using API objects like Pods, Deployments, and Services. The Kubernetes API server acts as the central control plane, receiving these desired states, validating them, and storing them. Controllers then continuously reconcile the current state of the cluster with the desired state.
CRDs provide a mechanism to extend the Kubernetes API by allowing users to define their own resource types. When you create a CRD, you are essentially telling the Kubernetes API server: "Hey, I'm introducing a new kind of object that you should recognize and store." This new object type behaves much like a built-in Kubernetes resource. For instance, if you're building an operator for a database, you might define a Database custom resource. Instead of directly manipulating Pods, Services, and PersistentVolumes, users can simply declare a Database object with desired specifications (e.g., spec.version, spec.replicas, spec.storage).
A Custom Resource Definition (CRD) is an API resource that defines a schema for a new type of object. It specifies the API group, version, scope (namespaced or cluster-scoped), and most importantly, the OpenAPI v3 schema that validates instances of your custom resource. This schema is crucial for ensuring that custom resources created by users conform to expected structures, providing robust validation similar to built-in resources. For example, a CRD for a Database might define fields like spec.engine (e.g., "PostgreSQL", "MySQL"), spec.version, spec.storageSize, and status.phase (e.g., "Provisioning", "Ready", "Failed").
A Custom Resource (CR) is an actual instance of a resource defined by a CRD. Once a CRD is registered with the Kubernetes API server, you can create, update, and delete instances of that custom resource using kubectl or programmatically, just like any other Kubernetes object. These CRs are stored in etcd, the cluster's distributed key-value store, and are accessible via the Kubernetes API. The real power comes when you pair a CRD with a controller (often referred to as an "operator"). This controller is a piece of code that watches for changes to your CRs and takes action to reconcile the cluster's state to match the desired state expressed in the CR. For our Database example, a Database controller would watch for new Database CRs, then provision the necessary Pods, Services, and PersistentVolumes to bring that database into existence. It would also update the status field of the Database CR to reflect the current state of the database deployment.
The ability to extend Kubernetes with CRDs offers unparalleled flexibility, enabling developers to abstract away complex infrastructure operations behind simple, declarative APIs. This aligns perfectly with the Kubernetes philosophy of managing applications as a system of loosely coupled, declarative components. However, this extensibility also introduces new challenges, particularly in ensuring that these custom components are functioning correctly and efficiently.
Why Monitor Custom Resources? The Operational Imperative
Monitoring is the bedrock of reliable system operations. For custom resources, its importance is amplified due to their bespoke nature and potential criticality within an application's architecture. Without effective monitoring, custom resources become black boxes, hiding potential issues until they manifest as larger system failures. Here's a breakdown of the compelling reasons to prioritize monitoring your custom resources:
- Ensuring Health and Availability: Just like built-in resources, custom resources represent a desired state. If a controller fails to reconcile a CR, or if the underlying infrastructure managed by the CR encounters issues, the CR's state might drift from the desired one. Monitoring the `status` fields of your CRs provides immediate insight into whether they are healthy, degraded, or outright failed. For instance, a `Database` CR might have a `status.phase` field that cycles through `Provisioning`, `Ready`, `Scaling`, `Degraded`, or `Failed`. Observing these transitions and the duration spent in each phase is crucial for understanding the operational state of your database instances.
- Detecting Configuration Drift and Reconciliation Errors: Controllers are responsible for continuously reconciling the actual state with the desired state defined in a CR. If a controller encounters an error during this process, or if an external factor causes the actual state to diverge from the desired state without the controller detecting it, you have configuration drift. Monitoring allows you to detect these discrepancies by observing controller logs, specific `status` conditions, or even by comparing the desired state with the observed actual state through metrics. For example, if your `Database` CR specifies 3 replicas, but monitoring shows only 2 database Pods running, you have a reconciliation problem that needs immediate attention.
- Performance and Resource Utilization Insights: Custom resources often manage underlying infrastructure components. Monitoring allows you to track the performance of these components from the perspective of the CR. For example, if your `CacheCluster` CR manages a set of caching instances, you'd want to monitor metrics like cache hit rate, latency, and memory utilization. This gives you a holistic view of the custom resource's performance characteristics and helps identify bottlenecks or inefficiencies. Tracking resource utilization for components managed by CRs can also prevent unexpected costs or resource exhaustion.
- Security and Compliance Auditing: For certain highly regulated environments, changes to critical infrastructure must be auditable. Every interaction with a Kubernetes object, including custom resources, leaves an audit trail via the Kubernetes API server. Monitoring who changed a CR, when, and what the change was can be vital for security and compliance purposes. While not strictly "monitoring the CR itself," monitoring access patterns and changes to CRs is an essential part of the broader security posture.
- Proactive Problem Solving and Alerting: The ultimate goal of monitoring is to enable proactive problem-solving. By setting up alerts based on deviations in CR `status` fields, specific log patterns, or critical metric thresholds, operators can be notified of potential issues before they impact end-users or lead to widespread outages. For example, an alert could trigger if a `Database` CR remains in the `Provisioning` phase for an unusually long time, indicating a potentially stuck operation. Or, if a `BackupJob` CR consistently fails to complete successfully, an alert can notify the team to investigate backup integrity.
- Capacity Planning and Trend Analysis: Over time, collecting metrics related to custom resources can provide valuable data for capacity planning. How many `KafkaTopic` CRs are typically active? What is the average number of `StreamProcessor` CRs required per application? Understanding these trends helps in forecasting resource needs, optimizing cluster sizing, and making informed architectural decisions.
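The health-and-availability check described above boils down to comparing desired replicas against observed status. As a minimal, self-contained sketch: the `DatabaseSpec`/`DatabaseStatus` structs and `healthOf` function below are simplified stand-ins for the generated CR types, not part of any real client library.

```go
package main

import "fmt"

// Simplified stand-ins for the CR's spec and status; in a real monitor
// these would come from the generated Database API types.
type DatabaseSpec struct {
	Replicas int32
}

type DatabaseStatus struct {
	Phase          string
	ActiveReplicas int32
}

// healthOf classifies a custom resource's operational health from its
// desired spec and observed status, the way a monitor would before
// deciding whether to raise an alert.
func healthOf(spec DatabaseSpec, status DatabaseStatus) string {
	switch status.Phase {
	case "Failed":
		return "unhealthy"
	case "Ready":
		if status.ActiveReplicas < spec.Replicas {
			return "degraded" // e.g., 2 of 3 replicas running: a reconciliation gap
		}
		return "healthy"
	default:
		return "transitioning" // Provisioning, Scaling, etc.
	}
}

func main() {
	// 3 replicas desired, only 2 observed: a drift worth alerting on.
	fmt.Println(healthOf(DatabaseSpec{Replicas: 3}, DatabaseStatus{Phase: "Ready", ActiveReplicas: 2}))
}
```

A rule like this is typically evaluated inside the controller's update handler and exported as a metric or alert condition.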
In essence, monitoring custom resources transforms them from opaque extensions into fully observable, manageable components of your Kubernetes infrastructure. This visibility is indispensable for maintaining the stability, efficiency, and reliability of complex, cloud-native deployments.
Core Concepts in Go for Kubernetes Interaction
Go has become the de facto language for building Kubernetes components, including controllers and operators. This is largely due to its performance, concurrency primitives, and the excellent support provided by the client-go library. To effectively monitor custom resources in Go, you need to understand several fundamental client-go concepts.
client-go: The Official Go Client Library
client-go is the official Go client library for interacting with the Kubernetes API server. It provides a robust and idiomatic way to perform operations like creating, reading, updating, and deleting Kubernetes objects, including custom resources. At its core, client-go handles the complexities of API authentication, HTTP requests, serialization/deserialization of Kubernetes objects (YAML/JSON), and error handling.
When you work with client-go, you'll typically use a Clientset for built-in resources (e.g., corev1.Pods(), appsv1.Deployments()) and a custom Clientset generated for your CRDs. client-go is not just for direct API calls; it's also the foundation for the more advanced patterns like Informers and Controllers, which are crucial for building efficient monitoring solutions.
SharedInformerFactory: Efficiently Watching Multiple Resource Types
Directly polling the Kubernetes API server for changes is inefficient and places undue load on the server. client-go addresses this with Informers. However, if you need to watch multiple types of resources (e.g., your Database CRs and the Pods they manage), creating separate informers for each can lead to redundant connections and cached data.
The SharedInformerFactory is a central component that creates and manages a set of informers across multiple resource types within a single shared cache. This factory ensures that there's only one underlying connection to the API server for all resources it watches, making it highly efficient. All informers created from the same SharedInformerFactory share a single underlying cache. This is particularly useful for controllers that need to react to changes in multiple related resources.
```go
// Example (conceptual) of SharedInformerFactory setup
config, err := rest.InClusterConfig() // or clientcmd.BuildConfigFromFlags
if err != nil { /* handle error */ }

// Create a new clientset for your custom resource
myClientset, err := clientset.NewForConfig(config)
if err != nil { /* handle error */ }

// Create a SharedInformerFactory for your custom resource,
// resyncing every 10 minutes as a fallback to the watch stream
factory := informers.NewSharedInformerFactory(myClientset, time.Minute*10)

// Get an informer for your specific custom resource type
myCRInformer := factory.MyCR().V1alpha1().MyCRDs().Informer()
```
The SharedInformerFactory is typically started with `factory.Start(stopCh)`, which begins the process of listing all existing resources and then continuously watching for changes.
Informers: Caching and Event-Driven Notifications
An Informer is a pattern implemented by client-go that provides a robust and efficient way to watch Kubernetes resources. It consists of three main components:
- Reflector: This component watches the Kubernetes API server for changes (the List-Watch mechanism). It performs an initial "List" operation to get all existing objects of a certain type, and then continuously "Watches" for any subsequent add, update, or delete events.
- DeltaFIFO: This is a queue that stores events (deltas) from the Reflector. It ensures that events are processed in order and handles edge cases like re-listing.
- Indexer (and Store): The Informer maintains a local, in-memory cache of the resources it's watching. This cache is populated by the Reflector and updated via the DeltaFIFO. This local cache (often an `Indexer`) allows for fast, read-only access to objects without hitting the API server, significantly reducing load and improving performance.
When an event (add, update, delete) occurs for a resource that an Informer is watching, it adds the event to its internal queue. Controllers then retrieve these events and process them. This event-driven model is far superior to polling for changes, as it provides near real-time updates and significantly reduces network traffic to the API server.
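The List-Watch-and-cache data flow can be illustrated with a toy, stdlib-only model. The `delta` and `store` types below are illustrative stand-ins for client-go's DeltaFIFO and Indexer, showing only how ordered events keep a local cache in sync so reads never touch the API server:

```go
package main

import "fmt"

// deltaType mirrors the kinds of events the watch stream delivers.
type deltaType string

const (
	added   deltaType = "Added"
	updated deltaType = "Updated"
	deleted deltaType = "Deleted"
)

// delta is one event from the (simulated) watch stream.
type delta struct {
	typ deltaType
	key string // "namespace/name", the usual cache key
	obj string // stand-in for the full object
}

// store is the toy local cache; the real Indexer is thread-safe and
// supports secondary indexes.
type store map[string]string

// apply folds one delta into the cache, preserving event order.
func (s store) apply(d delta) {
	switch d.typ {
	case added, updated:
		s[d.key] = d.obj
	case deleted:
		delete(s, d.key)
	}
}

func main() {
	cache := store{}
	for _, d := range []delta{
		{added, "prod/db-1", "phase=Provisioning"},
		{updated, "prod/db-1", "phase=Ready"},
		{added, "prod/db-2", "phase=Provisioning"},
		{deleted, "prod/db-2", ""},
	} {
		cache.apply(d)
	}
	// The cache now reflects only the last-seen state of each object.
	fmt.Println(len(cache), cache["prod/db-1"])
}
```

Because deltas are applied strictly in order, readers of the cache always see a consistent snapshot of the last-observed cluster state.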
Listers: Querying Cached Objects
A Lister is an interface provided by client-go that allows you to query the Informer's local cache. Instead of making an API call every time you need to retrieve an object, you can use a Lister to quickly fetch objects from the in-memory cache. This is incredibly fast and efficient.
For example, after your Informer has synced and populated its cache, you can use a Lister to get a specific custom resource by name or namespace, or to list all instances of your custom resource without any API server round trips. Listers are read-only, ensuring that the cache's integrity is maintained.
```go
// Example (conceptual) of using a Lister
myCRLister := factory.MyCR().V1alpha1().MyCRDs().Lister()

// ... once the informer has synced ...
cr, err := myCRLister.MyCRDs("mynamespace").Get("my-custom-resource-name")
if err != nil { /* handle error */ }
// Use the 'cr' object
```
Workqueues: Processing Events Reliably
Controllers often need to process events (like a custom resource being added or updated) in a reliable and fault-tolerant manner. A Workqueue (specifically client-go/util/workqueue) is a thread-safe, rate-limited queue designed for this purpose.
When an Informer detects an event, instead of directly processing it, it typically adds a key (e.g., namespace/name) representing the affected object to a Workqueue. A controller's worker goroutines then pick items from this Workqueue and process them. The Workqueue handles:
- Deduplication: If multiple events for the same object arrive quickly, only one key is added to the queue, preventing redundant processing.
- Retries with Backoff: If a processing attempt fails, the `Workqueue` can automatically re-add the item after a delay, allowing transient errors to resolve.
- Rate Limiting: It can ensure that items are not re-processed too frequently, preventing thrashing.
This robust queueing mechanism is fundamental for building resilient controllers that can handle a high volume of events and recover from transient processing failures.
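The deduplication and retry bookkeeping can be modeled with a small stdlib-only queue. This is a toy sketch of the semantics, not client-go's actual `workqueue.RateLimitingInterface`, which additionally provides thread safety and exponential backoff:

```go
package main

import "fmt"

// workQueue is a toy model of client-go's workqueue: keys are
// deduplicated while queued, and failed items can be re-added for retry.
type workQueue struct {
	order   []string        // FIFO order of pending keys
	queued  map[string]bool // set of keys currently waiting
	retries map[string]int  // retry count per key
}

func newWorkQueue() *workQueue {
	return &workQueue{queued: map[string]bool{}, retries: map[string]int{}}
}

// Add enqueues a key unless it is already waiting (deduplication).
func (q *workQueue) Add(key string) {
	if q.queued[key] {
		return
	}
	q.queued[key] = true
	q.order = append(q.order, key)
}

// Get pops the next key; the second return value is false when empty.
func (q *workQueue) Get() (string, bool) {
	if len(q.order) == 0 {
		return "", false
	}
	key := q.order[0]
	q.order = q.order[1:]
	delete(q.queued, key)
	return key, true
}

// Requeue re-adds a key after a failed processing attempt and counts the retry.
func (q *workQueue) Requeue(key string) {
	q.retries[key]++
	q.Add(key)
}

func main() {
	q := newWorkQueue()
	q.Add("prod/db-1")
	q.Add("prod/db-1") // duplicate while queued: collapsed into one item
	q.Add("prod/db-2")

	key, _ := q.Get()
	q.Requeue(key) // processing failed; try again later
	fmt.Println(len(q.order), q.retries["prod/db-1"])
}
```

In a real controller, worker goroutines loop over `Get`, call the reconcile function, and use the rate-limited re-add (or `Forget`) depending on the result.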
Controllers: The Logic That Acts Upon Events
A Controller is the core logic component in the Kubernetes ecosystem. In the context of custom resources, a controller is a Go application that watches for changes to one or more types of resources (often a custom resource and related built-in resources) and then takes actions to reconcile the actual state of the cluster with the desired state expressed in the resources.
A typical controller pattern involves:
- Setting up Informers: To watch the relevant custom resources and potentially other standard resources (e.g., Pods, Deployments) that the custom resource manages.
- Registering Event Handlers: The Informers call these handlers when an `Add`, `Update`, or `Delete` event occurs. These handlers usually just add the object's key to a `Workqueue`.
- Running Worker Goroutines: These goroutines continuously pull items (object keys) from the `Workqueue`.
- Reconciliation Loop: For each item, a worker:
  - Fetches the object from the Informer's cache using a `Lister`.
  - Determines the desired state based on the object's `spec`.
  - Compares the desired state with the current actual state of the cluster.
  - Takes necessary actions (e.g., create/update/delete Pods, Services) to bring the actual state in line with the desired state.
  - Updates the `status` field of the custom resource to reflect the current actual state.
  - Handles errors and potentially re-queues the item for later retry.
When monitoring custom resources, our Go application will largely follow this controller pattern, but instead of reconciling state, it will be focused on observing state, collecting metrics, and generating alerts.
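The core of the reconciliation loop is a pure comparison of desired versus observed state. The sketch below isolates that step with illustrative names (`state`, `reconcile`); it returns the actions a controller would take rather than performing real API calls:

```go
package main

import "fmt"

// state captures what the reconcile step actually compares: the spec's
// desired replica count against what is observed in the cluster.
type state struct {
	desiredReplicas int
	actualReplicas  int
}

// reconcile emits the actions needed to converge actual state to desired
// state. A real controller would execute these via API calls and then
// update the CR's status to record what it observed.
func reconcile(s state) []string {
	var actions []string
	switch {
	case s.actualReplicas < s.desiredReplicas:
		for i := s.actualReplicas; i < s.desiredReplicas; i++ {
			actions = append(actions, "create replica")
		}
	case s.actualReplicas > s.desiredReplicas:
		for i := s.desiredReplicas; i < s.actualReplicas; i++ {
			actions = append(actions, "delete replica")
		}
	}
	return actions
}

func main() {
	// Spec asks for 3 replicas but only 1 is running: two creates needed.
	fmt.Println(reconcile(state{desiredReplicas: 3, actualReplicas: 1}))
}
```

A monitoring application runs the same comparison but, instead of acting, records the discrepancy as a metric or alert.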
Setting Up Your Go Project for Custom Resource Monitoring
Building a Go application to monitor custom resources starts with a proper project setup. This includes defining your module, importing necessary client-go libraries, and potentially generating client code for your specific CRD.
- Initialize Your Go Module:

```bash
mkdir my-cr-monitor
cd my-cr-monitor
go mod init github.com/yourorg/my-cr-monitor
```

- Add the client-go Dependency:

```bash
go get k8s.io/client-go@<version>
```

Replace `<version>` with a release matching your cluster (e.g., `v0.28.3` for Kubernetes 1.28). This ensures compatibility with your cluster's API.

- Generate Client Code for Your CRD (Recommended): While you can use dynamic clients, it's highly recommended to generate strongly-typed clients for your CRD. This provides type safety and better IDE support. The process typically involves:
  - Installing the code-generation tooling: `k8s.io/code-generator` for clientsets, informers, and listers, and/or `controller-gen` (from `kubernetes-sigs/controller-tools`) for deep-copy methods and CRD manifests.
  - Running a command like `go generate ./...` if you've set up `go:generate` directives, or directly invoking the `code-generator` scripts. This will generate:
    - `clientset`: A typed client for your CRD.
    - `informers`: Typed informers for your CRD.
    - `listers`: Typed listers for your CRD.

  These generated clients and informers will be used extensively in your monitoring application.

- Define Your Custom Resource Go Structs: You'll need Go structs that represent your custom resource. These structs will typically live in a separate package (e.g., `pkg/apis/database/v1alpha1`) and include the `TypeMeta`, `ObjectMeta`, `Spec`, and `Status` fields common to Kubernetes objects. These structs are often generated using tools like `controller-gen` from `kubernetes-sigs/controller-tools` based on your CRD YAML definition. A simplified example might look like this:

```go
// pkg/apis/database/v1alpha1/types.go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Database is the Schema for the databases API
type Database struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   DatabaseSpec   `json:"spec,omitempty"`
	Status DatabaseStatus `json:"status,omitempty"`
}

// DatabaseSpec defines the desired state of Database
type DatabaseSpec struct {
	Engine      string `json:"engine"`
	Version     string `json:"version"`
	StorageSize string `json:"storageSize"`
	Replicas    int32  `json:"replicas"`
}

// DatabaseStatus defines the observed state of Database
type DatabaseStatus struct {
	Phase          string             `json:"phase,omitempty"` // e.g., "Provisioning", "Ready", "Failed"
	Message        string             `json:"message,omitempty"`
	ActiveReplicas int32              `json:"activeReplicas,omitempty"`
	Conditions     []metav1.Condition `json:"conditions,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// DatabaseList contains a list of Database
type DatabaseList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Database `json:"items"`
}
```

The `+genclient` and `+k8s:deepcopy-gen:interfaces` comments are annotations used by code-generation tools to create `client-go` interfaces, deep-copy methods, and other boilerplate code specific to your CRD.
This structured approach ensures that your Go application has all the necessary tools and type definitions to interact with your custom resources effectively and safely.
Building a Basic Informer and Controller for Monitoring
Now, let's bring these concepts together to build a monitoring application. We'll set up an informer to watch our Database custom resources and define event handlers to log any changes. This forms the foundation upon which more sophisticated monitoring logic can be built.
```go
// main.go
package main

import (
	"context"
	"flag"
	"log"
	"path/filepath"
	"time"

	// Import the generated types and clientset for your custom resource
	databasev1alpha1 "github.com/yourorg/my-cr-monitor/pkg/apis/database/v1alpha1"
	databaseclientset "github.com/yourorg/my-cr-monitor/pkg/client/clientset/versioned"
	databaseinformers "github.com/yourorg/my-cr-monitor/pkg/client/informers/externalversions"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
	"k8s.io/klog/v2" // For structured logging
)

func main() {
	klog.InitFlags(nil)

	// Register the kubeconfig flag before parsing
	var kubeconfig *string
	if home := homedir.HomeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	// Try to use in-cluster config first, then fall back to kubeconfig
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Printf("Failed to get in-cluster config: %v. Falling back to kubeconfig...", err)
		config, err = clientcmd.BuildConfigFromFlags("", *kubeconfig)
		if err != nil {
			log.Fatalf("Failed to build kubeconfig: %v", err)
		}
	}

	// Create a clientset for our custom resource
	dbClient, err := databaseclientset.NewForConfig(config)
	if err != nil {
		log.Fatalf("Error building database clientset: %v", err)
	}

	// Create a shared informer factory.
	// A resync period of 0 would disable periodic resyncs and rely solely on watches;
	// for production, a non-zero resync period (here 30s) is often recommended as a fallback.
	dbInformerFactory := databaseinformers.NewSharedInformerFactory(dbClient, time.Second*30)

	// Get an informer for the Database v1alpha1 resource
	dbInformer := dbInformerFactory.Database().V1alpha1().Databases().Informer()

	// Add event handlers; Controller implements the cache.ResourceEventHandler interface
	dbInformer.AddEventHandler(
		&Controller{
			clientset: dbClient,
		},
	)

	// Create a context that can be cancelled to stop the informers
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Start the informers and wait for them to sync their caches
	dbInformerFactory.Start(ctx.Done())
	log.Println("Waiting for informer caches to sync...")
	for typ, synced := range dbInformerFactory.WaitForCacheSync(ctx.Done()) {
		if !synced {
			log.Fatalf("Error waiting for informer cache to sync for %v", typ)
		}
	}
	log.Println("Informer caches synced successfully. Monitoring Database resources...")

	// Keep the application running until the context is cancelled
	<-ctx.Done()
	log.Println("Shutting down Custom Resource monitor.")
}

// Controller holds the clientset and implements the cache.ResourceEventHandler interface
type Controller struct {
	clientset databaseclientset.Interface
	// Add a workqueue here if you plan to implement reconciliation logic or complex event processing
	// workqueue workqueue.RateLimitingInterface
}

// OnAdd is called when a new Database custom resource is added.
// The isInInitialList parameter (client-go >= v0.27) is true for objects
// delivered during the initial List.
func (c *Controller) OnAdd(obj interface{}, isInInitialList bool) {
	db, ok := obj.(*databasev1alpha1.Database)
	if !ok {
		klog.Error("Failed to assert object to Database type on Add")
		return
	}
	klog.Infof("Database added: %s/%s, Phase: %s", db.Namespace, db.Name, db.Status.Phase)
	// Here you would add the object key to a workqueue for processing
	// c.workqueue.Add(key)
}

// OnUpdate is called when an existing Database custom resource is updated
func (c *Controller) OnUpdate(oldObj, newObj interface{}) {
	oldDb, ok := oldObj.(*databasev1alpha1.Database)
	if !ok {
		klog.Error("Failed to assert old object to Database type on Update")
		return
	}
	newDb, ok := newObj.(*databasev1alpha1.Database)
	if !ok {
		klog.Error("Failed to assert new object to Database type on Update")
		return
	}
	// Compare status fields for changes, which is a common monitoring target
	if oldDb.Status.Phase != newDb.Status.Phase || oldDb.Status.Message != newDb.Status.Message {
		klog.Infof("Database updated: %s/%s, Phase changed from %s to %s, Message: %s",
			newDb.Namespace, newDb.Name, oldDb.Status.Phase, newDb.Status.Phase, newDb.Status.Message)
		// Perform specific monitoring actions based on the status change,
		// e.g., send an alert if newDb.Status.Phase is "Failed"
	}
	// Also log spec changes, though status changes are often more critical for monitoring operational state
	if oldDb.Spec.Replicas != newDb.Spec.Replicas {
		klog.Infof("Database spec updated: %s/%s, Replicas changed from %d to %d",
			newDb.Namespace, newDb.Name, oldDb.Spec.Replicas, newDb.Spec.Replicas)
	}
	// c.workqueue.Add(key)
}

// OnDelete is called when an existing Database custom resource is deleted
func (c *Controller) OnDelete(obj interface{}) {
	db, ok := obj.(*databasev1alpha1.Database)
	if !ok {
		// The object may arrive wrapped in a DeletedFinalStateUnknown tombstone
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			klog.Error("Failed to assert object to Database type or DeletedFinalStateUnknown on Delete")
			return
		}
		db, ok = tombstone.Obj.(*databasev1alpha1.Database)
		if !ok {
			klog.Error("Failed to assert DeletedFinalStateUnknown object to Database type on Delete")
			return
		}
	}
	klog.Infof("Database deleted: %s/%s, Phase: %s", db.Namespace, db.Name, db.Status.Phase)
	// Remove associated metrics or state here
	// c.workqueue.Add(key)
}
```
This basic framework sets up the informer and logs events. For real-world monitoring, you would replace or augment the klog.Infof calls with actual metric collection, alerting logic, or more sophisticated event processing, often involving a workqueue to handle events asynchronously and robustly. The OnUpdate function is particularly important for monitoring, as it allows you to react to status changes of your custom resources, which are key indicators of health and operational progress.
Implementing Monitoring Logic: From Status to Metrics
With the basic informer and event handler setup, we can now implement concrete monitoring logic. The goal is to extract meaningful insights from our custom resources and make them observable.
Status Fields: The Primary Indicator
The status field of a custom resource is perhaps the most direct and crucial source of monitoring information. Controllers are responsible for updating this field to reflect the current, observed state of the resource and any underlying components it manages.
- Monitoring `status.phase`: As shown in the `OnUpdate` example, tracking changes in `status.phase` (e.g., `Provisioning`, `Ready`, `Failed`) provides immediate high-level insight into the resource's lifecycle.
- Monitoring `status.conditions`: Many Kubernetes resources use `conditions` (a list of `metav1.Condition` objects) to provide more granular state information. Each condition has a `type`, `status` (`True`, `False`, `Unknown`), `reason`, and `message`. You can monitor specific conditions, such as `Ready`, `Available`, or `Degraded`, and react to their changes.
- Monitoring other custom `status` fields: Your CRD might have other specific fields in its `status` that are relevant, such as `status.activeReplicas`, `status.observedGeneration`, or `status.lastBackupTime`.
Your OnUpdate handler would expand to parse these status changes, potentially incrementing counters, recording gauges, or triggering alerts based on specific phase transitions or condition changes. For example, if status.phase becomes Failed, or a Ready condition transitions to False, you might want to send an immediate alert.
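Detecting which conditions actually transitioned is a pure comparison between two status snapshots. The sketch below uses a simplified local `Condition` struct standing in for `metav1.Condition` (only the fields a monitor typically compares), and an illustrative helper name:

```go
package main

import "fmt"

// Condition is a simplified stand-in for metav1.Condition.
type Condition struct {
	Type   string
	Status string // "True", "False", "Unknown"
	Reason string
}

// changedConditions reports which condition types transitioned between two
// observed status snapshots — the comparison an OnUpdate handler performs
// before alerting (e.g., Ready flipping to "False").
func changedConditions(old, cur []Condition) []string {
	prev := make(map[string]Condition, len(old))
	for _, c := range old {
		prev[c.Type] = c
	}
	var changed []string
	for _, c := range cur {
		if p, ok := prev[c.Type]; !ok || p.Status != c.Status || p.Reason != c.Reason {
			changed = append(changed, c.Type)
		}
	}
	return changed
}

func main() {
	old := []Condition{{Type: "Ready", Status: "True", Reason: "AsExpected"}}
	cur := []Condition{
		{Type: "Ready", Status: "False", Reason: "PodCrashLoop"},
		{Type: "Degraded", Status: "True", Reason: "ReplicaShortfall"},
	}
	fmt.Println(changedConditions(old, cur)) // both Ready and Degraded transitioned
}
```

Each returned condition type is a candidate for a metric update or, for critical types like `Ready`, an immediate alert.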
Metrics Collection with Prometheus
For numerical, time-series data, integrating with a metrics system like Prometheus is essential. Prometheus is a pull-based monitoring system that scrapes metrics from HTTP endpoints. Go has excellent client libraries for exposing Prometheus metrics.
- Import the Prometheus client library:

```bash
go get github.com/prometheus/client_golang/prometheus
go get github.com/prometheus/client_golang/prometheus/promhttp
```

- Define Metrics: You'll define various metric types (counters, gauges, histograms, summaries) to track aspects of your custom resources.

```go
// Inside your main function or a separate metrics package
var (
	crTotal = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "custom_resource_total",
			Help: "Total number of custom resources by namespace and phase.",
		},
		[]string{"namespace", "name", "phase", "engine"}, // Labels for custom resource attributes
	)
	crDurationInPhase = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "custom_resource_duration_in_phase_seconds",
			Help:    "Duration custom resources spend in each phase.",
			Buckets: prometheus.DefBuckets,
		},
		[]string{"namespace", "name", "phase"},
	)
	crReplicas = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "custom_resource_active_replicas",
			Help: "Number of active replicas for a custom resource.",
		},
		[]string{"namespace", "name"},
	)
)

func init() {
	prometheus.MustRegister(crTotal, crDurationInPhase, crReplicas)
}
```
- Expose a Metrics Endpoint: Your Go application needs to expose an HTTP endpoint (typically `/metrics`) where Prometheus can scrape the metrics.

```go
// Inside main, start an HTTP server for metrics
go func() {
	metricsPort := ":8080" // or configure via flag
	http.Handle("/metrics", promhttp.Handler())
	log.Printf("Starting metrics server on %s", metricsPort)
	log.Fatal(http.ListenAndServe(metricsPort, nil))
}()
```

This HTTP server runs in a separate goroutine. You'll then configure Prometheus to scrape this endpoint from your application's Pod.
- Update Metrics in Event Handlers: Modify your `OnAdd`, `OnUpdate`, and `OnDelete` handlers to update these Prometheus metrics.

```go
// Inside OnAdd
func (c *Controller) OnAdd(obj interface{}, isInInitialList bool) {
	db := obj.(*databasev1alpha1.Database)
	crTotal.WithLabelValues(db.Namespace, db.Name, db.Status.Phase, db.Spec.Engine).Inc()
	// ... potentially record an initial timestamp for phase duration tracking
}

// Inside OnUpdate
func (c *Controller) OnUpdate(oldObj, newObj interface{}) {
	oldDb := oldObj.(*databasev1alpha1.Database)
	newDb := newObj.(*databasev1alpha1.Database)

	if oldDb.Status.Phase != newDb.Status.Phase {
		// Decrement the old phase, increment the new phase
		crTotal.WithLabelValues(oldDb.Namespace, oldDb.Name, oldDb.Status.Phase, oldDb.Spec.Engine).Dec()
		crTotal.WithLabelValues(newDb.Namespace, newDb.Name, newDb.Status.Phase, newDb.Spec.Engine).Inc()
		// Record phase duration (requires storing the start time of the phase; more complex)
	}
	if oldDb.Status.ActiveReplicas != newDb.Status.ActiveReplicas {
		crReplicas.WithLabelValues(newDb.Namespace, newDb.Name).Set(float64(newDb.Status.ActiveReplicas))
	}
	// ... other metric updates
}

// Inside OnDelete
func (c *Controller) OnDelete(obj interface{}) {
	db := obj.(*databasev1alpha1.Database)
	crTotal.WithLabelValues(db.Namespace, db.Name, db.Status.Phase, db.Spec.Engine).Dec()
	crReplicas.DeleteLabelValues(db.Namespace, db.Name) // Clean up gauge
}
```
Logging: Detailed Event Trails
Logs are indispensable for debugging and understanding the sequence of events leading to a particular state. Your controller's event handlers should produce informative logs using a structured logging library like klog/v2.
- Lifecycle Events: Log `OnAdd`, `OnUpdate`, and `OnDelete` events with key identifiers (namespace, name) and relevant status changes.
- Error Logging: Crucially, log any errors encountered during processing. This includes issues with API calls, data validation, or external interactions. Use `klog.Errorf` or `klog.Fatalf` as appropriate.
- Contextual Information: Include contextual details like the current phase, resource version, or controller action being taken.
```go
// Example of detailed logging in OnUpdate
func (c *Controller) OnUpdate(oldObj, newObj interface{}) {
    oldDb := oldObj.(*databasev1alpha1.Database)
    newDb := newObj.(*databasev1alpha1.Database)

    if oldDb.ResourceVersion == newDb.ResourceVersion {
        // Only metadata changed, or no effective change, often due to periodic resync
        // klog.V(5).Infof("No effective change for Database %s/%s, resource version %s", newDb.Namespace, newDb.Name, newDb.ResourceVersion)
        return
    }

    klog.V(4).Infof("Processing update for Database %s/%s (ResourceVersion: %s)", newDb.Namespace, newDb.Name, newDb.ResourceVersion)

    // Log status changes with previous and new values
    if oldDb.Status.Phase != newDb.Status.Phase {
        klog.Infof("Database %s/%s phase changed from '%s' to '%s'. Message: '%s'",
            newDb.Namespace, newDb.Name, oldDb.Status.Phase, newDb.Status.Phase, newDb.Status.Message)
    }

    // Log condition changes
    oldConditions := make(map[string]metav1.Condition)
    for _, cond := range oldDb.Status.Conditions {
        oldConditions[cond.Type] = cond
    }
    for _, newCond := range newDb.Status.Conditions {
        if oldCond, found := oldConditions[newCond.Type]; !found || oldCond.Status != newCond.Status || oldCond.Reason != newCond.Reason {
            klog.Infof("Database %s/%s condition '%s' changed: Status '%s'->'%s', Reason '%s'->'%s', Message: '%s'",
                newDb.Namespace, newDb.Name, newCond.Type, oldCond.Status, newCond.Status, oldCond.Reason, newCond.Reason, newCond.Message)
        }
    }

    // ... continue with metric updates and other logic
}
```
Using appropriate logging levels (klog.V(level)) allows you to control the verbosity, which is crucial in production environments.
Events API: Publishing Kubernetes Events
Kubernetes has a built-in Events API for publishing information about object lifecycle events, errors, or other noteworthy occurrences. These events are visible via kubectl describe and are often consumed by monitoring tools. Your monitoring application can publish events related to your custom resources.
- Create an Event Broadcaster:

```go
// Inside main or controller setup
eventBroadcaster := record.NewBroadcaster()
eventBroadcaster.StartLogging(klog.Infof)
eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
    Interface: kubernetes.NewForConfigOrDie(config).CoreV1().Events(""),
})
eventRecorder := eventBroadcaster.NewRecorder(
    scheme.Scheme,
    corev1.EventSource{Component: "my-cr-monitor"})
```

You'll need `k8s.io/client-go/kubernetes`, `k8s.io/client-go/tools/record`, `k8s.io/client-go/kubernetes/scheme`, and the typed core/v1 client `k8s.io/client-go/kubernetes/typed/core/v1` (imported here as `typedcorev1`, which provides `EventSinkImpl`).

- Publish Events: Use the `eventRecorder` in your handlers.

```go
// Inside OnUpdate, if a Database transitions to a Failed phase
if oldDb.Status.Phase != "Failed" && newDb.Status.Phase == "Failed" {
    eventRecorder.Event(newDb, corev1.EventTypeWarning, "DatabaseFailed", "Database transitioned to failed phase.")
}
// Or for a successful provisioning
if oldDb.Status.Phase == "Provisioning" && newDb.Status.Phase == "Ready" {
    eventRecorder.Event(newDb, corev1.EventTypeNormal, "DatabaseReady", "Database provisioned successfully.")
}
```

These events provide a human-readable audit trail that can be viewed with `kubectl describe database <name>` and can be consumed by other tools.
By combining status monitoring, Prometheus metrics, structured logging, and Kubernetes events, you create a robust, multi-faceted monitoring solution for your custom resources. This comprehensive approach ensures that all aspects of your custom resource's behavior are observable, making it easier to diagnose problems and maintain system health.
Advanced Monitoring Techniques and Observability Integration
Beyond the basic setup, several advanced techniques and integrations can elevate your custom resource monitoring to a full-fledged observability solution.
Health Probes for CR-Managed Applications
If your custom resource manages actual application workloads (e.g., a Deployment of Pods), you should leverage Kubernetes' built-in health probes:

- Liveness Probes: Determine whether the application within a Pod is running correctly. If the probe fails, Kubernetes restarts the container.
- Readiness Probes: Determine whether a Pod is ready to serve traffic. If the probe fails, the Pod is removed from the Service's endpoints.
- Startup Probes: For applications that take a long time to start up, these prevent the liveness probe from prematurely killing the application.
While these monitor the managed resources, your CRD's controller can aggregate their status into its own status field. For example, a Database CR's status.conditions could include Type: BackendReady based on the readiness of the database pods it manages.
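As a concrete sketch, the probes described above might appear in the Pod template your controller generates for the database workload. The endpoint paths, port numbers, and image name below are illustrative assumptions, not part of any real Database controller:

```yaml
# Illustrative container spec fragment for a CR-managed workload.
# Paths, ports, and image are assumptions for demonstration only.
containers:
  - name: database
    image: example/database:latest
    ports:
      - containerPort: 5432
        name: db
      - containerPort: 8081
        name: health
    startupProbe:
      httpGet:
        path: /healthz
        port: health
      failureThreshold: 30 # allow up to 30 * 10s before liveness takes over
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: health
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /readyz
        port: health
      periodSeconds: 5
```

Your controller can then watch the readiness of these Pods and roll the result up into the CR's `status.conditions` as described above.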
External Checks and Synthetic Monitoring
Sometimes, the best way to monitor the functionality of a custom resource is from an external perspective, simulating user interaction.

- Synthetic Transactions: For a Website CR that deploys a web application, you could have an external monitor (e.g., a separate Kubernetes Job or an external SaaS monitoring tool) periodically hit the website's endpoint and assert its availability and correct functionality.
- API Gateway Integration: If your custom resources expose APIs or manage services accessible via an API gateway, you can monitor the gateway's metrics for latency, error rates, and traffic patterns related to those services. This provides an end-to-end view. For instance, if your custom resources are provisioning AI models that are then exposed through an API gateway, monitoring the gateway traffic for those specific model endpoints offers crucial insights into their real-world usage and performance. Products like APIPark, an open-source AI gateway and API management platform, are specifically designed to manage, secure, and monitor such AI and REST services. Integrating APIPark can provide a centralized dashboard for overall API health, including APIs powered by your custom resources, offering features like detailed call logging and powerful data analysis to complement your CRD-level monitoring.
Alerting: Turning Data into Actionable Insights
Metrics and logs are only useful if they can trigger alerts when something goes wrong.

- Prometheus Alertmanager: Define alerting rules in Prometheus (e.g., in a `rules.yml` file) that evaluate your custom resource metrics (e.g., `custom_resource_total{phase="Failed"} > 0`). When a rule fires, Alertmanager can route notifications to various channels like Slack, PagerDuty, email, etc.
- Log-based Alerts: If critical errors or specific patterns appear in your logs (e.g., "Out of Memory" for a managed component), log aggregation systems like Elasticsearch with Kibana (the ELK stack) or Loki with Grafana can trigger alerts.
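As a sketch, a Prometheus rules file for the failed-phase condition mentioned above might look like the following. The metric and label names assume the phase gauge defined earlier is registered as `custom_resource_total`; adjust them to whatever your controller actually exports:

```yaml
# rules.yml -- illustrative alerting rule; metric/label names are assumptions.
groups:
  - name: custom-resource-alerts
    rules:
      - alert: CustomResourceFailed
        expr: custom_resource_total{phase="Failed"} > 0
        for: 5m # avoid paging on brief transitions through Failed
        labels:
          severity: critical
        annotations:
          summary: "Custom resource stuck in Failed phase"
          description: "{{ $labels.name }} in {{ $labels.namespace }} has been Failed for 5 minutes."
```

The `for` clause debounces the alert so that a CR briefly passing through a failed state during reconciliation does not page anyone.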
Integration with Observability Stacks
A comprehensive observability stack combines metrics, logs, and traces to provide a holistic view of your system.

- Metrics (Prometheus & Grafana): Visualize your custom resource metrics (phases, counts, latencies) in Grafana dashboards. Create panels that show trends, compare different CR instances, and highlight deviations.
- Logs (Loki/Elasticsearch & Grafana/Kibana): Aggregate your controller logs and correlate them with specific custom resource instances. Being able to jump from a Grafana metric anomaly to the relevant logs for a CR is invaluable for debugging.
- Tracing (Jaeger/Zipkin): If your custom resource controller involves complex distributed operations or interacts with many external services, implementing distributed tracing can help you understand the end-to-end flow and pinpoint latency bottlenecks across different services involved in a CR's lifecycle.
Connecting to API Concepts: api, OpenAPI, gateway
The keywords api, OpenAPI, and gateway are intimately woven into the fabric of Kubernetes custom resource monitoring, even if not immediately obvious for "gateway."
The Kubernetes API as the Foundation
At its core, monitoring custom resources in Go is fundamentally about interacting with the Kubernetes API. Every operation, from listing CRs to receiving event notifications from an informer, goes through the Kubernetes api server. Your Go monitoring application leverages client-go to make these api calls, authenticate, and manage connections. Understanding the Kubernetes api's architecture, its RESTful nature, and how it handles different resource types is paramount. The declarative model of Kubernetes revolves entirely around this api, and custom resources are simply extensions of it.
OpenAPI for Definition and Validation
OpenAPI (formerly Swagger) plays a crucial role in how custom resources are defined and consumed.

- CRD Schema Validation: When you define a CRD, you specify its schema using OpenAPI v3. This schema rigorously validates the spec and status fields of your custom resources, ensuring that users provide valid input and that your controller can expect a consistent data structure. This validation is performed by the Kubernetes API server, acting as a powerful guardrail.
- Client Generation: Tools that generate client-go libraries for your custom resources (like controller-gen or kubernetes-codegen) often derive information from the OpenAPI schema embedded in your CRD. This allows for strongly-typed client code, which enhances developer experience and reduces errors.
- Documentation and Discovery: The Kubernetes API server exposes its entire API (including CRDs) via an OpenAPI specification. This allows tools, dashboards, and even other API gateway solutions to discover and understand the structure of your custom resources programmatically.
The adherence to OpenAPI standards ensures that custom resources are not just arbitrary data structures but are well-defined, validated, and discoverable components within the Kubernetes ecosystem. When you monitor a custom resource, you are implicitly relying on the integrity and structure defined by its OpenAPI schema.
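To make this concrete, a fragment of a Database CRD with its `openAPIV3Schema` might look like the following. The field names mirror the examples used throughout this article; the `enum` values and numeric bounds are illustrative assumptions:

```yaml
# Fragment of a CustomResourceDefinition; only the relevant parts shown.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.database.example.com
spec:
  group: database.example.com
  names:
    kind: Database
    plural: databases
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      subresources:
        status: {} # enables the /status subresource used earlier
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                  enum: ["postgres", "mysql"] # illustrative
                replicas:
                  type: integer
                  minimum: 1
            status:
              type: object
              properties:
                phase:
                  type: string
                activeReplicas:
                  type: integer
                message:
                  type: string
```

The API server rejects any Database object whose `spec` violates this schema before your controller ever sees it.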
Gateway in the Monitoring Context
The term gateway can be interpreted in several ways when discussing custom resource monitoring:
- Kubernetes API Gateway (Implicit): The Kubernetes API server itself acts as the primary gateway to all resources, including custom ones. Your monitoring application connects to this gateway to observe changes.
- API Gateway for Exposing Monitoring Data: If your monitoring application or controller needs to expose its collected metrics or specific status information to external systems or other microservices, it might itself become an API provider. For instance, your Go application might expose a custom `/status` endpoint that summarizes the health of all `Database` CRs. Such an internal API could then be managed by a more general-purpose API gateway for security, rate limiting, and routing. This is where a product like APIPark could be invaluable. As an open-source AI gateway and API management platform, APIPark can unify the management of various internal and external APIs. If your custom resources are driving critical services, exposing their aggregated health or performance data via an API endpoint and then managing that endpoint with APIPark allows for broader accessibility, security, and a unified view of your system's operational APIs. APIPark can help ensure that access to this monitoring data is controlled, logged, and optimized, turning your raw monitoring insights into consumable API products.
- API Gateway as a Monitored Custom Resource: In some advanced scenarios, an API gateway itself might be provisioned and managed as a custom resource within Kubernetes (e.g., an `APIGateway` CR). In such a case, your monitoring strategies would apply directly to monitoring instances of this `APIGateway` CR, ensuring its health and configuration are correct. This demonstrates a cyclical relationship where the gateway is both a tool for exposure and a component to be monitored.
Effectively, these three concepts—api as the interaction medium, OpenAPI for structured definition, and gateway for exposure and management—form a cohesive framework around how custom resources operate and how their monitoring integrates into the broader cloud-native ecosystem.
Best Practices for Monitoring Custom Resources
To ensure your custom resource monitoring solution is robust, efficient, and maintainable, adhere to these best practices:
- Define a Clear `status` Field in Your CRD: The `status` field is your primary interface for monitoring. Design it carefully with well-defined phases, conditions, and relevant data points that reflect the operational state of your custom resource and its managed components. Avoid putting operational data in the `spec`; the `spec` is the desired state, `status` is the observed state.
- Granular Metrics and Labels: When collecting Prometheus metrics, choose appropriate metric types (counters for increments, gauges for current values, histograms for distributions) and use labels judiciously. Labels like `namespace`, `name`, `phase`, and `controller_version` allow for powerful querying and aggregation in Prometheus and Grafana. Avoid high-cardinality labels unless absolutely necessary, as they can lead to performance issues.
- Robust Error Handling and Retries: Your monitoring controller must be resilient. Implement comprehensive error handling for API calls and processing logic. Use a `workqueue` with exponential backoff for retries to handle transient errors gracefully without overwhelming the API server.
- Idempotency: Controller logic should be idempotent. Applying the same desired state multiple times should always result in the same actual state without side effects. This is crucial because reconciliation loops can re-process items multiple times (e.g., due to resyncs or transient failures).
- Clean Up Metrics on Deletion: When a custom resource is deleted, ensure that any associated metrics (especially gauges) are cleaned up to prevent stale data and memory leaks in your monitoring application. The `OnDelete` handler is the place for this.
- Use Structured Logging: Adopt a structured logging library (like `klog/v2` or `zap`) and include key-value pairs (e.g., `cr_name=my-db`, `cr_namespace=prod`) for easy filtering, searching, and correlation in log aggregation systems. Control verbosity with logging levels.
- RBAC for Monitoring Applications: Your monitoring application needs appropriate Role-Based Access Control (RBAC) permissions to `get`, `list`, and `watch` your custom resources. Adhere to the principle of least privilege, granting only the necessary permissions.
- Comprehensive Testing:
  - Unit Tests: Test individual functions and logic within your controller.
  - Integration Tests: Test your controller against a local, in-memory Kubernetes API server (e.g., using `envtest` from `sigs.k8s.io/controller-runtime`) to ensure it correctly reacts to CRD events and updates statuses.
  - End-to-End Tests: Deploy your CRD, controller, and monitoring application in a test cluster and verify that metrics are collected and alerts trigger correctly.
- Documentation: Clearly document your custom resource's `status` fields, the metrics your monitoring application exposes, and any alerting rules. This is crucial for operators and developers who need to understand the health of your custom resources.
- Consider the Operator SDK or Controller-Runtime: For complex controllers, consider using frameworks like the Operator SDK or `controller-runtime`. These libraries provide higher-level abstractions and boilerplate for building robust Kubernetes controllers, including features for metrics, logging, and leader election, significantly reducing development effort.
By adhering to these best practices, you can build a highly effective and reliable monitoring system for your Go-based custom resource applications, transforming them from potential blind spots into fully observable components of your Kubernetes clusters.
Deployment and Operation: Running Your Go Monitoring Application in Kubernetes
Deploying your Go-based custom resource monitoring application within the same Kubernetes cluster it monitors is the most common and efficient approach. This ensures low latency access to the Kubernetes api and simplifies resource management.
Follow these steps in order:

1. Containerize Your Application: Create a Dockerfile to build a container image for your Go application. This typically involves a multi-stage build to produce a small, efficient image.

```dockerfile
# Dockerfile

# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 produces a static binary without glibc dependencies, ideal for alpine
ENV CGO_ENABLED=0
ENV GOOS=linux
RUN go build -ldflags="-s -w" -o /go-cr-monitor .

# Run stage
FROM alpine:latest
WORKDIR /root/
COPY --from=builder /go-cr-monitor .
# Expose the metrics port (if applicable)
EXPOSE 8080
CMD ["./go-cr-monitor"]
```

2. Service Account and RBAC: Your monitoring application needs permissions to list and watch your custom resources, and potentially to get other related Kubernetes objects (like Pods) if your monitoring logic depends on them. If it publishes Events via the event recorder shown earlier, it also needs create/patch on events.

```yaml
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cr-monitor-sa
  namespace: my-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cr-monitor-role
  namespace: my-system
rules:
  - apiGroups:
      - "database.example.com" # Your CRD's API group
    resources:
      - "databases"
      - "databases/status" # If you monitor status changes
    verbs: ["get", "list", "watch"]
  # If your monitor also needs to read standard K8s resources, add rules for them:
  - apiGroups: [""]
    resources: ["pods", "events"]
    verbs: ["get", "list", "watch"]
  # Publishing Kubernetes Events additionally requires:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cr-monitor-rb
  namespace: my-system
subjects:
  - kind: ServiceAccount
    name: cr-monitor-sa
    namespace: my-system
roleRef:
  kind: Role
  name: cr-monitor-role
  apiGroup: rbac.authorization.k8s.io
```

Apply these RBAC resources before deploying your application. The ServiceAccount will be automatically injected into your Pod.

3. Create a Kubernetes Deployment: Define a Deployment object to run your containerized monitoring application. Ensure it has appropriate resource requests and limits.

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cr-monitor-deployment
  namespace: my-system
  labels:
    app: cr-monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cr-monitor
  template:
    metadata:
      labels:
        app: cr-monitor
    spec:
      serviceAccountName: cr-monitor-sa
      containers:
        - name: cr-monitor
          image: your-repo/go-cr-monitor:latest # Replace with your image
          imagePullPolicy: Always
          args:
            # - "--kubeconfig=/etc/kubernetes/admin.conf" # Only when running outside the cluster
            - "--logtostderr=true"
            - "--v=2" # Adjust verbosity for klog
          ports:
            - containerPort: 8080 # For Prometheus metrics
              name: metrics
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
      # Optional: node affinity, tolerations, etc.
```

4. Expose Metrics with a Service and ServiceMonitor: If your application exposes Prometheus metrics, create a Service to make the metrics endpoint discoverable. Then use a ServiceMonitor (if you're using Prometheus Operator) or direct Prometheus configuration to scrape these metrics.

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: cr-monitor-metrics
  namespace: my-system
  labels:
    app: cr-monitor
spec:
  selector:
    app: cr-monitor
  ports:
    - name: metrics
      port: 8080
      targetPort: metrics # Refers to the name of the containerPort
---
# servicemonitor.yaml (requires Prometheus Operator)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: cr-monitor
  namespace: my-system
  labels:
    release: prometheus-stack # Match your Prometheus release label
spec:
  selector:
    matchLabels:
      app: cr-monitor
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s
  namespaceSelector:
    matchNames:
      - my-system
```

5. Monitor and Debug Your Monitor: Once deployed, verify that the monitoring application itself is healthy.

- Check Pod logs: Use `kubectl logs -f <cr-monitor-pod>` to see your application's output.
- Prometheus UI/Grafana: Verify that metrics are being scraped and displayed correctly.
- kubectl describe: Use `kubectl describe deployment cr-monitor-deployment` to check the status of your deployment.
- kubectl get events: Check for any Kubernetes events related to your monitor's Pods.
By following these deployment and operational steps, your custom resource monitoring application will run reliably within your Kubernetes environment, providing continuous observability for your extended resources.
Conclusion: Empowering Kubernetes with Custom Resource Observability
The journey through monitoring custom resources in Go reveals a profound truth about modern cloud-native systems: extensibility without observability is a recipe for operational opacity. Custom Resource Definitions unlock unparalleled power for tailoring Kubernetes to specific domain needs, but this power comes with the responsibility of ensuring these bespoke components are as transparent and manageable as their built-in counterparts.
Through the meticulous application of client-go's informer pattern, robust event handling, and strategic integration of metrics (via Prometheus), logging, and Kubernetes events, we can transform opaque custom resources into fully observable entities. By focusing on critical status fields, leveraging the descriptive power of OpenAPI schemas, and understanding the central role of the Kubernetes api as the ultimate gateway to all cluster resources, developers can build monitoring solutions that are not merely reactive but truly proactive.
Furthermore, integrating with established observability stacks and considering external monitoring perspectives ensures that insights from custom resources contribute to a holistic understanding of system health. And when it comes to managing the apis that might expose these monitoring insights, platforms like APIPark offer a robust solution for centralizing api governance, security, and analysis. The best practices outlined in this guide provide a roadmap for building resilient, efficient, and maintainable monitoring solutions, empowering operators and developers alike to navigate the complexities of their extended Kubernetes environments with confidence. In the continuous evolution of cloud-native infrastructure, the ability to see clearly into every corner of the system, including its most customized parts, remains an indispensable asset.
FAQ
- What are Custom Resources (CRs) and Custom Resource Definitions (CRDs) in Kubernetes? Custom Resource Definitions (CRDs) allow you to define your own resource types (like `Database`, `BackupJob`, or `Website`) that extend the Kubernetes API. A Custom Resource (CR) is an actual instance of one of these user-defined resource types, behaving like any other Kubernetes object (e.g., Pod, Deployment). They enable users to tailor Kubernetes to specific application domains and manage complex infrastructure components declaratively.
- Why is it important to monitor Custom Resources specifically? Monitoring CRs is crucial because they represent critical, domain-specific components within your cluster. Without monitoring, you lack visibility into their operational health, performance, and lifecycle. This can lead to difficulties in debugging, delayed detection of failures, challenges in capacity planning, and an inability to proactively address issues, ultimately impacting the stability and reliability of your entire application.
- What are the key Go `client-go` components used for monitoring Custom Resources? The core `client-go` components include:
  - `SharedInformerFactory`: Efficiently creates and manages informers for multiple resource types, sharing a single cache.
  - Informers: Watch the Kubernetes API for changes to resources, maintaining a local in-memory cache and triggering event handlers (`Add`, `Update`, `Delete`).
  - Listers: Provide fast, read-only access to objects stored in the informer's local cache without hitting the API server.
  - Workqueues: A robust, rate-limited queue for reliably processing events from informers, handling retries and deduplication.
  - Controllers: The application logic that consumes events from informers (often via a workqueue) and acts upon them, in our case, to collect monitoring data.
- How can OpenAPI and API gateway concepts relate to Custom Resource monitoring? OpenAPI plays a key role in defining the schema for your CRDs, ensuring strong validation and facilitating code generation for typed clients. It makes your custom resources discoverable and structured. An API gateway relates in several ways: the Kubernetes API server itself acts as the gateway to all resources. Additionally, if your monitoring solution or the custom resources themselves expose API endpoints (e.g., for metrics or status), an external API gateway (like APIPark) can manage, secure, and unify access to these endpoints, providing centralized control and observability over how your system's APIs, including those derived from CRD insights, are consumed.
- What are some best practices for building a robust Custom Resource monitoring solution? Best practices include:
  - Clearly defining the `status` field in your CRD for key operational insights.
  - Using granular Prometheus metrics with descriptive labels for effective data analysis.
  - Implementing robust error handling and idempotent logic in your controller.
  - Utilizing structured logging for detailed event trails and efficient debugging.
  - Cleaning up metrics when custom resources are deleted to prevent stale data.
  - Adhering to the principle of least privilege with RBAC for your monitoring application.
  - Conducting comprehensive testing (unit, integration, end-to-end) to ensure reliability.
  - Considering higher-level frameworks like `controller-runtime` for complex controllers.
🚀You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, the deployment completes and the success screen appears within 5 to 10 minutes. You can then log in to APIPark with your account.

Step 2: Call the OpenAI API.

