How to Read a Custom Resource Using Dynamic Client in Golang
This comprehensive guide delves into the intricate process of reading Custom Resources (CRs) in Kubernetes using the dynamic client in Golang. We will explore the fundamental concepts of Custom Resources, the nuances of client-go's dynamic client, and provide practical, detailed examples to empower developers to interact with their Kubernetes clusters programmatically. Furthermore, we will contextualize this process within the broader landscape of API management and the critical role played by solutions like an API gateway in orchestrating complex microservices architectures.
Introduction: Extending Kubernetes with Custom Resources and the Power of Dynamic Interaction
Kubernetes, at its core, is a platform designed to manage containerized workloads and services, providing a robust and extensible control plane. Its power lies not just in its built-in primitives like Pods, Deployments, and Services, but also in its ability to be extended. This extensibility is primarily achieved through Custom Resources (CRs), which allow users to define their own API objects, effectively teaching Kubernetes new types of resources to manage. When you define a Custom Resource, you're essentially adding new verbs and nouns to the Kubernetes API, making the platform aware of application-specific constructs.
While interacting with built-in Kubernetes resources is typically handled through strongly typed clients (known as client-sets), Custom Resources present a unique challenge. Their schemas are user-defined and can evolve, making compile-time strong typing impractical or impossible in many scenarios. This is where the dynamic client in Golang's client-go library shines. It provides a flexible, generic way to interact with any Kubernetes API resource, including Custom Resources, without needing prior knowledge of their Go struct definitions. This capability is invaluable for building generic controllers, operators, or command-line tools that need to work with arbitrary CRDs.
Understanding how to leverage the dynamic client to read Custom Resources is a fundamental skill for anyone building advanced Kubernetes tooling or operators. It unlocks the full potential of Kubernetes as an application platform, allowing developers to define sophisticated custom logic and manage it seamlessly within the cluster's control plane. As organizations increasingly adopt microservices and distributed systems, the ability to programmatically manage these custom configurations and services becomes paramount. These services often expose their own APIs, and a robust API gateway becomes essential for their unified management, security, and exposure.
This article will guide you through the journey of defining a Custom Resource, setting up your Golang environment, and most importantly, demonstrating how to use the dynamic client to read instances of your Custom Resource. We will cover the theoretical underpinnings, practical implementation details, and best practices to ensure your Go applications can reliably and efficiently interact with the Kubernetes API at a deeper level.
Chapter 1: The Kubernetes Control Plane and Custom Resources – Extending the Core API
Before diving into the code, it's crucial to grasp the architectural context within which Custom Resources operate. Kubernetes is built around a declarative API where users define their desired state, and the control plane works tirelessly to achieve and maintain that state.
1.1 Kubernetes Architecture Overview: The API Server as the Central Hub
The heart of the Kubernetes control plane is the API Server. Every interaction with a Kubernetes cluster, whether from kubectl, a controller, or a custom application, goes through the API Server. It's the front-end for the control plane, exposing a RESTful API that allows clients to query and manipulate the state of objects in the cluster. Behind the API Server, etcd acts as the persistent, consistent, and highly available key-value store where all cluster data, including the state of every Kubernetes object, is stored. Controllers continuously watch the API Server for changes, react to them, and reconcile the cluster's actual state with the desired state specified in the API objects.
This API-driven architecture is what makes Kubernetes so powerful and extensible. Everything in Kubernetes is an API object, whether it's a Pod, a Service, a Deployment, or indeed, a Custom Resource.
1.2 What are Custom Resources (CRs)?
Custom Resources are extensions of the Kubernetes API that allow you to define your own object kinds. They enable you to integrate application-specific domain knowledge directly into Kubernetes, making it the single source of truth for both generic infrastructure and application-specific configurations.
Imagine you're developing a custom database service. Instead of managing its configuration through external files or a separate configuration system, you could define a DatabaseInstance Custom Resource. This resource would specify parameters like database type, version, storage size, and backup policy. Kubernetes would then understand DatabaseInstance as a first-class object, allowing you to manage it using standard kubectl commands, define RBAC policies for it, and even build controllers that react to changes in DatabaseInstance objects.
1.3 CustomResourceDefinition (CRD) vs. Custom Resource (CR)
It's important to distinguish between a CustomResourceDefinition (CRD) and a Custom Resource (CR):
- CustomResourceDefinition (CRD): A CRD is a Kubernetes API object that defines a new custom resource type. When you create a CRD, you are essentially telling the Kubernetes API Server: "Hey, I'm introducing a new kind of object with this specific schema and behavior." The CRD specifies metadata about the new resource, such as its name, group, version, and the schema for its `spec` and `status` fields.
- Custom Resource (CR): Once a CRD is created, you can then create actual instances of that custom resource type. These instances are the Custom Resources themselves. For example, after defining a `DatabaseInstance` CRD, you could create multiple `DatabaseInstance` CRs, each representing a specific database deployment with its unique configuration.
1.4 Why are Custom Resources Needed? Benefits and Use Cases
Custom Resources address a critical need for extending Kubernetes beyond its built-in capabilities, offering several significant benefits:
- Unified Management: They allow you to manage application-specific configurations and components directly within the Kubernetes control plane, using the same declarative approach and tooling (`kubectl`, YAML) as native resources. This simplifies operations and reduces cognitive load.
- Increased Automation: By defining custom resources, you can build Kubernetes Operators that watch for changes to these resources and take specific actions. For instance, an operator for our `DatabaseInstance` CR could provision cloud databases, manage upgrades, and handle backups automatically.
- Application-Specific APIs: CRs provide a clean, declarative API for your applications. Instead of imperatively calling external services, users interact with your application by creating or updating CRs, and your operator translates these declarations into real-world actions.
- Community and Ecosystem: Many popular open-source projects and cloud-native tools leverage CRDs to extend Kubernetes. Projects like Prometheus, Istio, Cert-Manager, and countless others rely heavily on CRDs to define their configurations and domain-specific objects.
- Separation of Concerns: CRDs allow application developers to define their desired state independently of how that state is achieved. Infrastructure teams can provide the underlying platform, and application teams can define their services using CRs, fostering a clear separation of responsibilities.
In essence, Custom Resources transform Kubernetes from a container orchestrator into a powerful application platform, capable of understanding and managing virtually any workload or service you can define. The ability to interact with these custom resources programmatically, particularly using flexible tools like the dynamic client, becomes a cornerstone for building robust and adaptable cloud-native systems.
Chapter 2: Introduction to client-go and its Clients – Navigating the Kubernetes API Programmatically
To interact with the Kubernetes API from Golang, the official client-go library is the indispensable tool. It provides a set of clients and utilities that abstract away the complexities of HTTP requests, authentication, and API versioning, allowing developers to focus on application logic. client-go offers several types of clients, each suited for different use cases. Understanding their distinctions is key to choosing the right tool for the job, especially when dealing with Custom Resources.
2.1 Overview of client-go
client-go is the Golang client library for Kubernetes. It is used by kubectl itself, as well as by controllers, operators, and various Kubernetes tools to communicate with the Kubernetes API Server. It handles:
- Authentication: Using `kubeconfig`, service accounts (in-cluster), or other methods.
- Serialization/Deserialization: Converting Go structs to and from JSON/YAML for API requests and responses.
- RESTful Communication: Managing HTTP requests, retries, and error handling.
- Discovery: Automatically figuring out API versions and available resources.
2.2 The Three Main Client Types in client-go
client-go primarily provides three levels of abstraction for interacting with the Kubernetes API:
- Clientset (Typed Client):
  - Description: This is the most common and highest-level client. It's generated from the Kubernetes API definitions and provides strongly typed Go structs for all built-in Kubernetes resources (Pods, Deployments, Services, etc.).
  - Pros: Type-safe, easy to use, excellent IDE support (autocompletion, type checking), less prone to runtime errors due to typos.
  - Cons: Requires prior knowledge of the resource's Go struct. Cannot be used directly with Custom Resources unless a corresponding Go struct and clientset are generated (e.g., using `controller-gen`).
  - Use Cases: Interacting with native Kubernetes resources where strong typing is desired and the schema is well-defined. Building applications that specifically target a known set of Kubernetes types.
- Dynamic Client:
  - Description: This client operates on `unstructured.Unstructured` objects. It doesn't require specific Go types for the resources it interacts with. Instead, it works with generic map-like structures, allowing access to resource fields via string keys.
  - Pros: Highly flexible, can interact with any API resource (built-in or custom) without compile-time knowledge of its schema. Ideal for generic tools, controllers, or operators that need to work with various or evolving Custom Resources.
  - Cons: Not type-safe. Accessing fields requires string keys, which are prone to typos and runtime errors. Requires manual type assertions and error checking when parsing data.
  - Use Cases: The primary focus of this article. Interacting with Custom Resources where you don't want to (or cannot) generate strongly typed clients. Building generic tools that inspect or manipulate arbitrary Kubernetes resources.
- REST Client:
  - Description: This is the lowest-level client provided by `client-go`. It offers a more direct way to send HTTP requests to the Kubernetes API Server. It operates on `bytes.Buffer` and `io.Reader` for request/response bodies, requiring manual serialization/deserialization.
  - Pros: Most flexible, fine-grained control over HTTP requests. Can be used to interact with any Kubernetes API endpoint, including non-standard ones.
  - Cons: Most complex to use, requires manual handling of serialization, deserialization, API versioning, and error parsing. Least convenient for typical resource operations.
  - Use Cases: When you need very specific, low-level control over API interactions, or when dealing with endpoints not well-covered by higher-level clients. Often used as a building block for more specialized clients.
2.3 Why the Dynamic Client is Ideal for Custom Resources
For Custom Resources, the dynamic client strikes a perfect balance between flexibility and ease of use. While you could use a REST client, it would be overly complex for standard CRUD operations. A clientset, while type-safe, would require generating Go types and a dedicated clientset for each CRD, which can be cumbersome, especially if you're building a generic tool or dealing with many CRDs that might change.
The dynamic client allows you to:
- Discover resources at runtime: You don't need to hardcode specific Go types for your CRs.
- Operate generically: Write code that can fetch, update, or delete any CR, regardless of its internal structure, as long as you know its GroupVersionResource (GVR).
- Adapt to schema changes: If the CRD schema evolves, your dynamic client code often doesn't need to change, as it just operates on generic `map[string]interface{}` structures. You only need to adapt the logic that parses specific fields.
This makes the dynamic client an indispensable tool for developing robust and adaptable applications that interact with the Kubernetes API, particularly when Custom Resources are involved. It embodies the essence of an extensible system, allowing for flexible interaction with diverse and evolving APIs.
| Feature | Clientset (Typed Client) | Dynamic Client | REST Client |
|---|---|---|---|
| API Abstraction | High-level (Go structs for resources) | Mid-level (generic `unstructured.Unstructured` objects) | Low-level (HTTP requests, raw bytes) |
| Type Safety | High (compile-time checking) | Low (runtime string-based access, prone to typos) | None (manual serialization/deserialization) |
| Schema Knowledge | Requires compile-time knowledge of resource Go structs | No compile-time schema knowledge required | No compile-time schema knowledge required (manual handling) |
| Use Case | Built-in resources, generated CRD clients | Custom Resources, generic tools, operators | Highly custom API interactions, specific endpoints |
| Complexity | Low (easy to use) | Medium (parsing `Unstructured` requires care) | High (manual everything) |
| Performance | Good | Good | Very good (most control) |
| Flexibility | Low (tied to specific types) | High (interacts with any resource by GVR) | Very High (full HTTP control) |
Chapter 3: Setting Up Your Golang Environment for Kubernetes Interaction
Before we can start writing code to interact with Custom Resources, we need to set up a proper Golang development environment and ensure connectivity to a Kubernetes cluster. This chapter covers the necessary prerequisites and the initial steps to configure your project.
3.1 Prerequisites
To follow along with the code examples, ensure you have the following installed:
- Go Language: Version 1.16 or later. You can download it from the official Go website.
- `kubectl`: The Kubernetes command-line tool. This is essential for interacting with your cluster, applying CRDs, and verifying resources. Follow the official Kubernetes documentation for installation.
- A Kubernetes Cluster:
  - Local Cluster: For development, a local cluster like `kind` (Kubernetes in Docker), `minikube`, or Docker Desktop (with Kubernetes enabled) is ideal. They are easy to set up and provide a full-fledged Kubernetes environment.
  - Remote Cluster: If you have access to a remote cluster (e.g., GKE, EKS, AKS), ensure your `kubeconfig` is properly configured to connect to it.
3.2 Initializing Your Golang Project
First, create a new directory for your project and initialize a Go module:
```shell
mkdir dynamic-cr-reader
cd dynamic-cr-reader
go mod init dynamic-cr-reader
```
3.3 Installing client-go
Next, you need to install the client-go library. It's recommended to pin the version to match your Kubernetes cluster's API version or a compatible one. For example, if your cluster is Kubernetes 1.25, you might use the v0.25.x version of client-go. For demonstration purposes, we'll use a relatively recent version.
```shell
go get k8s.io/client-go@v0.29.0 # Or a version compatible with your cluster
```
This command will download the client-go library and its dependencies, adding them to your go.mod file.
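For reference, the resulting `go.mod` will look roughly like the following. The module name comes from our `go mod init`; the exact versions and the set of indirect dependencies will vary with the `client-go` release you pinned, so treat this as an illustrative shape rather than exact content:

```
module dynamic-cr-reader

go 1.21

require (
	k8s.io/apimachinery v0.29.0
	k8s.io/client-go v0.29.0
)
```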
3.4 Configuring Kubernetes Client for In-Cluster vs. Out-of-Cluster Execution
When your Go application needs to interact with the Kubernetes API Server, it needs to know how to connect and authenticate. client-go supports two primary configuration methods:
- Out-of-Cluster (Local Development):
- This is typically used when your application runs outside of a Kubernetes cluster (e.g., on your local machine) and needs to connect to a remote or local cluster.
- It uses your `kubeconfig` file (usually located at `~/.kube/config`) for connection details and credentials.
- This is what we will primarily use in our examples.
- In-Cluster (Running Inside Kubernetes):
- When your application runs inside a Kubernetes cluster (e.g., as a Pod), it can leverage the service account credentials automatically mounted into its container.
- Kubernetes injects environment variables (`KUBERNETES_SERVICE_HOST`, `KUBERNETES_SERVICE_PORT`) and service account tokens into every Pod, allowing applications to discover and authenticate with the API Server without a `kubeconfig` file.
- This is the standard approach for controllers and operators deployed within the cluster.
To make our examples flexible, we'll implement a function that can determine whether to use in-cluster config or load from kubeconfig based on the environment.
Create a file named client.go (or similar) and add the following helper function:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// GetKubeConfig returns a Kubernetes rest.Config suitable for both in-cluster and out-of-cluster execution.
func GetKubeConfig() (*rest.Config, error) {
	// Try to get in-cluster config first.
	config, err := rest.InClusterConfig()
	if err == nil {
		fmt.Println("Using in-cluster config.")
		return config, nil
	}

	// If in-cluster config fails, fall back to a kubeconfig file.
	// The KUBECONFIG environment variable takes precedence over the
	// default path, matching kubectl's behavior.
	kubeconfigPath := os.Getenv("KUBECONFIG")
	if kubeconfigPath == "" {
		home := homedir.HomeDir()
		if home == "" {
			return nil, fmt.Errorf("KUBECONFIG environment variable not set and home directory not found")
		}
		kubeconfigPath = filepath.Join(home, ".kube", "config")
	}

	// Check that the kubeconfig file exists.
	if _, err := os.Stat(kubeconfigPath); os.IsNotExist(err) {
		return nil, fmt.Errorf("kubeconfig file not found at %s: %w", kubeconfigPath, err)
	}

	config, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, fmt.Errorf("failed to build kubeconfig from flags: %w", err)
	}
	fmt.Printf("Using kubeconfig from %s.\n", kubeconfigPath)
	return config, nil
}

// GetDynamicClient creates and returns a dynamic client from the rest.Config.
func GetDynamicClient() (dynamic.Interface, error) {
	config, err := GetKubeConfig()
	if err != nil {
		return nil, fmt.Errorf("failed to get Kubernetes config: %w", err)
	}
	dynamicClient, err := dynamic.NewForConfig(config)
	if err != nil {
		return nil, fmt.Errorf("failed to create dynamic client: %w", err)
	}
	return dynamicClient, nil
}

// GetClientset creates and returns a clientset from the rest.Config.
// Useful for interacting with built-in resources for context or initial setup.
func GetClientset() (kubernetes.Interface, error) {
	config, err := GetKubeConfig()
	if err != nil {
		return nil, fmt.Errorf("failed to get Kubernetes config: %w", err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, fmt.Errorf("failed to create clientset: %w", err)
	}
	return clientset, nil
}
```
This GetKubeConfig function will first attempt to load an in-cluster configuration. If that fails (which it will when running locally), it then looks for your kubeconfig file in the standard location (~/.kube/config) or from the KUBECONFIG environment variable. This robust setup ensures your application can run both locally during development and later as a deployed component within a Kubernetes cluster. The GetDynamicClient function then uses this configuration to instantiate the dynamic.Interface, which is what we'll use to interact with our Custom Resources.
With the environment set up and the client configuration ready, we can now proceed to define our Custom Resource and then write the Golang code to read it.
Chapter 4: Defining a Custom Resource Definition (CRD): A Practical Example
To effectively demonstrate reading a Custom Resource, we first need a Custom Resource to read! In this chapter, we'll define a simple MyService Custom Resource Definition (CRD) and apply it to our Kubernetes cluster. This CRD will serve as the target for our dynamic client operations.
4.1 Designing Our Sample Custom Resource: MyService
Let's imagine we want to manage simple application services within Kubernetes using our own custom API. Our MyService resource might define fields like the service's name, the container image to use, the number of replicas, and perhaps some configuration parameters.
We'll define a CRD for a MyService object under the stable.example.com API group and v1 version.
4.2 Creating the CRD YAML
Create a file named myservice-crd.yaml with the following content:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name must match the plural name of the resource and be in the
  # format <plural>.<group>
  name: myservices.stable.example.com
spec:
  # Group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # Plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: myservices
    # Singular name to be used as an alias on the CLI and for display
    singular: myservice
    # Kind is normally CamelCased and is the object name in the API
    kind: MyService
    # Short names for the resource (optional)
    shortNames:
      - ms
  # Scope indicates whether this resource is namespace-scoped or cluster-scoped
  scope: Namespaced
  versions:
    - name: v1
      # Served designates that this is an API version that clients can access.
      served: true
      # Storage designates that this is the version to store in etcd.
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              properties:
                name:
                  type: string
                  description: The name of the service.
                image:
                  type: string
                  description: The container image for the service.
                replicas:
                  type: integer
                  minimum: 1
                  description: The desired number of replicas.
                config:
                  type: object
                  additionalProperties:
                    type: string
                  description: Additional key-value configuration for the service.
              required:
                - name
                - image
                - replicas
            status:
              type: object
              properties:
                availableReplicas:
                  type: integer
                  description: The number of available replicas.
                ready:
                  type: boolean
                  description: Indicates if the service is ready.
```
Let's break down key parts of this CRD:
- `apiVersion: apiextensions.k8s.io/v1`: This specifies the API version for the CRD itself, not the custom resource it defines.
- `kind: CustomResourceDefinition`: This identifies the object as a CRD.
- `metadata.name: myservices.stable.example.com`: This is a crucial field. It must be in the format `<plural-name>.<group-name>`.
- `spec.group: stable.example.com`: This defines the API group for our custom resource. When you interact with it, the URL path will look like `/apis/stable.example.com/...`.
- `spec.names`: This block provides various names for our resource:
  - `plural: myservices`: Used in `kubectl get myservices`.
  - `singular: myservice`: The singular form.
  - `kind: MyService`: The CamelCase name for the Go struct and the API kind.
  - `shortNames: [ms]`: An optional shorter alias for `kubectl`.
- `spec.scope: Namespaced`: This indicates that `MyService` resources will exist within specific Kubernetes namespaces, just like Pods or Deployments. The alternative is `Cluster` scope, which means the resource exists once per cluster (like a StorageClass).
- `spec.versions`: A list of API versions for our custom resource.
  - `name: v1`: Our initial version.
  - `served: true`: Means this version is exposed via the API.
  - `storage: true`: Means objects of this version are stored in etcd. You should only have one storage version.
- `spec.versions[0].schema.openAPIV3Schema`: This is where you define the validation schema for your custom resource using the OpenAPI v3 specification. It ensures that any `MyService` object created conforms to these rules. We define `spec` with `name`, `image`, `replicas`, and `config` fields. We also include a `status` block for potential future use by a controller.
4.3 Applying the CRD to Your Cluster
Once you've created myservice-crd.yaml, apply it to your Kubernetes cluster using kubectl:
```shell
kubectl apply -f myservice-crd.yaml
```
You should see output similar to: `customresourcedefinition.apiextensions.k8s.io/myservices.stable.example.com created`.
You can verify that the CRD has been registered by listing all CRDs:
```shell
kubectl get crds | grep myservice
```
Output:
```
myservices.stable.example.com   2023-10-27T10:00:00Z
```
This confirms that Kubernetes now understands our new MyService resource type. The Kubernetes API has been successfully extended.
4.4 Creating a Custom Resource Instance
Now that the CRD is in place, we can create an actual instance of MyService. Create a file named my-nginx-service.yaml:
```yaml
apiVersion: stable.example.com/v1
kind: MyService
metadata:
  name: my-nginx-service
  namespace: default # Assuming we're deploying to the default namespace
spec:
  name: nginx-web
  image: nginx:latest
  replicas: 3
  config:
    ENV: production
    PORT: "80"
```
Notice that `apiVersion: stable.example.com/v1` and `kind: MyService` directly correspond to the group, version, and kind defined in our CRD.
Apply this Custom Resource instance:
```shell
kubectl apply -f my-nginx-service.yaml
```
Output: `myservice.stable.example.com/my-nginx-service created`
You can verify its creation:
```shell
kubectl get myservice my-nginx-service
```
Output:
```
NAME               AGE
my-nginx-service   10s
```
And to see the full YAML:
```shell
kubectl get myservice my-nginx-service -o yaml
```
Now we have a concrete Custom Resource in our cluster that our Golang dynamic client can read. The stage is set for the main event: interacting with this resource programmatically.
Chapter 5: Interacting with Custom Resources using Dynamic Client
This is the core of our tutorial. We'll write Golang code to initialize the dynamic client, identify our Custom Resource using its GroupVersionResource (GVR), and then perform read operations (Get and List).
5.1 Understanding GroupVersionResource (GVR)
Unlike strongly typed clients that use Go structs, the dynamic client identifies resources using a schema.GroupVersionResource (GVR). This struct uniquely identifies a collection of resources within the Kubernetes API.
For our MyService Custom Resource:
- Group: `stable.example.com` (from `spec.group` in the CRD)
- Version: `v1` (from `spec.versions[0].name` in the CRD)
- Resource: `myservices` (from `spec.names.plural` in the CRD)
So the GVR for our `MyService` is Group=`stable.example.com`, Version=`v1`, Resource=`myservices`.
5.2 Initializing the Dynamic Client
We already have a helper function `GetDynamicClient()` in `client.go` that provides a `dynamic.Interface`. This interface is the entry point for all dynamic client operations.
5.3 Reading a Single Custom Resource Using Get
Let's write a program that fetches our my-nginx-service Custom Resource.
Create a file named main.go and add the following code:
```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// 1. Get the dynamic client
	dynamicClient, err := GetDynamicClient()
	if err != nil {
		log.Fatalf("Error getting dynamic client: %v", err)
	}

	// 2. Define the GroupVersionResource (GVR) for MyService
	myServiceGVR := schema.GroupVersionResource{
		Group:    "stable.example.com",
		Version:  "v1",
		Resource: "myservices",
	}

	// 3. Specify the namespace and name of the custom resource to retrieve
	namespace := "default"
	resourceName := "my-nginx-service"

	fmt.Printf("\nAttempting to read MyService '%s' in namespace '%s'...\n", resourceName, namespace)

	// 4. Retrieve the Custom Resource using the dynamic client
	unstructuredMyService, err := dynamicClient.Resource(myServiceGVR).Namespace(namespace).Get(context.TODO(), resourceName, metav1.GetOptions{})
	if err != nil {
		log.Fatalf("Error getting MyService '%s': %v", resourceName, err)
	}

	fmt.Printf("Successfully retrieved MyService: %s/%s\n", unstructuredMyService.GetNamespace(), unstructuredMyService.GetName())

	// 5. Parse and access data from the unstructured object.
	// The retrieved object is of type *unstructured.Unstructured, a wrapper
	// around map[string]interface{}. We navigate that map safely using the
	// package-level Nested* helpers from the unstructured package.

	// Accessing metadata via the accessor methods on Unstructured
	fmt.Printf("  API Version: %s\n", unstructuredMyService.GetAPIVersion())
	fmt.Printf("  Kind: %s\n", unstructuredMyService.GetKind())
	fmt.Printf("  UID: %s\n", unstructuredMyService.GetUID())
	fmt.Printf("  Creation Timestamp: %s\n", unstructuredMyService.GetCreationTimestamp())

	// Accessing spec fields: the Nested* helpers take the underlying map
	// (the .Object field) plus a field path.
	_, found, err := unstructured.NestedMap(unstructuredMyService.Object, "spec")
	if err != nil {
		log.Fatalf("Error accessing spec field: %v", err)
	}
	if !found {
		log.Fatalf("Spec field not found in MyService")
	}

	name, found, err := unstructured.NestedString(unstructuredMyService.Object, "spec", "name")
	if err != nil {
		log.Fatalf("Error accessing spec.name: %v", err)
	}
	if found {
		fmt.Printf("  Spec.Name: %s\n", name)
	}

	image, found, err := unstructured.NestedString(unstructuredMyService.Object, "spec", "image")
	if err != nil {
		log.Fatalf("Error accessing spec.image: %v", err)
	}
	if found {
		fmt.Printf("  Spec.Image: %s\n", image)
	}

	replicas, found, err := unstructured.NestedInt64(unstructuredMyService.Object, "spec", "replicas")
	if err != nil {
		log.Fatalf("Error accessing spec.replicas: %v", err)
	}
	if found {
		fmt.Printf("  Spec.Replicas: %d\n", replicas)
	}

	config, found, err := unstructured.NestedStringMap(unstructuredMyService.Object, "spec", "config")
	if err != nil {
		log.Fatalf("Error accessing spec.config: %v", err)
	}
	if found {
		fmt.Printf("  Spec.Config:\n")
		for k, v := range config {
			fmt.Printf("    %s: %s\n", k, v)
		}
	}

	fmt.Println("\nSuccessfully read and parsed MyService.")
}
```
Now, run this program:
```shell
go run .
```
You should see output similar to this, detailing the retrieved MyService and its fields:
```
Using kubeconfig from /Users/youruser/.kube/config.

Attempting to read MyService 'my-nginx-service' in namespace 'default'...
Successfully retrieved MyService: default/my-nginx-service
  API Version: stable.example.com/v1
  Kind: MyService
  UID: a1b2c3d4-e5f6-7890-1234-567890abcdef
  Creation Timestamp: 2023-10-27 10:30:00 +0000 UTC
  Spec.Name: nginx-web
  Spec.Image: nginx:latest
  Spec.Replicas: 3
  Spec.Config:
    ENV: production
    PORT: 80

Successfully read and parsed MyService.
```
Explanation of the Get operation:
- `dynamicClient.Resource(myServiceGVR)`: This returns a `dynamic.ResourceInterface` for the specified GVR. This interface allows you to perform operations on resources of that type.
- `.Namespace(namespace)`: Since `MyService` is a namespaced resource, we specify the namespace. If it were a cluster-scoped resource, you would omit this call.
- `.Get(context.TODO(), resourceName, metav1.GetOptions{})`: This performs the actual GET request to the Kubernetes API Server.
  - `context.TODO()`: A placeholder context. In real-world applications, you'd use a context with a timeout or cancellation.
  - `resourceName`: The `metadata.name` of the Custom Resource instance.
  - `metav1.GetOptions{}`: Options for the GET request, often left empty for a simple retrieve.
- `*unstructured.Unstructured`: The result of a dynamic client `Get` operation is an `*unstructured.Unstructured` object. This struct is a wrapper around `map[string]interface{}`, whose raw data is exposed via its `Object` field.
- `NestedMap`, `NestedString`, `NestedInt64`, `NestedStringMap`: These are crucial package-level helper functions from the `unstructured` package. They safely traverse the underlying map structure, returning the value, a boolean indicating whether the field was found, and an error if the path is invalid or the type assertion fails. Always check `found` and `err` to ensure robust parsing.
5.4 Listing Custom Resources Using List
Often, you'll need to retrieve multiple instances of a Custom Resource. The dynamic client's List method is designed for this.
Let's modify main.go to list all MyService resources in a given namespace. We'll add another MyService instance first to demonstrate listing multiple.
Create my-apache-service.yaml:
apiVersion: stable.example.com/v1
kind: MyService
metadata:
  name: my-apache-service
  namespace: default
spec:
  name: apache-web
  image: httpd:latest
  replicas: 2
  config:
    LOG_LEVEL: info
Apply it:
kubectl apply -f my-apache-service.yaml
Now, modify main.go to list all MyService resources. (If you'd rather keep the Get example intact, save this listing in a separate directory — a single package cannot contain two main functions.)
package main
import (
"context"
"fmt"
"log"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
)
func main() {
dynamicClient, err := GetDynamicClient()
if err != nil {
log.Fatalf("Error getting dynamic client: %v", err)
}
myServiceGVR := schema.GroupVersionResource{
Group: "stable.example.com",
Version: "v1",
Resource: "myservices",
}
namespace := "default"
fmt.Printf("\nAttempting to list MyServices in namespace '%s'...\n", namespace)
// List Custom Resources using the dynamic client.
// We use metav1.ListOptions{} for a simple unfiltered list.
unstructuredList, err := dynamicClient.Resource(myServiceGVR).Namespace(namespace).List(context.TODO(), metav1.ListOptions{})
if err != nil {
log.Fatalf("Error listing MyServices: %v", err)
}
if len(unstructuredList.Items) == 0 {
fmt.Println("No MyServices found in the namespace.")
return
}
fmt.Printf("Found %d MyServices:\n", len(unstructuredList.Items))
// Iterate over the list of unstructured objects
for i, item := range unstructuredList.Items {
fmt.Printf("--- MyService %d ---\n", i+1)
fmt.Printf(" Name: %s\n", item.GetName())
fmt.Printf(" Namespace: %s\n", item.GetNamespace())
// Access spec fields with the package-level unstructured.Nested* helpers,
// just as with Get; they take the underlying map (item.Object) and a path.
name, found, err := unstructured.NestedString(item.Object, "spec", "name")
if err != nil {
log.Printf("Error accessing spec.name for %s: %v", item.GetName(), err)
} else if found {
fmt.Printf(" Spec.Name: %s\n", name)
}
image, found, err := unstructured.NestedString(item.Object, "spec", "image")
if err != nil {
log.Printf("Error accessing spec.image for %s: %v", item.GetName(), err)
} else if found {
fmt.Printf(" Spec.Image: %s\n", image)
}
replicas, found, err := unstructured.NestedInt64(item.Object, "spec", "replicas")
if err != nil {
log.Printf("Error accessing spec.replicas for %s: %v", item.GetName(), err)
} else if found {
fmt.Printf(" Spec.Replicas: %d\n", replicas)
}
config, found, err := unstructured.NestedStringMap(item.Object, "spec", "config")
if err != nil {
log.Printf("Error accessing spec.config for %s: %v", item.GetName(), err)
} else if found {
fmt.Printf(" Spec.Config:\n")
for k, v := range config {
fmt.Printf(" %s: %s\n", k, v)
}
}
}
fmt.Println("\nSuccessfully listed and parsed MyServices.")
}
Run the program (either after replacing the previous main.go, or from the separate directory where you saved this listing):
go run .
Expected output:
Using kubeconfig from /Users/youruser/.kube/config.
Attempting to list MyServices in namespace 'default'...
Found 2 MyServices:
--- MyService 1 ---
Name: my-nginx-service
Namespace: default
Spec.Name: nginx-web
Spec.Image: nginx:latest
Spec.Replicas: 3
Spec.Config:
ENV: production
PORT: 80
--- MyService 2 ---
Name: my-apache-service
Namespace: default
Spec.Name: apache-web
Spec.Image: httpd:latest
Spec.Replicas: 2
Spec.Config:
LOG_LEVEL: info
Successfully listed and parsed MyServices.
Explanation of the List operation:
- `dynamicClient.Resource(myServiceGVR).Namespace(namespace).List(...)`: Similar to `Get`, but instead of a resource name it takes `metav1.ListOptions`.
- `metav1.ListOptions{}`: This struct allows you to filter the list of resources. You can specify `LabelSelector`, `FieldSelector`, `Limit`, `Continue` (for pagination), and more. For this example, we're doing an unfiltered list.
- `*unstructured.UnstructuredList`: The result is an `*unstructured.UnstructuredList`, which contains a slice of `unstructured.Unstructured` objects in its `Items` field.
- Iteration and Parsing: You then iterate through `unstructuredList.Items`, and each `item` can be processed in the same way as a single `*unstructured.Unstructured` object retrieved by `Get`.
5.5 Updating and Deleting (Brief Overview)
While this article focuses on reading, it's worth briefly mentioning that the dynamic.ResourceInterface also provides methods for Create, Update, Delete, and DeleteCollection.
- `Create`: Takes an `*unstructured.Unstructured` object representing the new resource and `metav1.CreateOptions{}`.
- `Update`: Takes an `*unstructured.Unstructured` object (which you might have modified after a `Get` operation) and `metav1.UpdateOptions{}`.
- `Delete`: Takes the `resourceName` and `metav1.DeleteOptions{}`.
These operations would follow a similar pattern: define GVR, select namespace (if applicable), and call the respective method. The key is always to work with *unstructured.Unstructured objects.
5.6 Real-World Considerations: Namespaces, Field Selectors, and Label Selectors
In production environments, you'll often need more granular control over which resources you retrieve:
- Namespaces: As shown, the `.Namespace(name)` method restricts operations to a specific namespace. For cluster-scoped resources, you'd call `.Resource(gvr)` directly without `.Namespace()`.
- Label Selectors: Use `metav1.ListOptions{LabelSelector: "app=nginx,env!=dev"}` to filter resources based on their labels. This is extremely powerful for selecting subsets of your applications.
- Field Selectors: Use `metav1.ListOptions{FieldSelector: "metadata.name=my-nginx-service"}` to filter based on specific fields. Field selectors are generally less flexible than label selectors; for Custom Resources, by default only `metadata.name` and `metadata.namespace` are supported.
- Context for Cancellation and Timeouts: Always use a `context.Context` with timeouts or cancellation signals in real applications to prevent indefinite waits and manage resource lifecycle.
By mastering these techniques, you gain full programmatic control over Custom Resources in your Kubernetes cluster, laying the groundwork for building sophisticated operators and management tools.
Chapter 6: Advanced Scenarios and Best Practices for Dynamic Client Usage
Beyond basic CRUD operations, interacting with Custom Resources using a dynamic client in Golang involves several advanced considerations and best practices to ensure robustness, performance, and security.
6.1 Handling Different CRD Versions
CRDs, like built-in Kubernetes resources, can evolve over time, introducing new versions (e.g., v1alpha1, v1beta1, v1). Your dynamic client code needs to be aware of which version it's targeting.
- Specify the correct GVR: When defining `schema.GroupVersionResource`, always use the specific `Version` you intend to interact with. For example, `stable.example.com/v1` for our `MyService`.
- Migration: If you update a CRD with a new `storage` version, Kubernetes handles data migration. Your client code might need to adapt its parsing logic if fields change or are removed across versions. Operators often use conversion webhooks to automatically convert objects between different versions.
- API Server Discovery: For truly generic tools, you might need to dynamically discover available API groups and versions using `discovery.DiscoveryInterface` to figure out the correct GVRs at runtime, rather than hardcoding them. This is more complex but makes your tool highly adaptable.
6.2 Robust Error Handling and Type Assertions
The unstructured.Unstructured object, while flexible, necessitates careful error handling due to its lack of compile-time type safety.
- Always check `found` and `err`: As demonstrated, helpers like `NestedString` return a `found` boolean and an `error`. Always check both. If `found` is `false`, the field doesn't exist at that path. If `err` is non-nil, there was a problem (e.g., a type mismatch).
- Default Values: If a field might be missing, provide sensible default values in your code after checking `found`.
- Logging: Use structured logging (e.g., `log.Printf` or a more advanced logging library) to capture details about parsing errors, including the resource name and the problematic field path.
- DeepCopy: When you retrieve an `Unstructured` object and intend to modify it for an `Update` operation, it's crucial to create a `DeepCopy()` first. This prevents accidental modification of the original object returned by the API server, which could lead to unexpected behavior if not handled correctly.
Example of robust parsing:
val, found, err := unstructured.NestedString(unstructuredMyService.Object, "spec", "nonExistentField")
if err != nil {
// This indicates a type assertion failure or internal error
log.Printf("Error accessing nonExistentField: %v", err)
} else if !found {
// This indicates the field simply doesn't exist
fmt.Println(" spec.nonExistentField: Not found (expected)")
} else {
fmt.Printf(" spec.nonExistentField: %s\n", val)
}
6.3 Using Informers for Continuous Watching (Brief Mention)
For building controllers or operators that need to react to changes in Custom Resources (create, update, delete events), a simple Get or List loop is inefficient and prone to missing events. client-go provides Informers for this purpose.
- Informers: Informers provide a way to watch the Kubernetes API Server for changes to resources and maintain an in-memory cache of these resources. They are highly efficient, reduce API server load, and ensure your application receives all events.
- Dynamic Informers: `client-go` also offers `dynamicinformer.DynamicSharedInformerFactory`, which can create informers for arbitrary GVRs, extending the flexibility of the dynamic client to continuous watching.
- Use Case: If your application needs to continuously monitor CRs and reconcile their state (e.g., an operator that provisions external infrastructure based on `MyService` CRs), you should use informers instead of repeated `List` calls.
6.4 Security Implications: RBAC for Dynamic Clients
Any application interacting with the Kubernetes API must be authorized. When using a dynamic client, the RBAC policies must grant permissions to the underlying Resource types.
- Service Accounts: When your Go application runs in-cluster, it uses its Pod's service account. You must create an appropriate `Role` and `RoleBinding` (or `ClusterRole` and `ClusterRoleBinding`) to grant this service account permissions on your custom resources.
- Example `ClusterRole` for `MyService`:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myservice-reader
rules:
- apiGroups: ["stable.example.com"]   # The API group of your CRD
  resources: ["myservices"]           # The plural resource name of your CRD
  verbs: ["get", "list", "watch"]     # Permissions to grant

- Example `RoleBinding`: Bind this `ClusterRole` to a `ServiceAccount` in a specific namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-myservice-reader
  namespace: default   # The namespace where your app runs
subjects:
- kind: ServiceAccount
  name: default        # Or your custom service account
  namespace: default
roleRef:
  kind: ClusterRole
  name: myservice-reader
  apiGroup: rbac.authorization.k8s.io

Alternatively, for a cluster-wide reader:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-myservice-reader-cluster
subjects:
- kind: ServiceAccount
  name: default        # Or your custom service account
  namespace: default
roleRef:
  kind: ClusterRole
  name: myservice-reader
  apiGroup: rbac.authorization.k8s.io
Without correct RBAC permissions, your dynamic client will receive Forbidden (HTTP 403) errors when trying to access Custom Resources; `errors.IsForbidden` from `k8s.io/apimachinery/pkg/api/errors` can detect them programmatically.
6.5 Performance Considerations
While client-go is generally efficient, large clusters or frequent API calls can impact performance.
- Batching/Listing vs. Getting: For retrieving multiple objects, `List` is significantly more efficient than multiple `Get` calls.
- Informers for Watches: For continuous monitoring, informers are the most performant approach, as they maintain a local cache, reducing API server load.
- Selectors: Use `LabelSelector` and `FieldSelector` effectively in `ListOptions` to retrieve only the relevant subset of resources, minimizing network traffic and processing overhead.
- Resource Management: Ensure your application's `client-go` connections are properly managed. The `rest.Config` includes options for `QPS` (queries per second) and `Burst` (temporary burst capacity) to control the rate of API requests, preventing your client from overwhelming the API server.
By adhering to these advanced practices, developers can build robust, performant, and secure Go applications that leverage the full power and flexibility of the dynamic client to manage Custom Resources in Kubernetes. This capability is fundamental for creating the next generation of cloud-native applications and operators.
Chapter 7: The Broader Context: Custom Resources, APIs, and Gateways
Having explored the mechanics of reading Custom Resources with client-go's dynamic client, it's important to place this technical capability within a larger architectural context. Custom Resources are not isolated constructs; they are integral parts of complex, distributed systems that often involve numerous services and their corresponding APIs. Managing these APIs effectively, especially in a world increasingly reliant on microservices and AI-powered applications, demands a robust API gateway.
7.1 Custom Resources as an Extension of the Kubernetes API
We've seen how Custom Resources extend the Kubernetes API itself, allowing us to define new resource types like MyService. This means that application-specific configurations and state can be managed using the same declarative principles and tooling as native Kubernetes objects. This consistency is a massive advantage, simplifying the operational overhead for platform engineers and developers alike.
Applications or operators that consume these Custom Resources often take their declarative specifications and translate them into operational realities. For instance, our MyService Custom Resource might trigger an operator to:
- Provision actual backend services (e.g., Deployments, Services).
- Configure network policies or ingress rules.
- Integrate with external systems or cloud services.
Each of these underlying services, and indeed the applications they support, typically exposes its own APIs. These APIs could be RESTful, gRPC, or even specialized AI inference endpoints. As the number and diversity of these APIs grow, so does the complexity of managing them.
7.2 The Indispensable Role of an API Gateway
In a microservices architecture, direct client-to-service communication often leads to several problems:
- Security: Clients need to know about and handle authentication/authorization for each service.
- Routing & Load Balancing: Clients need to know where services are located and how to distribute requests.
- Traffic Management: Rate limiting, caching, and circuit breaking become complex to implement per service.
- API Versioning: Managing different API versions across many services is a challenge.
- Observability: Centralized logging, monitoring, and tracing are difficult without a single point of entry.
This is precisely where an API gateway comes into play. An API gateway acts as a single entry point for all clients, routing requests to the appropriate backend services, applying policies, and handling cross-cutting concerns. It effectively centralizes the management of all your APIs, both internal and external.
A robust API gateway is not just a proxy; it's a strategic component that provides:
- Unified API Exposure: A single, consistent API facade for all backend services.
- Security Enforcement: Centralized authentication, authorization (like OAuth2, JWT validation), and threat protection.
- Traffic Management: Request routing, load balancing, rate limiting, and circuit breaking.
- API Transformation: Request/response manipulation, protocol translation.
- Observability: Centralized logging, metrics, and tracing for all API calls.
- API Lifecycle Management: Tools to design, publish, version, and deprecate APIs.
7.3 APIPark: An Open Source AI Gateway & API Management Platform
When discussing the holistic management of APIs, especially in modern cloud-native environments that leverage Kubernetes and increasingly, Artificial Intelligence, solutions like APIPark become incredibly relevant.
APIPark is an all-in-one AI gateway and API developer portal, open-sourced under the Apache 2.0 license. It's designed specifically to simplify the management, integration, and deployment of both traditional REST services and advanced AI services. For organizations building applications on Kubernetes, where Custom Resources might define internal service configurations, APIPark provides the external facing API gateway necessary to expose and manage these services safely and efficiently.
Consider how APIPark complements a Kubernetes setup utilizing Custom Resources: if your MyService CR defines the backend details of an AI inference service, APIPark can then manage the exposure of that service's API to external consumers.
Here's how APIPark adds significant value in this ecosystem:
- Quick Integration of 100+ AI Models: For services configured by CRs that might be AI-driven, APIPark offers a unified management system for various AI models, handling authentication and cost tracking centrally. This is crucial for applications leveraging multiple AI capabilities.
- Unified API Format for AI Invocation: APIPark standardizes the request data format across AI models. This means changes in the underlying AI models (perhaps defined and managed via new Custom Resources) do not break existing applications because APIPark abstracts the complexity, simplifying AI usage and maintenance.
- Prompt Encapsulation into REST API: Users can quickly combine AI models with custom prompts to create new APIs (e.g., sentiment analysis, translation). These custom APIs can then be exposed and managed through the gateway, regardless of how their underlying AI service might be configured within Kubernetes using Custom Resources.
- End-to-End API Lifecycle Management: Beyond just routing, APIPark assists with the entire lifecycle of APIs, from design and publication to invocation and decommissioning. It helps manage traffic forwarding, load balancing, and versioning of published APIs β all critical aspects that a Kubernetes Custom Resource might enable at the infrastructure level, but which APIPark elevates to the API management layer.
- API Service Sharing within Teams: The platform offers a centralized display of all API services, making it easy for different departments to discover and utilize necessary APIs. This is vital in large organizations where various teams might consume services whose configurations are managed via Custom Resources.
- Independent API and Access Permissions for Each Tenant: APIPark supports multi-tenancy, allowing for independent applications, data, user configurations, and security policies per team, while sharing underlying infrastructure. This aligns well with multi-tenant Kubernetes deployments where Custom Resources might be used to define tenant-specific configurations.
- API Resource Access Requires Approval: Features like subscription approval ensure that callers must subscribe to an API and await administrator approval before invocation, preventing unauthorized API calls and enhancing security β a crucial complement to Kubernetes RBAC for internal service control.
- Performance Rivaling Nginx: With strong performance capabilities (over 20,000 TPS with modest resources), APIPark can handle large-scale traffic, ensuring that your APIs, whether for REST or AI services, are always responsive and available.
- Detailed API Call Logging & Powerful Data Analysis: Comprehensive logging and data analysis features provide invaluable insights into API usage, performance trends, and potential issues, enabling proactive maintenance and improved service quality. This is a critical layer of observability that works in conjunction with Kubernetes monitoring for deeper operational insights.
In essence, while Custom Resources allow developers to extend the Kubernetes control plane to manage application-specific configurations, an API gateway like APIPark provides the essential layer for securely and efficiently exposing, managing, and observing the APIs that these applications and services ultimately offer to consumers. It bridges the gap between internal Kubernetes orchestration and external API consumption, offering a complete solution for modern distributed systems.
Conclusion: Mastering Custom Resources and the Holistic View of API Management
The journey through reading Custom Resources using the dynamic client in Golang unveils a powerful facet of Kubernetes extensibility. By leveraging client-go, developers gain the ability to programmatically interact with any custom API object, treating application-specific configurations as first-class citizens within the Kubernetes ecosystem. This flexibility is paramount for building generic tools, robust operators, and adaptable cloud-native applications that can evolve with the ever-changing landscape of custom resource definitions.
We've delved into the creation of CustomResourceDefinitions, demonstrated the setup of a Golang environment for Kubernetes interaction, and provided detailed code examples for retrieving single Custom Resources with Get and collections of resources with List. Understanding the nuances of unstructured.Unstructured objects and the importance of robust error handling are critical for safe and effective dynamic client usage. Furthermore, advanced considerations such as RBAC, versioning, and performance optimization through informers underscore the depth required for production-grade applications.
Beyond the technical mechanics, it's crucial to appreciate how Custom Resources fit into the broader picture of API management. While CRDs extend the internal Kubernetes API, the services configured by these resources often expose their own APIs to external consumers. This is where an API gateway becomes indispensable. Solutions like APIPark provide the crucial layer of governance, security, and performance required to manage these external-facing APIs. By unifying the management of diverse services, including AI models, APIPark complements the power of Kubernetes and Custom Resources by ensuring that your distributed applications are not only well-orchestrated internally but also securely, efficiently, and intelligently exposed to the world.
Mastering the dynamic client in Golang empowers developers to build sophisticated Kubernetes-native applications, and by integrating these capabilities with comprehensive API gateway solutions, organizations can achieve a truly end-to-end strategy for managing their complex, API-driven landscapes.
Frequently Asked Questions (FAQs)
1. What is the primary difference between client-go's Clientset and Dynamic Client? The primary difference lies in type safety and flexibility. A Clientset (typed client) is generated from Go structs representing Kubernetes resources, offering strong type checking at compile-time and excellent IDE support. It's ideal for built-in Kubernetes resources and well-defined Custom Resources (if you generate client-sets for them). The Dynamic Client, on the other hand, operates on generic unstructured.Unstructured objects (essentially map[string]interface{}), providing high flexibility to interact with any API resource (including CRs whose schemas are unknown at compile time) without strong typing, making it suitable for generic tools and operators.
2. When should I use the Dynamic Client over a generated Clientset for Custom Resources? You should use the Dynamic Client for Custom Resources when: * You are building a generic tool or operator that needs to work with multiple, arbitrary CRDs whose schemas might not be known or might evolve frequently. * You want to avoid the overhead of generating and maintaining typed clients for every CRD, especially in environments with many custom types. * You only need to perform basic CRUD operations and are comfortable with runtime type assertions on unstructured.Unstructured objects. If strong type safety is paramount and the CRD schema is stable, generating a Clientset (e.g., using controller-gen) might be preferred for better developer experience.
3. What is a GroupVersionResource (GVR), and why is it important for the Dynamic Client? A GroupVersionResource (GVR) is a schema.GroupVersionResource struct that uniquely identifies a collection of resources within the Kubernetes API. It consists of the API Group (e.g., stable.example.com), Version (e.g., v1), and the plural Resource name (e.g., myservices). The Dynamic Client relies on GVRs to tell the Kubernetes API Server which specific type of resource it wants to interact with, as it doesn't use strongly typed Go structs for identification. It's the key to addressing any resource using the dynamic client.
4. How do I handle RBAC permissions for an application using the Dynamic Client to read Custom Resources? RBAC permissions for an application using a Dynamic Client are configured the same way as for any other Kubernetes client. You need to create ClusterRole or Role resources that grant get, list, watch (and potentially create, update, delete) verbs on the specific apiGroups and resources (the plural name of your CRD) that your application needs to access. This role is then bound to the ServiceAccount that your application's Pod uses via a RoleBinding or ClusterRoleBinding.
5. How does an API gateway like APIPark fit into an architecture that uses Kubernetes Custom Resources? Kubernetes Custom Resources are excellent for defining and managing application-specific configurations and desired states within the Kubernetes cluster. However, the services orchestrated and configured by these Custom Resources often expose their own APIs that need to be consumed by external clients or other microservices. An API gateway like APIPark serves as the crucial layer to manage these external-facing APIs. It provides centralized authentication, authorization, traffic management (rate limiting, load balancing), API lifecycle management, and observability for all your APIs, including those exposed by services whose internal configuration is managed by Custom Resources. APIPark specifically excels in managing AI APIs, offering unified formats and quick integration, complementing the Kubernetes control plane by providing robust external API governance.
You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.

Step 2: Call the OpenAI API.

