How to Create Proxy in Mulesoft: A Step-by-Step Guide


In the rapidly evolving landscape of digital transformation, Application Programming Interfaces (APIs) have emerged as the foundational building blocks for connecting disparate systems, services, and data. They are the invisible threads that weave together modern applications, enabling seamless communication between front-end interfaces, microservices, third-party integrations, and legacy systems. However, as the number of APIs within an organization grows, so does the complexity of managing, securing, and optimizing them. This is where the concept of an API proxy becomes not just beneficial, but absolutely indispensable. An API proxy acts as an intermediary, sitting between the client application and the backend service, providing a crucial layer of abstraction and control. It allows organizations to decouple consumers from producers, implement centralized governance, and enhance the overall resilience and performance of their API ecosystem.

MuleSoft, with its Anypoint Platform, stands out as a leading solution for comprehensive API management, offering powerful capabilities for designing, building, deploying, and managing APIs across their entire lifecycle. At the heart of MuleSoft's API gateway functionality lies its ability to effortlessly create and manage API proxies. This guide will delve deep into the mechanics of setting up and configuring API proxies within MuleSoft, providing a granular, step-by-step walkthrough. We'll explore not just the "how," but also the "why," illuminating the profound impact that well-managed API proxies can have on security, scalability, and operational efficiency. By the end of this extensive guide, you will possess a thorough understanding of how to leverage MuleSoft's robust platform to build, deploy, and govern your API proxies, transforming your approach to API integration and unlocking new levels of architectural flexibility and control.

Understanding API Proxies and Their Indispensable Role

Before we embark on the practical steps of creating an API proxy in MuleSoft, it's crucial to establish a firm understanding of what an API proxy is, its core functionalities, and why it has become an essential component in any modern API gateway architecture.

What Exactly is an API Proxy?

At its simplest, an API proxy is a server that sits in front of a backend API service. Instead of client applications directly calling the backend API, they interact with the proxy. The proxy, in turn, forwards the request to the actual backend API, receives the response, and then sends it back to the client. Think of it as a sophisticated receptionist or a diligent bodyguard for your backend services. Clients interact with the receptionist (the proxy), who then directs the request to the appropriate department (the backend API). The receptionist handles various tasks—screening calls, ensuring proper identification, managing appointments—before the client ever reaches the internal teams. This layer of indirection is precisely what gives an API proxy its immense power and flexibility.

Unlike a direct API call, which exposes the backend service's network location and internal structure to every client, an API proxy provides an abstract, stable endpoint. This means that if the backend API's location changes, or if it needs to be replaced or refactored, the client applications consuming the proxy remain unaffected. The proxy itself handles the redirection and internal mapping, creating a seamless experience for consumers and a significant operational advantage for producers. It's a fundamental concept that underpins the robustness and adaptability of distributed systems.
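The indirection described above can be sketched in a few lines. This is a minimal illustration, not MuleSoft's implementation: the proxy owns a stable client-facing path, and the mapping to the real backend lives in exactly one place.

```python
# Minimal sketch of proxy-style indirection. The backend base URL and the
# /api/v1 prefix are illustrative choices, not values MuleSoft generates.
BACKEND_BASE = "https://jsonplaceholder.typicode.com"  # can change freely

def resolve_backend_url(proxy_path: str) -> str:
    """Translate the path a client called on the proxy into the
    backend URL the request should be forwarded to."""
    # Clients call e.g. /api/v1/posts; the proxy strips its own prefix
    # and forwards to the backend's equivalent resource.
    prefix = "/api/v1/"
    if proxy_path.startswith(prefix):
        return BACKEND_BASE + "/" + proxy_path[len(prefix):]
    raise ValueError(f"Unknown proxy path: {proxy_path}")

print(resolve_backend_url("/api/v1/posts"))
# If the backend moves, only BACKEND_BASE changes; clients are unaffected.
```

If the backend is later migrated to a new host, only `BACKEND_BASE` is updated; every client keeps calling the same proxy path, which is the operational advantage the paragraph above describes.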

The Multifaceted Importance of API Proxies

The strategic deployment of API proxies offers a myriad of benefits that extend far beyond simple request forwarding. These benefits collectively contribute to a more secure, performant, and manageable API ecosystem.

1. Enhanced Security and Access Control

Security is paramount in any API-driven architecture, and API proxies serve as the first line of defense. By intercepting all incoming requests, a proxy can enforce a comprehensive suite of security policies before any traffic reaches the backend API. This includes:

  • Authentication and Authorization: Verifying the identity of the client (authentication) and determining if they have the necessary permissions to access the requested resource (authorization). This can be achieved through various mechanisms like OAuth 2.0, API keys, JWT validation, or SAML. The proxy can offload this crucial security concern from the backend service, allowing the backend to focus purely on business logic.
  • Threat Protection: Shielding backend services from malicious attacks such as SQL injection, XML External Entity (XXE) attacks, DDoS attacks, or overly large payloads that could destabilize the backend. Proxies can filter, validate, and sanitize incoming requests to mitigate these threats.
  • IP Whitelisting/Blacklisting: Restricting API access based on the source IP address, allowing only trusted clients or blocking known malicious ones.
  • Data Masking and Encryption: Ensuring sensitive data is handled securely, potentially masking or encrypting certain parts of the request or response as it passes through the gateway.

Without an API gateway acting as a proxy, each backend service would need to implement these security measures independently, leading to duplication of effort, potential inconsistencies, and increased vulnerability.
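The screening sequence a gateway runs before forwarding traffic can be summarized as a pipeline of checks. The following is a simplified sketch under assumed rules (an IP whitelist, header-based credentials, and a payload size cap); real policies are far richer, but the blocking-before-the-backend pattern is the same.

```python
def screen_request(headers: dict, allowed_ips: set, source_ip: str,
                   body: bytes, max_body_bytes: int = 1_000_000):
    """Hypothetical first-line-of-defense checks a proxy might run
    before any traffic reaches the backend. Returns (status, reason)."""
    if source_ip not in allowed_ips:
        return (403, "IP not whitelisted")          # IP whitelisting
    if "client_id" not in headers or "client_secret" not in headers:
        return (401, "Missing credentials")         # authentication
    if len(body) > max_body_bytes:
        return (413, "Payload too large")           # crude threat protection
    return (200, "Forward to backend")

# A request with credentials, from a trusted IP, within size limits passes:
print(screen_request({"client_id": "a", "client_secret": "b"},
                     {"10.0.0.5"}, "10.0.0.5", b"{}"))
```

The point of the sketch: every backend behind the gateway inherits these checks for free, instead of reimplementing them per service.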

2. Optimized Performance and Scalability

API proxies are not just about security; they are also critical for enhancing the performance and scalability of your API infrastructure:

  • Caching: Proxies can cache responses from the backend API, serving subsequent identical requests directly from the cache without hitting the backend. This significantly reduces latency, decreases the load on backend systems, and improves response times for frequently accessed data. Intelligent caching strategies can dramatically boost performance, especially for read-heavy APIs.
  • Load Balancing: When multiple instances of a backend API are running, the proxy can intelligently distribute incoming requests among them. This prevents any single instance from becoming a bottleneck, ensuring high availability and optimal resource utilization. Modern load balancing algorithms can factor in server health, current load, and other metrics to make informed routing decisions.
  • Traffic Shaping and Throttling: Proxies can manage the flow of traffic to backend services by implementing rate limiting and quota policies. This prevents a sudden surge of requests from overwhelming the backend, maintaining stability and ensuring fair usage across different consumers. For instance, a proxy might allow a specific client only 100 requests per minute, blocking any excess.
  • Protocol Transformation: Proxies can bridge different communication protocols. For example, a proxy could expose a RESTful API to clients while communicating with a legacy backend service using SOAP, message queues, or even custom binary protocols. This facilitates integration without requiring backend modifications.
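The "100 requests per minute" throttling example above can be sketched with a fixed-window counter. This is a deliberately simple illustration; production gateways typically use sliding windows or token buckets, and MuleSoft's actual Rate Limiting policy is configured declaratively rather than coded.

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Sketch of a per-client 'N requests per window' throttle."""
    def __init__(self, limit: int = 100, window_seconds: int = 60):
        self.limit = limit
        self.window = window_seconds
        # (client_id, window index) -> request count in that window
        self.counters = defaultdict(int)

    def allow(self, client_id: str, now=None) -> bool:
        now = time.time() if now is None else now
        key = (client_id, int(now // self.window))
        self.counters[key] += 1
        return self.counters[key] <= self.limit

limiter = FixedWindowRateLimiter(limit=100, window_seconds=60)
print(all(limiter.allow("client-42", now=0) for _ in range(100)))  # True: within quota
print(limiter.allow("client-42", now=0))   # False: request 101 in the window is blocked
```

A blocked request would typically be answered with HTTP 429 Too Many Requests, never reaching the backend, which is exactly the stability guarantee traffic shaping provides.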

3. Centralized Management and Governance

One of the most compelling advantages of an API gateway that incorporates proxy functionality is the ability to centralize API management and governance:

  • API Versioning: Proxies make it easier to manage different versions of an API. Clients can continue to use an older version through the proxy, while the backend API is updated to a new version. The proxy handles routing requests to the appropriate backend version based on client requests, ensuring backward compatibility and smooth transitions.
  • Policy Enforcement: All API governance policies—security, QoS, routing, logging—can be defined and enforced at the proxy layer. This ensures consistency across all exposed APIs, simplifies policy updates, and reduces the operational overhead of managing policies on individual backend services.
  • Monitoring and Analytics: Proxies can collect comprehensive metrics on API usage, performance, and errors. This data is invaluable for understanding API consumption patterns, identifying bottlenecks, troubleshooting issues, and making informed decisions about API evolution and capacity planning. The centralized nature of this data collection provides a holistic view of the entire API estate.
  • Developer Portal Integration: When integrated with a developer portal, the API gateway exposes the proxy endpoints to external developers, providing self-service access to API documentation, client registration, and subscription management. This streamlines the consumption and adoption of your APIs.
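The versioning point above boils down to a routing table that only the proxy knows about. A toy sketch, with hypothetical internal backend hosts, shows how `/v1` clients can keep working while `/v2` traffic goes to a newer implementation:

```python
# Illustrative routing table; the internal hostnames are made up.
VERSION_ROUTES = {
    "v1": "https://legacy.internal.example/posts-service",
    "v2": "https://new.internal.example/posts-service",
}

def route_by_version(path: str) -> str:
    """Pick the backend for a path shaped like /api/<version>/<resource...>."""
    parts = path.strip("/").split("/")   # e.g. ["api", "v1", "posts"]
    version = parts[1]
    backend = VERSION_ROUTES[version]    # KeyError -> unknown version
    return backend + "/" + "/".join(parts[2:])

print(route_by_version("/api/v1/posts"))
```

Retiring v1 later is a one-line change to the table; no client has to be redeployed, which is the backward-compatibility benefit described above.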

4. Decoupling and Architectural Flexibility

The fundamental principle of an API proxy—indirection—leads to significant architectural flexibility:

  • Backend Abstraction: Clients are decoupled from the specific implementation details of the backend API. They only know the proxy's endpoint. This allows backend services to be refactored, replaced, or migrated without impacting client applications. This is critical for microservices architectures, where services might be updated or scaled independently.
  • Service Mesh Integration: In complex microservices environments, API proxies can complement a service mesh by handling North-South traffic (traffic entering and exiting the cluster), while the service mesh manages East-West traffic (inter-service communication within the cluster).
  • Mediation and Orchestration: While a basic proxy simply forwards requests, more advanced API gateway implementations can perform light orchestration or mediation logic. This might involve aggregating data from multiple backend services, transforming data formats, or enriching requests before forwarding them.
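As a concrete picture of the mediation idea, here is a sketch of a gateway aggregating two backend responses into one client-facing payload. The field names and the "last five orders" rule are invented for illustration:

```python
def aggregate_profile(user_resp: dict, orders_resp: dict) -> dict:
    """Combine responses from a hypothetical user service and orders
    service into a single payload, trimming fields the client
    doesn't need (a light form of mediation)."""
    return {
        "user": {"id": user_resp["id"], "name": user_resp["name"]},
        "recentOrders": orders_resp.get("orders", [])[:5],
    }

print(aggregate_profile({"id": 7, "name": "Ada", "internal_flag": True},
                        {"orders": [101, 102, 103]}))
```

The client makes one call and receives one document; the fan-out to multiple backends, and the decision about which fields to expose, stay inside the gateway.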

API Gateway vs. API Proxy: Clarifying the Relationship

The terms "API Gateway" and "API Proxy" are often used interchangeably, but it's important to understand their relationship.

  • An API Proxy is primarily concerned with forwarding requests and responses, providing a layer of indirection for a single backend API. It can apply policies specific to that API.
  • An API Gateway is a broader, more comprehensive concept. It's a single entry point for multiple APIs, acting as a central gateway for all client interactions. An API gateway implements proxy functionality for many APIs, but it also provides a rich set of additional features like routing to multiple backend services, composing requests, centralized authentication and authorization across multiple APIs, and a unified management console.

In essence, an API gateway uses API proxies as a core mechanism to achieve its broader goals of comprehensive API management. MuleSoft's Anypoint Platform, specifically its Anypoint API Manager, functions as a powerful API gateway, allowing you to create and manage numerous API proxies, each with its own set of policies and configurations, all from a centralized interface.

MuleSoft's Anypoint Platform: A Comprehensive API Gateway Solution

MuleSoft's Anypoint Platform is an integrated, unified platform designed to manage the entire API lifecycle, from design and development to deployment, management, and governance. Its robust capabilities make it an excellent choice for organizations seeking a sophisticated API gateway solution, particularly when it comes to creating and managing API proxies.

Anypoint Platform Overview

The Anypoint Platform is not merely a runtime engine; it's a comprehensive ecosystem comprising several interconnected components that facilitate enterprise integration and API-led connectivity:

  • Anypoint Design Center: This is where API specifications (using RAML, OpenAPI/Swagger) are designed, and Mule applications are developed. It provides a visual design environment and a code editor, enabling developers to build integration flows and API implementations.
  • Anypoint Exchange: A central hub for discovering, sharing, and reusing API assets, templates, connectors, and examples. It acts as an internal marketplace for developers, promoting reusability and standardization across the organization. This is also where API definitions, once published, become discoverable and where client applications can register for access.
  • Anypoint Runtime Manager: This component is responsible for deploying, managing, monitoring, and troubleshooting Mule applications (which can include your API implementations or proxies) across various environments, including CloudHub (MuleSoft's cloud platform), customer-hosted runtimes (on-premises servers), or hybrid deployments. It provides centralized control over runtime instances.
  • Anypoint API Manager: This is the core component for API governance and lifecycle management. It's where you define and apply policies to your APIs, manage access, monitor usage, and gain insights through analytics. API Manager is the control plane where you will primarily interact to create and configure your API proxies.
  • Anypoint Studio: A powerful Eclipse-based IDE for designing, building, and testing Mule applications. While you can create API proxies directly through the Anypoint Platform web interface, Anypoint Studio is essential for developing complex API implementations or custom policies.

Why MuleSoft Excels as an API Gateway

MuleSoft's strength as an API gateway and platform for building robust API proxies lies in several key areas:

  • Policy-Driven Governance: Anypoint API Manager provides a rich library of pre-built policies (security, QoS, transformation) that can be easily applied to any API proxy. This policy-driven approach allows for consistent enforcement of rules across your entire API estate without modifying backend code.
  • Flexible Deployment Options: Whether your backend APIs reside on-premises, in the cloud, or in a hybrid environment, MuleSoft offers flexible deployment models for your API gateway proxies. CloudHub provides a fully managed, scalable cloud environment, while hybrid options allow you to deploy runtimes closer to your backend systems.
  • Comprehensive Connectivity: With hundreds of connectors to various systems (databases, SaaS applications, ERPs, messaging queues), MuleSoft goes beyond simple API proxies. It enables you to build complex integration flows that can consume, transform, and expose data from virtually any source, significantly enhancing the capabilities of your API layer.
  • Full API Lifecycle Management: From initial design in Design Center, through publication in Exchange, deployment via Runtime Manager, and ongoing governance in API Manager, MuleSoft provides a unified experience for managing APIs from inception to retirement. This holistic approach ensures consistency and reduces fragmentation in your API strategy.
  • Scalability and Resilience: MuleSoft's runtime engine, Mule Runtime, is designed for high performance and scalability. When deploying proxies to CloudHub, you can easily scale worker instances and leverage built-in load balancing to handle varying traffic loads and ensure high availability.
  • Detailed Monitoring and Analytics: API Manager offers robust monitoring and analytics capabilities, providing real-time insights into API performance, usage patterns, error rates, and policy violations. This data is critical for proactive issue resolution, capacity planning, and demonstrating the value of your API program.

By leveraging MuleSoft's Anypoint Platform, organizations can establish a sophisticated and agile API gateway strategy, centralizing the management of their API proxies and ensuring that all exposed APIs are secure, performant, and well-governed. This comprehensive approach is vital for building a connected enterprise where information flows freely and securely across all digital assets.

Prerequisites for Creating an API Proxy in MuleSoft

Before diving into the step-by-step instructions, ensure you have the necessary prerequisites in place. Having these elements ready will streamline the proxy creation process and allow you to focus on the core configurations.

  1. MuleSoft Anypoint Platform Account: You will need an active Anypoint Platform account. If you don't have one, you can sign up for a free trial on the MuleSoft website. This account provides access to all the necessary components like API Manager, Runtime Manager, and Exchange.
  2. Basic Understanding of APIs (REST, HTTP Methods): Familiarity with fundamental API concepts, particularly RESTful principles, HTTP methods (GET, POST, PUT, DELETE), request/response structures, and status codes, will be beneficial. While the process itself is intuitive, understanding the underlying API principles helps in configuring the proxy effectively.
  3. A Backend API to Proxy: This is critical. You need an existing API endpoint that your proxy will sit in front of. This could be:
    • A publicly available test API: Such as https://jsonplaceholder.typicode.com/posts or a similar mock API service.
    • An internal backend API: If you're working within an organization, you might have an existing service (e.g., http://mycompany.com/api/v1/products).
    • A Mule application API: You might have already built and deployed a Mule application that exposes an API, and now you want to put a proxy in front of it for governance. Make sure you have the exact URL of this backend API implementation.
  4. MuleSoft Anypoint Studio (Optional but Recommended for Deeper Understanding): While not strictly necessary for creating a basic API proxy via the web interface, Anypoint Studio is indispensable if you plan to:
    • Develop custom API implementations that the proxy will front.
    • Create custom policies that go beyond the out-of-the-box offerings.
    • Troubleshoot complex integration scenarios within a Mule application.
  5. Basic Understanding of Network Concepts (Optional): Familiarity with concepts like firewalls, VPNs, Virtual Private Clouds (VPCs), and DNS can be helpful, especially when deploying proxies in complex enterprise network environments or configuring advanced networking options for CloudHub.
  6. HTTP Client Tool: A tool like Postman, Insomnia, or even curl is essential for testing your deployed API gateway proxy to ensure it's functioning correctly and policies are being enforced.

With these prerequisites in place, you are well-equipped to proceed with the hands-on process of creating an API proxy in MuleSoft.

Step-by-Step Guide: Creating an API Proxy in MuleSoft Anypoint Platform

This section provides a detailed, granular walkthrough of creating an API gateway proxy in MuleSoft's Anypoint Platform. We will cover each step meticulously, from accessing the platform to deploying the proxy and applying policies, ensuring you have a comprehensive understanding of the process.

For this guide, let's assume we want to proxy a simple public API endpoint, for example: https://jsonplaceholder.typicode.com/posts. This endpoint returns a list of JSON posts.

Step 1: Access Anypoint Platform and Navigate to API Manager

  1. Log in to Anypoint Platform: Open your web browser and navigate to https://anypoint.mulesoft.com. Enter your username and password to log in.
  2. Navigate to API Manager: Once logged in, you'll see the Anypoint Platform dashboard. On the left-hand navigation pane, locate and click on "API Manager."
    • The API Manager dashboard provides an overview of all your managed APIs, their status, and any active alerts. This is your central control point for API governance. You'll see options to "Add API," view existing APIs, manage API groups, and review analytics.

Step 2: Add a New API to Manage

  1. Initiate API Creation: In the API Manager dashboard, click the prominent "Add API" button, usually located on the right side or center of the screen. From the dropdown, select "New API."
    • This action initiates a wizard that guides you through the process of defining your API. You'll be presented with choices for how to define your API and how it will be managed.
  2. Configure API Manually: For creating a proxy, we'll configure the API manually. Select the option "Configure API manually." This allows us to define the API's characteristics and then associate it with a proxy deployment.
  3. Provide API Details:
    • API Name: Enter a descriptive name for your API, e.g., "JSONPlaceholder Posts API." This name will be used throughout Anypoint Platform to identify your API.
    • Asset ID: (Optional, but good practice) This field is automatically populated based on the API Name, but you can customize it. It's used for unique identification within Anypoint Exchange. Example: jsonplaceholder-posts-api.
    • Version: Specify the version of your API, e.g., "v1." Versioning is crucial for managing API evolution.
    • Instance Label: This is an optional label to distinguish different instances of the same API version, useful in environments with multiple deployments (e.g., "Production," "Staging"). You can leave this blank for now or use "Default."
    • API Type: Select the type of API you are managing. For our example, "REST API" is appropriate.
    • API Definition: This is where you can upload an API specification file (e.g., RAML, OpenAPI/Swagger YAML/JSON). While optional for creating a basic proxy, providing a definition is highly recommended as it enables full lifecycle management capabilities, including auto-discovery and automatic documentation in Exchange. For this guide, you can choose "Skip API definition" if you just want to get a proxy running quickly, but ideally, you would have one ready. If you have an OpenAPI specification for https://jsonplaceholder.typicode.com/posts, you could upload it here.
  4. Select API Management Type: Click "Next." You'll now be prompted to choose how to manage the API. This is a crucial decision for proxy creation:
    • Basic endpoint: This option simply registers an existing API for monitoring and policy application without deploying a new proxy application. The API implementation must already be deployed and accessible.
    • Endpoint with proxy: This is the option we need. It tells MuleSoft to deploy a new proxy application to a Mule runtime (like CloudHub) that will sit in front of your backend API. This provides the full benefits of an API gateway and policy enforcement.
    • Download proxy: This option allows you to download a Mule application project containing the proxy logic, which you can then modify in Anypoint Studio and deploy manually to any Mule runtime. This offers maximum flexibility but requires more manual effort.
    Select "Endpoint with proxy" and then click "Next."

Step 3: Configure Proxy Settings and Deployment

This is where you define the details of your backend API and how the proxy application will be deployed.

  1. Implementation URL: This is arguably the most critical setting. Enter the URL of your actual backend API that you want to proxy.
    • Example: For our scenario, enter https://jsonplaceholder.typicode.com/posts.
    • Detailed Explanation: This is the target endpoint for all requests that hit your proxy. The proxy will receive a request from a client, and then internally forward it to this Implementation URL. Ensure this URL is correct and accessible from where your proxy will be deployed (e.g., CloudHub).
  2. Proxy Application Name: Provide a unique name for the proxy application that MuleSoft will generate and deploy.
    • Example: jsonplaceholder-posts-proxy-v1.
    • Detailed Explanation: This name must be unique across all applications deployed to CloudHub within your organization. It forms part of the proxy's URL and will be visible in Runtime Manager. Choose a descriptive name that indicates its purpose and the API it proxies.
  3. Deployment Target: Select where you want to deploy the proxy application.
    • CloudHub: This is MuleSoft's fully managed, highly available cloud platform. It's the simplest and most common deployment target for proxies, especially for publicly exposed APIs. MuleSoft handles the infrastructure, scaling, and operational aspects.
    • Customer Hosted: This refers to deploying to a Mule runtime that you manage on your own servers (on-premises, EC2, Kubernetes via Runtime Fabric). This offers more control over the environment but requires more operational overhead.
    • Hybrid: A combination where the control plane is in the cloud, but the runtime is customer-hosted.
    • For this guide, select "CloudHub" for simplicity and immediate results.
  4. Runtime Version: Choose a compatible Mule runtime version. MuleSoft typically provides the latest stable versions. Select the most recent recommended version, e.g., 4.x.x.
    • Detailed Explanation: The runtime version dictates the capabilities and features available to your proxy application. It's crucial to select a version that is stable and compatible with any potential custom logic or policies you might add later.
  5. Worker Size & Number of Workers: These settings control the computing resources allocated to your proxy application on CloudHub.
    • Worker Size: Defines the CPU and memory allocated per worker. Options typically range from 0.1 vCore/500MB to multiple vCores/GBs. For a simple proxy, 0.1 vCore / 500 MB is usually sufficient for testing. For production, consider 0.2 vCore / 1 GB or higher, depending on expected traffic and policy complexity.
    • Number of Workers: Specifies how many instances of your proxy application should be deployed. For high availability and load balancing, it's recommended to deploy at least two workers in production. For testing, one worker is fine.
    • For this guide, select 0.1 vCore / 500 MB and 1 Worker.
    • Detailed Explanation: These settings directly impact the performance, scalability, and cost of your proxy. More workers and larger sizes increase throughput and resilience but also consume more resources and incur higher costs.
  6. Advanced Options (Optional but valuable):
    • VPC: If your backend API is in a private network (e.g., AWS VPC), you'll need to configure your CloudHub deployment to use a VPC that has connectivity to your backend. This is crucial for secure internal communication.
    • Static IP: For certain networking configurations (e.g., firewall whitelisting), you might need a static IP address for your proxy. This option is available in CloudHub.
    • Object Store: Used for persistent storage within Mule applications, though less common for simple proxies.
    • Load Balancer: For CloudHub deployments, a shared load balancer is automatically provided. You can also configure a dedicated load balancer for custom domains or SSL certificates.
    For this basic guide, we'll skip these advanced options.

Step 4: Deploy the Proxy

  1. Initiate Deployment: After configuring all the proxy and deployment settings, click the "Save and Deploy" button.
    • Detailed Explanation: MuleSoft will now take these configurations, generate a Mule application for your proxy, and deploy it to CloudHub (or your chosen deployment target). This process can take a few minutes as the runtime instance needs to spin up and the application needs to be deployed.
  2. Monitor Deployment Status: You will be redirected to the API details page in API Manager. The status of your proxy deployment will be displayed. You can also navigate to "Runtime Manager" on the left-hand pane, and then to "Applications," to see the deployment status of your new proxy application (e.g., jsonplaceholder-posts-proxy-v1).
    • Look for the status to change from "Starting" or "Deploying" to "Started" or "Running."
  3. Obtain Proxy URL: Once the proxy application is successfully deployed and running, return to the API details page in API Manager. You will find the Proxy URL listed under the "Endpoints" section.
    • Example: http://jsonplaceholder-posts-proxy-v1.<region>.cloudhub.io/api/v1/posts (the exact path might vary depending on your API definition).
    • Detailed Explanation: This is the public endpoint that clients will use to access your API. All requests to this URL will be routed through your newly deployed MuleSoft API gateway proxy.

Step 5: Apply Policies to the Proxy

The real power of an API gateway lies in its ability to enforce policies. Now that your proxy is deployed, let's add some governance.

  1. Access Policy Section: From the API details page in API Manager, click on the "Policies" tab.
    • Detailed Explanation: This section is where you manage all the policies that will be applied to your API proxy. MuleSoft offers a wide range of pre-built policies for security, quality of service (QoS), transformation, and more.
  2. Add a Policy (Example: Client ID Enforcement): Let's add a common security policy: "Client ID Enforcement." This policy ensures that only registered client applications with a valid client ID and client secret can access your API. Other common policies you might consider:
    • Rate Limiting: To control the number of requests clients can make (e.g., 100 requests per minute).
    • SLA Based Rate Limiting: To provide different rate limits based on client subscription tiers (e.g., a "Gold" tier gets more requests than a "Bronze" tier).
    • JWT Validation: To validate JSON Web Tokens for authentication and authorization, often used with OAuth 2.0 flows.
    • CORS: To manage Cross-Origin Resource Sharing, allowing web browsers from specific domains to access your API.
    • HTTP Caching: To cache responses and improve performance.
    • Click "Apply New Policy."
    • From the list of available policies, select "Client ID Enforcement."
    • Click "Configure Policy."
    • Configuration:
      • Client ID expression: By default, MuleSoft looks for the client ID in the client_id header or query parameter. You can customize this. The default expression #[attributes.headers['client_id']] usually works.
      • Client Secret expression: Similarly, for the client secret, the default #[attributes.headers['client_secret']] is standard.
      • Action if credentials invalid: Choose "Block request" to prevent unauthorized access.
      • Enable Client ID enforcement for specific resources/methods: You can apply the policy globally to all resources and methods (/*) or to specific paths and HTTP methods. For this example, leave it at /*.
    • Click "Apply."
    • Detailed Explanation: Once applied, this policy will be enforced by your API gateway proxy. Any request to the proxy that does not contain valid client_id and client_secret headers will be blocked, returning an HTTP 401 Unauthorized or 403 Forbidden error.
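The enforcement behavior configured above can be approximated in a few lines. This is a sketch of the decision logic, not MuleSoft's policy code: credentials are read from the `client_id` / `client_secret` headers (matching the default expressions), and the registered-apps lookup stands in for Exchange's client registry.

```python
def enforce_client_id(headers: dict, registered_apps: dict) -> int:
    """Return the HTTP status a client-ID-enforcement check would produce.
    registered_apps maps client_id -> client_secret (a stand-in for the
    platform's client registry)."""
    cid = headers.get("client_id")
    secret = headers.get("client_secret")
    if cid is None or secret is None:
        return 401  # Unauthorized: credentials missing entirely
    if registered_apps.get(cid) != secret:
        return 403  # Forbidden: credentials present but invalid
    return 200      # valid: forward to the backend implementation URL

apps = {"abc123": "s3cret"}
print(enforce_client_id({}, apps))                                          # blocked
print(enforce_client_id({"client_id": "abc123",
                         "client_secret": "s3cret"}, apps))                 # allowed
```

Whether an invalid request yields 401 or 403 in practice depends on the policy configuration; the essential point is that the block happens at the proxy, before the backend sees anything.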

Step 6: Test the API Proxy

Now it's time to test your deployed API gateway proxy and verify that the policies are being enforced.

  1. Prepare for Testing:
    • Get Proxy URL: Copy the Proxy URL from API Manager (e.g., http://jsonplaceholder-posts-proxy-v1.<region>.cloudhub.io/api/v1/posts).
    • Register a Client Application (for Client ID Enforcement):
      • Navigate to "Exchange" on the left-hand pane of Anypoint Platform.
      • Find your "JSONPlaceholder Posts API" (published automatically when you created it in API Manager).
      • Click on it, then click "Request Access."
      • Select "New application," provide an "Application Name" (e.g., "MyTestApp"), and click "Create."
      • MuleSoft will generate a Client ID and Client Secret for this application. Copy these values; you'll need them for testing.
  2. Test Without Client ID/Secret (Expected Failure):
    • Open Postman (or your preferred HTTP client).
    • Create a new GET request.
    • Paste your Proxy URL (e.g., http://jsonplaceholder-posts-proxy-v1.<region>.cloudhub.io/api/v1/posts).
    • Send the request without adding client_id or client_secret headers.
    • Expected Result: You should receive an error response, likely an HTTP 401 Unauthorized or 403 Forbidden, indicating that the "Client ID Enforcement" policy has blocked the request. The response body might contain a message like "Access Denied" or "Invalid Client."
  3. Test With Valid Client ID/Secret (Expected Success):
    • In the same Postman request, go to the "Headers" tab.
    • Add two new headers:
      • client_id: Paste the Client ID you copied from Exchange.
      • client_secret: Paste the Client Secret you copied from Exchange.
    • Send the request again.
    • Expected Result: You should now receive a successful HTTP 200 OK response, and the response body should contain the data from the backend https://jsonplaceholder.typicode.com/posts endpoint. This confirms that your API gateway proxy is successfully forwarding requests with valid credentials and that the policy is working as intended.
  4. Test Other Policies: If you added other policies (e.g., Rate Limiting), test them similarly. For Rate Limiting, make many requests quickly to see if it starts returning 429 Too Many Requests.
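Conceptually, the check that the Client ID Enforcement policy performs can be sketched in a few lines. This is a simplified illustration of the gateway's behavior, not MuleSoft's actual implementation; the registered credentials below are hypothetical stand-ins for the values Exchange generates:

```python
# Simplified sketch of what a Client ID Enforcement policy checks.
# The registered_clients store is hypothetical; on Anypoint Platform the
# credentials come from client applications registered via Exchange.
registered_clients = {
    "my-client-id": "my-client-secret",  # values you copied from Exchange
}

def enforce_client_id(headers: dict) -> int:
    """Return the HTTP status the gateway would respond with."""
    client_id = headers.get("client_id")
    client_secret = headers.get("client_secret")
    if client_id is None or client_secret is None:
        return 401  # credentials missing entirely -> Unauthorized
    if registered_clients.get(client_id) != client_secret:
        return 403  # credentials present but invalid -> Forbidden
    return 200  # request is forwarded to the backend

print(enforce_client_id({}))  # 401
print(enforce_client_id({"client_id": "my-client-id",
                         "client_secret": "my-client-secret"}))  # 200
```

This mirrors the two test cases above: the request without headers is blocked, and the request with valid credentials passes through to the backend.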

Step 7: Monitoring and Analytics

Once your API gateway proxy is live and being consumed, monitoring its performance and usage is crucial.

  1. Access Analytics: In API Manager, navigate to the "Analytics" tab for your API.
    • Detailed Explanation: This section provides rich dashboards and reports on your API's performance and usage. You can view metrics such as:
      • Total Requests: The overall volume of traffic.
      • Response Times: Average and percentile response times for your API.
      • Error Rates: Percentage of requests resulting in errors (e.g., 4xx, 5xx).
      • Throughput: Requests per second.
      • Policy Violations: How often policies (like rate limiting or client ID enforcement) are triggered.
      • Client Application Usage: Which client applications are consuming your API the most.
  2. Explore Dashboards: Explore the various charts and graphs. You can filter data by time range, application, and other criteria.
    • Detailed Explanation: This analytical data is invaluable for:
      • Proactive Issue Detection: Spotting sudden spikes in errors or latency.
      • Capacity Planning: Understanding traffic patterns to scale your backend services or proxy workers appropriately.
      • Performance Optimization: Identifying slow endpoints or bottlenecks.
      • Business Insights: Understanding how your APIs are being used and by whom.
      • Security Auditing: Reviewing policy violation logs to detect potential misuse or attacks on your API gateway.

By meticulously following these steps, you will have successfully created, deployed, secured, and tested a robust API gateway proxy using MuleSoft's Anypoint Platform. This foundational setup empowers you to centralize control, enhance security, and optimize the performance of your backend API services.


Advanced API Proxy Concepts in MuleSoft

While the basic steps provide a fully functional API gateway proxy, MuleSoft's Anypoint Platform offers much more sophisticated capabilities to handle complex scenarios. Understanding these advanced concepts can further empower your API management strategy.

1. Custom Policies: Extending Governance Capabilities

MuleSoft provides a comprehensive set of out-of-the-box policies, but sometimes, an organization's specific requirements necessitate unique governance logic. This is where custom policies come into play.

  • When to Use Custom Policies: You would opt for a custom policy when existing policies don't cover your use case. Examples include:
    • Specific authentication schemes not natively supported.
    • Complex request/response header manipulation or validation based on custom logic.
    • Integration with a unique internal authorization system.
    • Auditing or logging mechanisms specific to your enterprise data formats.
    • Advanced content-based routing logic.
  • Development Process: Custom policies are developed as Mule applications using Anypoint Studio. They typically involve a set of Mule flows that define the policy's logic, including intercepting requests, performing custom operations, and potentially modifying the flow or blocking the request.
  • Deployment and Application: Once developed, a custom policy is packaged and published to Anypoint Exchange. From Exchange, it can then be discovered and applied to any API in API Manager, just like a built-in policy. This mechanism allows organizations to extend their API gateway capabilities seamlessly.
  • Example: Imagine you need to validate a custom X-Customer-ID header against an internal database before allowing a request through. A custom policy could implement this database lookup and block the request if the ID is invalid, or enrich the request with customer details for the backend API.
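The `X-Customer-ID` example above can be sketched as plain logic. Note this is Python standing in for what a real custom policy would express as Mule flows and operations, and the in-memory dict is a stand-in for the internal database lookup:

```python
# Sketch of the custom-policy logic described above: validate an
# X-Customer-ID header against a store before letting the request through.
# The dict stands in for the internal database; a real Mule custom policy
# would implement this as flows, not Python.
known_customers = {"CUST-001": {"tier": "gold"}, "CUST-002": {"tier": "silver"}}

def apply_policy(headers: dict) -> dict:
    customer_id = headers.get("X-Customer-ID")
    record = known_customers.get(customer_id)
    if record is None:
        # Unknown or missing ID: block the request at the gateway.
        return {"blocked": True, "status": 403}
    # Known ID: enrich the request with customer details for the backend API.
    enriched = dict(headers)
    enriched["X-Customer-Tier"] = record["tier"]
    return {"blocked": False, "headers": enriched}

result = apply_policy({"X-Customer-ID": "CUST-001"})
print(result["blocked"], result["headers"]["X-Customer-Tier"])  # False gold
```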

2. Proxying with API Implementations (Mule Applications)

The "Endpoint with proxy" option creates a simple forwarding proxy. However, MuleSoft allows for much more intelligent proxies by deploying a full Mule application as the "proxy."

  • Mule Application as an Intelligent Proxy: Instead of just defining an Implementation URL in API Manager, you can develop a Mule application in Anypoint Studio that acts as your proxy. This application would typically contain:
    • An HTTP Listener to receive client requests (the proxy endpoint).
    • Mule flows that implement complex logic:
      • Advanced Routing: Dynamically routing requests to different backend services based on request content, headers, or runtime conditions.
      • Data Transformation: Modifying request payloads (e.g., converting XML to JSON, enriching with additional data) before forwarding to the backend, or transforming backend responses before sending them to the client.
      • Service Orchestration/Aggregation: Calling multiple backend services, combining their responses, and presenting a unified response to the client. This effectively creates a composite API.
      • Error Handling and Fallbacks: Implementing custom error strategies and fallback mechanisms if backend services are unavailable.
      • Security Logic: Implementing custom authentication, authorization, or token validation.
  • Deployment: This Mule application is then deployed to CloudHub or a customer-hosted runtime.
  • API Auto-Discovery: To apply policies and gain governance benefits from API Manager, this deployed Mule application is configured for API Auto-Discovery. This mechanism links the deployed application instance to its corresponding API definition in API Manager, allowing API Manager to govern the application as if it were a proxy.
  • Benefits: This approach provides maximum flexibility and power, allowing the API gateway layer to perform significant value-added logic beyond simple forwarding. It enables the creation of sophisticated facade APIs, experience APIs, and mediation layers.
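The "advanced routing" idea can be illustrated with a small sketch of content-based routing, the kind of decision a Mule flow with a choice router would make. The backend URLs and the header name are invented for illustration:

```python
# Sketch of content-based routing in an "intelligent proxy": choose a
# backend based on a request header. URLs and header names are invented.
BACKENDS = {
    "v1": "https://legacy.internal.example/orders",
    "v2": "https://orders-service.internal.example/orders",
}

def route(headers: dict) -> str:
    """Return the backend URL a request would be forwarded to."""
    # Default to the stable version when no version header is supplied.
    version = headers.get("X-Api-Version", "v1")
    return BACKENDS.get(version, BACKENDS["v1"])

print(route({"X-Api-Version": "v2"}))
print(route({}))
```

In a real Mule application, the same decision would be followed by an HTTP Request operation toward the chosen backend, possibly with a DataWeave transformation in between.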

3. API Auto-Discovery: Linking Implementations to Governance

API Auto-Discovery is a crucial feature that connects a running Mule application (whether it's an API implementation or a more complex proxy application) to an API definition in Anypoint API Manager.

  • Mechanism: When you develop a Mule application that exposes an API, you embed an "API Auto-Discovery" global element in its configuration. This element references the API's ID and version from Anypoint Exchange.
  • Functionality: When the Mule application starts, it registers itself with API Manager. This registration allows API Manager to recognize the application as the implementation of the specified API. Consequently, any policies applied to that API in API Manager will automatically be enforced by the running Mule application.
  • Importance: This enables a separation of concerns: developers focus on building the API's business logic, while API administrators/governance teams manage policies through API Manager, without requiring code changes. It ensures consistent governance across all API implementations, regardless of their complexity or specific deployment model.
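In a Mule 4 application, auto-discovery is declared as a global element along these lines (a fragment, assuming the `api-gateway` namespace is declared in the configuration file; the property name and flow name are illustrative, and the actual `apiId` value is copied from API Manager):

```xml
<!-- Links this running application to its API definition in API Manager.
     apiId comes from API Manager; the flow name is illustrative. -->
<api-gateway:autodiscovery
    apiId="${api.id}"
    flowRef="jsonplaceholder-proxy-main-flow" />
```

Once the application starts with this element in place, policies applied in API Manager are enforced by the application itself, with no code changes required.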

4. Load Balancer and High Availability for the Gateway

For production environments, the API gateway itself must be highly available and capable of handling significant traffic.

  • CloudHub Shared Load Balancer (SLB): For CloudHub deployments, MuleSoft automatically provides a shared load balancer that distributes traffic across multiple workers of your proxy application. This is included by default.
  • CloudHub Dedicated Load Balancer (DLB): For more advanced requirements, such as custom domain names, specific SSL certificates, or VPC integration, you can provision a Dedicated Load Balancer (DLB). A DLB offers more control and can serve as a single entry point for multiple applications within a VPC.
  • High Availability: By deploying your proxy application with multiple workers (e.g., 2 or more) in CloudHub, you achieve high availability. If one worker fails, the load balancer automatically directs traffic to the healthy workers, ensuring continuous API access.

  • Horizontal Scaling: As traffic increases, you can easily scale out your proxy by increasing the number of workers in Runtime Manager, allowing the gateway to handle larger loads seamlessly.

5. VPC and Secure Networking

Securing communication between your API gateway proxy and your backend services, especially when they reside in private networks, is paramount.

  • Virtual Private Cloud (VPC): In CloudHub, you can provision a VPC (Virtual Private Cloud) that is logically isolated from other networks. This VPC can then be peered with your corporate network (e.g., AWS VPC, Azure VNet) via VPN or Direct Connect.
  • Secure Backend Connectivity: Deploying your API proxy within this CloudHub VPC allows it to securely access backend services that are also within or connected to your corporate network, without exposing them to the public internet. This ensures that the communication path remains private and protected.
  • Firewall Rules: Within the VPC, you can configure firewall rules (security groups) to control inbound and outbound traffic, adding another layer of security.
  • Static IP Addresses: For situations where your backend services' firewalls require whitelisting specific IP addresses, you can provision static IP addresses for your CloudHub applications within a VPC.

6. Integrating with External Identity Providers (OAuth 2.0, OpenID Connect)

Modern API gateways frequently integrate with external identity providers to manage user authentication and authorization.

  • OAuth 2.0 and OpenID Connect: MuleSoft's API gateway supports common standards like OAuth 2.0 (for delegated authorization) and OpenID Connect (for authentication built on OAuth 2.0).
  • Policy-Driven Integration: You can apply policies like "OAuth 2.0 Validation" or "JWT Validation" to your API proxies. These policies configure the gateway to:
    • Intercept incoming requests with access tokens.
    • Communicate with an external OAuth provider (e.g., Okta, Auth0, PingFederate, Salesforce) to validate the token's authenticity and expiration.
    • Extract scopes or claims from the token to determine authorization.
    • Optionally transform or enrich the request with user information before forwarding it to the backend.
  • Benefits: This offloads complex security logic from your backend services, centralizes identity management, and enables single sign-on (SSO) across multiple APIs.
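At its core, the JWT validation step verifies a signature and then inspects claims such as scopes. The toy HS256 sketch below uses only Python's standard library; the shared secret is invented, and a production policy would instead use the identity provider's keys and full RFC 7519 compliance:

```python
import base64, hashlib, hmac, json

SECRET = b"demo-shared-secret"  # invented shared secret, for illustration only

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(claims: dict) -> str:
    """Build a minimal HS256 JWT (header.payload.signature)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def validate(token: str, required_scope: str) -> bool:
    """Check the signature, then check the scope claim -- as a gateway would."""
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or foreign token -> reject
    pad = "=" * (-len(payload) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    return required_scope in claims.get("scope", "").split()

token = make_token({"sub": "alice", "scope": "read:profile"})
print(validate(token, "read:profile"))  # True
print(validate(token, "admin"))         # False
```

A gateway policy does exactly this kind of work on every request, which is why offloading it from backend services is so valuable.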

By mastering these advanced concepts, you can leverage MuleSoft's Anypoint Platform to build a highly sophisticated, secure, and resilient API gateway architecture that scales with your enterprise needs and adapts to evolving integration challenges.

Best Practices for API Proxy Management

Effective API gateway and proxy management extends beyond mere configuration; it involves adopting a set of best practices that ensure security, performance, maintainability, and scalability across the entire API lifecycle.

1. Granular Policy Application

While it's easy to apply policies globally (/*), fine-tune them to specific resources and HTTP methods where appropriate.

  • Just-in-Time Enforcement: Not all policies are relevant for every endpoint. For instance, a GET request to a public resource might not need client ID enforcement, but a POST request to an update endpoint certainly would. Applying policies only where they are needed reduces overhead and improves clarity.
  • Layered Security: Combine multiple policies for robust security. For example, use JWT validation for initial authentication, followed by an IP Whitelisting policy for an extra layer of defense, and then a Rate Limiting policy to protect against abuse.
  • Performance Impact: Be mindful that each policy adds a small amount of processing overhead. Over-applying policies can cumulatively impact performance, especially under high load.

2. Strategic API Versioning

Plan for the evolution of your APIs from the outset. Proxies are instrumental in managing multiple API versions gracefully.

  • Versioning Strategy: Adopt a clear versioning strategy (e.g., URI-based api/v1/resource, header-based Accept: application/vnd.myapi.v2+json).
  • Proxy-led Versioning: Use API gateway proxies to route requests to different backend versions. When a new version of a backend API is deployed, you can update the proxy configuration to point to the new implementation while maintaining older proxy endpoints for backward compatibility, giving clients ample time to migrate.
  • Deprecation Management: Clearly communicate deprecation plans for older API versions and use the API gateway to return informative messages for deprecated endpoints.
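Proxy-led versioning and deprecation handling can be sketched together: route supported versions to their backends, and answer retired versions with an informative response. The backend URL and messages below are invented for illustration:

```python
# Sketch of proxy-led URI versioning: map the version segment of the path
# to a backend, and answer a retired version with an informative message.
# The backend URL is invented; None marks a retired version.
BACKENDS = {"v1": None, "v2": "https://orders-v2.internal.example"}

def handle(path: str):
    """Return (status, body) for a request path like 'api/v1/orders'."""
    version = path.strip("/").split("/")[1]   # 'api/v1/orders' -> 'v1'
    backend = BACKENDS.get(version)
    if backend is None:
        return 410, {"message": f"{version} is deprecated; migrate to /api/v2"}
    resource = path.split("/", 2)[-1]         # 'orders' and anything after it
    return 200, {"forwarded_to": f"{backend}/{resource}"}

print(handle("api/v1/orders"))
print(handle("api/v2/orders"))
```

The older proxy endpoint stays up and returns a clear migration message instead of silently failing, which is exactly the graceful deprecation behavior described above.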

3. Comprehensive Documentation

Well-documented APIs are easier to consume and maintain, reducing support overhead.

  • API Specification: Always start with a clear API specification (OpenAPI, RAML) that defines your API's resources, methods, parameters, and responses. Publish this to Anypoint Exchange.
  • Developer Portal: Leverage Anypoint Exchange as a developer portal to provide self-service access to API documentation, examples, and client registration. Ensure the documentation reflects the behavior of the proxy, including any policies that modify requests or responses.
  • Internal Documentation: Maintain internal documentation for your API proxies, detailing their purpose, backend APIs, deployed policies, and any specific configurations. This is invaluable for troubleshooting and onboarding new team members.

4. Robust Monitoring and Alerting

Proactive monitoring is essential for maintaining the health and performance of your API gateway and underlying APIs.

  • Real-time Dashboards: Utilize Anypoint API Manager's analytics dashboards for real-time visibility into API traffic, latency, and error rates.
  • Custom Alerts: Configure alerts for critical metrics. For example, set up alerts for:
    • High error rates (e.g., 5xx errors exceeding a threshold).
    • Spikes in latency.
    • Sudden drops in traffic (indicating a potential outage).
    • Frequent policy violations.
  • Integrated Monitoring: Integrate MuleSoft's monitoring capabilities with your existing enterprise monitoring and alerting systems (e.g., Splunk, DataDog, PagerDuty) for a unified operational view.
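The alert conditions listed above amount to simple threshold checks over a metrics snapshot. A sketch, with invented field names and thresholds (real alerts would be configured in API Manager or your monitoring system, not hand-coded):

```python
# Sketch of the alert conditions above, evaluated over a metrics snapshot.
# Field names and thresholds are invented for illustration.
def check_alerts(metrics: dict) -> list:
    alerts = []
    if metrics["error_rate_5xx"] > 0.05:
        alerts.append("high 5xx error rate")
    if metrics["p95_latency_ms"] > 1000:
        alerts.append("latency spike")
    if metrics["requests_per_min"] < 1:
        alerts.append("traffic drop - possible outage")
    if metrics["policy_violations_per_min"] > 50:
        alerts.append("frequent policy violations")
    return alerts

print(check_alerts({"error_rate_5xx": 0.12, "p95_latency_ms": 400,
                    "requests_per_min": 200, "policy_violations_per_min": 3}))
```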

5. Security by Design

Embed security considerations into every stage of your API proxy design and implementation.

  • Least Privilege: Configure access controls and permissions for API Manager users based on the principle of least privilege, ensuring individuals only have the necessary permissions.
  • Secure Configuration: Avoid hardcoding sensitive credentials in proxy configurations. Utilize secure properties, Anypoint Security features, or external secret management systems.
  • Regular Audits: Conduct regular security audits of your API proxy configurations and policies to identify and rectify potential vulnerabilities.
  • Input Validation: Even with backend validation, the proxy should perform basic input validation to prevent malformed requests from reaching the backend, reducing the attack surface.

6. Environment Management (Dev, Test, Prod)

Maintain strict separation between development, testing, and production environments for your API proxies.

  • Separate Deployments: Deploy separate instances of your API proxies and their associated policies for each environment. This prevents changes in development from impacting production.
  • Automated Promotion: Implement CI/CD pipelines to automate the promotion of API proxy configurations and policies across environments, ensuring consistency and reducing manual errors.
  • Environment-Specific Configuration: Use environment-specific configuration properties for backend URLs, credentials, and policy thresholds (e.g., lower rate limits in dev, higher in prod).

7. Automation with CI/CD

Automate the deployment and management of your API proxies and policies as much as possible.

  • MuleSoft Maven Plugin: Leverage the MuleSoft Maven plugin for automated deployment of proxy applications (if using custom implementations) and for managing API definitions.
  • Anypoint Platform APIs: Use the Anypoint Platform's management APIs to programmatically configure APIs, apply policies, and manage client applications. This enables "GitOps" or "Policy as Code" approaches.
  • Reduced Manual Errors: Automation significantly reduces the risk of human error during configuration changes, leading to more reliable and consistent API gateway deployments.

By adhering to these best practices, organizations can maximize the value derived from their MuleSoft API gateway implementation, building a resilient, secure, and highly manageable API ecosystem that supports current and future business needs.

The Broader API Ecosystem and The Role of API Management Platforms

While MuleSoft provides exceptionally powerful API gateway capabilities perfectly suited for enterprise integration and managing RESTful APIs, the broader API ecosystem is continuously expanding and evolving. The rise of new technologies, particularly Artificial Intelligence, introduces new challenges and opportunities for API management. The need for specialized platforms that cater to these emerging requirements is becoming increasingly evident.

The complexity of modern applications often involves not just traditional REST APIs but also an ever-growing array of AI models, each with its own invocation patterns, authentication mechanisms, and data formats. Managing this diverse landscape with traditional API gateway solutions can become cumbersome, leading to fragmented governance, increased development overhead, and inconsistent security postures. This is where the landscape of API management platforms expands to include specialized solutions designed to address these distinct challenges.

Consider, for example, the unique requirements of integrating and managing hundreds of different AI models. Each model might have a slightly different input schema, output format, or authentication method. Developers often struggle to normalize these differences, leading to custom code for each AI integration, which is both time-consuming and prone to errors. Furthermore, tracking usage and costs across numerous AI services can be a significant operational challenge.

This brings us to a complementary and innovative solution in the API management space: APIPark. APIPark is an open-source AI gateway and API Management Platform designed specifically to simplify the integration, management, and deployment of both AI and REST services. It addresses many of the emerging complexities in the modern API landscape by offering distinct features that complement broader API gateway strategies.

One of APIPark's standout capabilities is its quick integration of 100+ AI models with a unified management system for authentication and cost tracking. This means that instead of developers having to learn the specifics of each AI model's API, APIPark provides a standardized layer. This standardization is further enhanced by its unified API format for AI invocation, ensuring that changes in underlying AI models or prompts do not disrupt consuming applications or microservices. This significantly simplifies AI usage, reduces maintenance costs, and accelerates the adoption of AI within enterprises.

Moreover, APIPark allows users to encapsulate prompts into REST APIs, effectively transforming complex AI model interactions into simple, consumable REST endpoints. This is powerful for creating tailored AI services, such as sentiment analysis or translation APIs, without deep AI expertise. For broader API governance, APIPark offers end-to-end API lifecycle management, assisting with design, publication, invocation, and decommissioning, mirroring the comprehensive control provided by leading API gateway solutions. It helps regulate traffic forwarding, load balancing, and versioning, ensuring robust operation.

From an operational standpoint, APIPark focuses on performance and visibility. It boasts performance rivaling Nginx, capable of achieving over 20,000 TPS with modest hardware, and supports cluster deployment for large-scale traffic. Its detailed API call logging and powerful data analysis features provide businesses with the insights needed to trace issues, monitor long-term trends, and perform preventive maintenance, which are critical for maintaining system stability and data security. The platform also fosters collaboration through API service sharing within teams and offers independent API and access permissions for each tenant, allowing for efficient resource utilization and multi-tenancy. For controlled access, APIPark supports API resource access that requires approval, adding a crucial layer of security against unauthorized calls.

In essence, while MuleSoft excels as a robust API gateway for building sophisticated enterprise integration and managing traditional RESTful APIs with powerful proxy capabilities, platforms like APIPark offer specialized, open-source solutions particularly strong in the burgeoning AI API space. They provide innovative approaches to integrating and governing the increasingly complex array of AI models, ensuring that organizations can confidently leverage the full spectrum of modern API and AI services. The intelligent selection and integration of such diverse API management platforms allow enterprises to build a truly comprehensive, resilient, and forward-looking digital architecture.

Conclusion

The journey through creating and managing API gateway proxies in MuleSoft's Anypoint Platform underscores their critical role in modern API architectures. We've explored how an API proxy serves as an indispensable intermediary, providing a robust layer for security, performance optimization, and centralized governance over backend services. From shielding sensitive implementations to enforcing stringent access controls and enhancing system scalability through caching and load balancing, the benefits of a well-implemented API gateway are profound and far-reaching.

MuleSoft, with its comprehensive Anypoint Platform, offers a powerful and intuitive environment for defining, deploying, and governing these proxies. Its Anypoint API Manager acts as the central command center, enabling administrators to easily apply a rich suite of policies—from client ID enforcement and rate limiting to JWT validation and custom transformations—without requiring any modifications to the backend API code. This policy-driven approach fosters consistency, reduces operational overhead, and ensures that all exposed APIs adhere to enterprise-wide standards. The platform's flexibility, demonstrated through various deployment options and the ability to build intelligent proxy implementations with Mule applications, ensures it can adapt to even the most complex integration challenges.

Furthermore, we delved into advanced concepts such as custom policies, API auto-discovery, dedicated load balancers, and secure networking through VPCs, illustrating MuleSoft's capability to support highly sophisticated and resilient API ecosystems. Adhering to best practices in granular policy application, strategic versioning, comprehensive documentation, and robust monitoring is paramount to unlocking the full potential of your API gateway solution.

In an increasingly interconnected world, where the volume and diversity of APIs continue to grow, the ability to effectively manage and secure them is no longer optional but a fundamental requirement for digital success. By mastering the art of creating API proxies in MuleSoft, organizations can build a resilient, secure, and high-performing API layer that accelerates innovation, streamlines integration, and establishes a strong foundation for future growth. The strategic adoption of a powerful API gateway ensures that your API architecture is not just functional, but truly future-proof, capable of adapting to new technologies and evolving business demands, including the seamless integration of burgeoning AI APIs with specialized platforms like APIPark.

Comparison of Common API Proxy Policies in MuleSoft

Each policy below is summarized by its description, key benefit, and a typical MuleSoft application example.

  • Rate Limiting
    • Description: Controls the maximum number of requests a client can make within a specified time window.
    • Key Benefit: Prevents backend overload, ensures fair resource usage, and protects against abuse.
    • Example: Apply to a public GET /products endpoint to limit anonymous users to 100 requests per minute.
  • Client ID Enforcement
    • Description: Requires client applications to provide valid client credentials (ID and Secret) for access.
    • Key Benefit: Authentication and basic access control, ensuring only registered applications consume the API.
    • Example: Secure a POST /orders endpoint, requiring a valid client_id and client_secret in headers for order submission.
  • JWT Validation
    • Description: Validates JSON Web Tokens present in incoming requests for authentication and authorization.
    • Key Benefit: Secure delegated access, enables single sign-on (SSO), and verifies user identity and permissions.
    • Example: Protect a GET /userprofile endpoint, validating a JWT token issued by an external OAuth provider.
  • CORS
    • Description: Manages Cross-Origin Resource Sharing, allowing or disallowing web browsers to make cross-origin requests.
    • Key Benefit: Enables web application integration while maintaining security.
    • Example: Configure allowed origins (https://mywebapp.com), HTTP methods, and headers for a frontend application consuming the API.
  • IP Whitelisting
    • Description: Restricts API access to requests originating from a specified list of IP addresses or ranges.
    • Key Benefit: Enhanced security by limiting access to trusted networks or clients.
    • Example: Allow only requests from the corporate VPN's IP range to access an internal administration API.
  • Message Logging
    • Description: Records details of API requests and responses (headers, payload, metadata) to an external logging system.
    • Key Benefit: Auditing, debugging, and compliance, providing visibility into API traffic.
    • Example: Log all incoming request headers and partial payloads for a sensitive POST /data API to an ELK stack.
  • Transform Message
    • Description: Modifies the request or response payload and/or headers before forwarding.
    • Key Benefit: Data standardization, protocol conversion, and request/response enrichment.
    • Example: Convert an incoming XML request payload to JSON for a backend API that only accepts JSON.

5 Frequently Asked Questions (FAQs)

1. What is the fundamental difference between an API Proxy and an API Gateway in MuleSoft?

While often used interchangeably, an API proxy in MuleSoft primarily focuses on creating an intermediary endpoint for a single backend API, providing abstraction and enforcing policies specifically for that API. An API gateway, like MuleSoft's Anypoint API Manager, is a broader concept that acts as a single entry point for multiple APIs. It encompasses the capabilities of numerous API proxies, offering centralized routing, comprehensive policy management across a portfolio of APIs, aggregation, orchestration, and a unified developer experience. Essentially, the API gateway uses API proxies as a core mechanism to achieve its overarching goal of enterprise-wide API management.

2. Can I apply multiple policies to a single API Proxy in MuleSoft, and how are they enforced?

Yes, you can apply multiple policies to a single API Proxy in MuleSoft. When several policies are applied, they are enforced sequentially based on their defined order. In Anypoint API Manager, you can easily drag and drop policies to reorder them. The processing order is crucial, as a policy enforced earlier in the chain (e.g., Client ID Enforcement) might block a request before it reaches a later policy (e.g., Rate Limiting). MuleSoft processes policies from top to bottom as listed in the API Manager, ensuring a predictable execution flow.
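The top-to-bottom enforcement described above behaves like an ordered chain of checks where the first policy to block a request short-circuits the rest. A conceptual sketch (not MuleSoft internals; policy functions and limits are invented):

```python
# Conceptual sketch of ordered policy enforcement: each policy either
# passes the request along (None) or blocks it with a status code.
def client_id_enforcement(request):
    return None if "client_id" in request["headers"] else 401

def rate_limiting(request, limit=2, counts={}):
    # counts is a deliberately shared default: a tiny in-memory counter.
    key = request["headers"].get("client_id", "anonymous")
    counts[key] = counts.get(key, 0) + 1
    return None if counts[key] <= limit else 429

POLICY_CHAIN = [client_id_enforcement, rate_limiting]  # order matters

def enforce(request):
    for policy in POLICY_CHAIN:
        blocked = policy(request)
        if blocked is not None:
            return blocked  # short-circuit: later policies never run
    return 200  # request is forwarded to the backend

req = {"headers": {"client_id": "abc"}}
print(enforce({"headers": {}}))  # 401: blocked before rate limiting even runs
print(enforce(req), enforce(req), enforce(req))  # 200 200 429
```

The first call shows why ordering matters: an unauthenticated request never reaches (or consumes quota from) the rate limiter.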

3. How does MuleSoft handle API versioning with proxies?

MuleSoft provides robust support for API versioning using proxies. You can deploy different versions of your backend API (e.g., api/v1/products, api/v2/products) and manage each version as a separate API in Anypoint API Manager. Each API definition can then have its own proxy, routing traffic to the corresponding backend version. This allows you to offer backward compatibility for older client applications while developing and deploying new versions of your API. The proxy acts as a stable contract for consumers, shielding them from backend changes and facilitating smooth transitions between API versions.

4. Is it possible to use a custom domain for a MuleSoft API Proxy deployed to CloudHub?

Yes, it is definitely possible to use a custom domain for a MuleSoft API Proxy deployed to CloudHub. To achieve this, you need to configure a CloudHub Dedicated Load Balancer (DLB). Once a DLB is provisioned, you can associate your custom domain (e.g., api.mycompany.com) with it and upload your SSL certificates. You'll then configure your DNS provider to point your custom domain to the DLB's CNAME. Your API Proxy applications deployed within the CloudHub VPC will then be accessible via your custom domain through the DLB.

5. What are the key security benefits of using an API Proxy in MuleSoft?

Using an API Proxy in MuleSoft offers significant security benefits by acting as a strong defense layer for your backend services. Key benefits include: Centralized Authentication and Authorization (e.g., Client ID enforcement, JWT validation, OAuth 2.0) offloading security logic from backends; Threat Protection (e.g., against injection attacks, oversized payloads, DDoS attempts); IP Whitelisting/Blacklisting to control network access; and Data Masking/Encryption capabilities to protect sensitive information in transit. By enforcing these policies at the API gateway level, you ensure consistent security across all exposed APIs, simplify security management, and significantly reduce the attack surface on your backend systems.

🚀 You can securely and efficiently call the OpenAI API on APIPark in just two steps:

Step 1: Deploy the APIPark AI gateway in 5 minutes.

APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.

curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.


Step 2: Call the OpenAI API.

APIPark System Interface 02