The AI-Powered Response Transformer is a Kong plugin designed to enhance your API responses by seamlessly integrating with the OpenAI chat completion API. The plugin lets you transform the response generated by your upstream server in real time before it is delivered to the client.
- JSON Field Manipulation: You can add or replace JSON fields within the response by providing a customizable prompt.
- Parameterization: The plugin supports parameterization, allowing you to include one of the JSON fields from the response body in your prompt. This enables advanced customization based on the content of the response itself.
Note: This plugin is an updated version of the Response Transformer by Kong Inc., enhancing its capabilities with AI-powered transformations.
Whether you need to enrich your API responses with contextual information, generate dynamic content, or make on-the-fly adjustments, the AI-Powered Response Transformer plugin helps you create customizable and intelligent responses for your API clients.
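For example, suppose the upstream response contains the field `"error": "connection timed out"` (a made-up value for illustration) and an add prompt of `error_detail:Generate human readable message for this error: ${error}` is configured. The plugin resolves the prompt to `Generate human readable message for this error: connection timed out`, sends it to the OpenAI chat completion API, and adds the returned completion to the response body as the `error_detail` field.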
This plugin inherits the configuration parameters from the Kong Response Transformer by Kong Inc. Additionally, it introduces three new configuration parameters:
- `config.add_with_ai`: Specifies how new fields are added to the response body with AI support. It consists of the following sub-parameters:
  - `json`: An array of key-value pairs that you want to add to the JSON response body. You can use parameters in `${FIELD_NAME}` format in the prompt to include one of the JSON fields from the response body. Example: `"json": [ "error_detail:Generate human readable message for this error: ${error}" ]`
  - `max_tokens`: Restricts the maximum number of tokens per OpenAI API request when adding new fields. This helps manage the size and complexity of AI-generated content. (Default: 50)
- `config.replace_with_ai`: Defines how existing fields in the response body are replaced with AI support. It includes the following sub-parameters:
  - `json`: An array of key-value pairs that you want to replace in the JSON response body. You can use parameters in `${FIELD_NAME}` format in the prompt to include one of the JSON fields from the response body. Example: `"json": [ "error:Generate human readable message for this error: ${error}" ]`
  - `max_tokens`: As with `config.add_with_ai`, limits the number of tokens used in AI-generated replacements. (Default: 50)
- `config.openai_api_key`: Sets the OpenAI API key, which is required for making requests to the OpenAI chat completion API.
The following examples provide configurations for enabling the ai-powered-response-transformer plugin on a service:
Note: Replace `SERVICE_NAME|ID` with the id or name of the service that this plugin configuration will target. Also, the examples presume that the response body contains the `username`, `temp`, `humid`, and `error` JSON fields.
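For reference, a hypothetical upstream response body with these fields might look like this (the values are illustrative only):

```json
{
  "username": "ada",
  "temp": 21,
  "humid": 65,
  "error": "ERR_UPSTREAM_TIMEOUT"
}
```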
Declarative (YAML):
```yaml
plugins:
- name: ai-powered-response-transformer
  service: SERVICE_NAME|ID
  config:
    add_with_ai:
      json:
      - "greeting:Generate a greeting message for ${username}."
      - "weather:Generate a weather description for the temperature ${temp} and humidity ${humid}"
      max_tokens: 50
    replace_with_ai:
      json:
      - "error:Generate human readable message for this error: ${error}"
      max_tokens: 100
    openai_api_key: 0296217561490155228da9c17fc555cf9db82d159732f3206638c25f04a285c4
```
Kong Admin API:
```bash
curl -X POST http://localhost:8001/services/SERVICE_NAME|ID/plugins \
  --data "name=ai-powered-response-transformer" \
  --data 'config.add_with_ai.json=greeting:Generate a greeting message for ${username}.' \
  --data 'config.add_with_ai.json=weather:Generate a weather description for the temperature ${temp} and humidity ${humid}' \
  --data "config.add_with_ai.max_tokens=50" \
  --data 'config.replace_with_ai.json=error:Generate human readable message for this error: ${error}' \
  --data "config.replace_with_ai.max_tokens=100" \
  --data "config.openai_api_key=0296217561490155228da9c17fc555cf9db82d159732f3206638c25f04a285c4"
```

Note: The `--data` values that contain `${...}` placeholders are single-quoted so the shell does not expand them before the request is sent.
Here are some future improvements planned for the plugin:
- Headers Transformation: Enhance the plugin to support additional header transformations, including adding, replacing, and appending headers to the response.
- Body Append: Extend the capabilities to append content to the response body, enabling even more dynamic and customized responses.
- Custom Large Language Model (LLM) Integration: Allow users to specify custom large language models (LLMs) for response transformations. This feature will enable fine-grained control and specialized models tailored to specific use cases.
- Request Transformation: Introduce support for request transformation, enabling users to preprocess incoming requests before they reach the upstream server. This can include modifying request parameters or headers based on AI-powered transformations.
- Comprehensive Testing: Expand the test suite to ensure robustness and reliability.