There are several notable AI Gateways to choose from. For this guide, we'll use Portkey as our AI Gateway.
Set up Portkey
You can set up Portkey locally, or use the hosted version at portkey.ai. Once you have Portkey set up, continue with the following steps.
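If you choose the local route, the open-source Portkey Gateway can typically be started with npx (command taken from Portkey's gateway package; verify against the current Portkey docs):

```shell
# Start the open-source Portkey Gateway locally
# (by default it listens on http://localhost:8787)
npx @portkey-ai/gateway
```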
Add Portkey to your project
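The Node.js SDK is published on npm as `portkey-ai`:

```shell
# Install the Portkey Node SDK into your Palico project
npm install portkey-ai
```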
Using Portkey to call OpenAI models
import Portkey from "portkey-ai";

const handler: Chat = async ({ userMessage }) => {
  // Create a Portkey client pointed at OpenAI
  const portkey = new Portkey({
    Authorization: "Bearer sk-xxxxx", // your OpenAI API key
    provider: "openai",
  });

  // Call the chat completions API through the gateway
  const response = await portkey.chat.completions.create({
    messages: [{ role: "user", content: userMessage }],
    model: "gpt-3.5-turbo",
  });

  return {
    message: response.choices[0].message.content,
  };
};
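The handler above assumes `choices[0].message.content` is always present. A small defensive helper can guard against empty completions; this is a sketch, and the `ChatCompletionLike` interface here is a simplified assumption, not Portkey's actual response types:

```typescript
// Simplified, assumed shape of the relevant part of a chat-completions response
interface ChatCompletionLike {
  choices: { message: { content: string | null } }[];
}

// Return the first completion's text, or throw if the gateway returned nothing
function extractMessage(response: ChatCompletionLike): string {
  const content = response.choices[0]?.message?.content;
  if (content == null) {
    throw new Error("LLM returned an empty completion");
  }
  return content;
}
```

The handler could then end with `return { message: extractMessage(response) };` instead of indexing into `choices` directly.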
Using Portkey to call any LLM model with App Config
To easily swap between LLM models in your application, you can use App Config together with Portkey. This lets you switch to any LLM model by updating the AppConfig, without changing your code.
import Portkey from "portkey-ai";

interface AppConfig {
  model: string;
  provider: string;
}

const handler: Chat = async ({ userMessage, appConfig }) => {
  const { model, provider } = appConfig;

  // Create a Portkey client for the configured provider
  const portkey = new Portkey({
    Authorization: "Bearer sk-xxxxx", // API key for the configured provider
    provider: provider,
    // ...additional authorization params
  });

  const response = await portkey.chat.completions.create({
    messages: [{ role: "user", content: userMessage }],
    model: model,
  });

  return {
    message: response.choices[0].message.content,
  };
};
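With this handler in place, switching models is just a matter of supplying a different AppConfig. For example (the model and provider strings below are illustrative assumptions, not values prescribed by Palico or Portkey):

```typescript
// Same shape as the AppConfig used by the handler
interface AppConfig {
  model: string;
  provider: string;
}

// Two hypothetical configurations served by the same, unchanged handler code
const openAIConfig: AppConfig = {
  model: "gpt-3.5-turbo",
  provider: "openai",
};

const anthropicConfig: AppConfig = {
  model: "claude-3-haiku-20240307",
  provider: "anthropic",
};
```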
Using AppConfig to swap LLM models makes it easy to experiment with different variations of your application locally, or to run A/B tests in production, without changing your code.
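One way to run such an A/B test is to bucket users deterministically, so each user always sees the same variant. A minimal sketch, assuming a two-way 50/50 split; the hash function and the `pickVariant` helper are arbitrary illustrative choices, not a Palico or Portkey API:

```typescript
// Same shape as the AppConfig used by the handler
interface AppConfig {
  model: string;
  provider: string;
}

// Deterministically assign a user to one of two config variants (50/50 split)
function pickVariant(userId: string, a: AppConfig, b: AppConfig): AppConfig {
  // Simple stable string hash; any deterministic hash works here
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 2 === 0 ? a : b;
}
```

The chosen config would then be passed to the handler as its `appConfig`, so the two variants run against identical handler code.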