App config is a key-value object that you can pass into your LLM application on each request. Through this object, you control the behavior of your LLM application. Here’s an example of an app config that lets us easily swap between different OpenAI models:
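A minimal sketch of such a config as a Python dictionary. The key names (`model`, `temperature`, `max_tokens`) are illustrative assumptions, not required fields:

```python
# Hypothetical app config; key names are illustrative.
# Swapping models is just a matter of changing one value per request.
app_config = {
    "model": "gpt-4o",       # e.g. change to "gpt-4o-mini" or "gpt-3.5-turbo"
    "temperature": 0.7,
    "max_tokens": 512,
}
```

Because the config travels with the request, no code change or redeploy is needed to try a different model.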
From your external services, such as a frontend application, you can call your agent with an app config. Here’s an example of such a call:
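A sketch of the request payload an external service might send; the endpoint path and field names are assumptions, not a fixed API:

```python
import json

# Hypothetical payload a frontend might POST to the agent,
# e.g. to an endpoint like /agent/invoke (assumed, not a real API).
payload = {
    "input": "Summarize this document.",
    "app_config": {"model": "gpt-4o-mini", "temperature": 0.2},
}

body = json.dumps(payload)  # serialized request body
```

The agent reads `app_config` out of the request and uses it to configure the current run.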
LLM development involves testing many different ideas, so you should build a modular application and use the app config to drive its behavior. Some common use cases for app config are:
Trying different LLM models
Trying different prompts
Trying different RAG versions
Trying different call-chaining strategies
Trying different LLM architectures
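The use cases above all follow the same pattern: the application reads values from the app config instead of hardcoding them. A minimal sketch of config-driven prompt and model selection, where all key names and prompt texts are illustrative assumptions:

```python
# Hypothetical prompt variants; in practice these could be RAG versions,
# call chains, or whole architectures selected the same way.
PROMPTS = {
    "v1": "You are a helpful assistant.",
    "v2": "You are a concise expert. Answer in one sentence.",
}

def build_request(question: str, app_config: dict) -> dict:
    """Assemble an LLM request from the app config instead of hardcoded values."""
    prompt_version = app_config.get("prompt_version", "v1")
    return {
        "model": app_config.get("model", "gpt-4o-mini"),
        "messages": [
            {"role": "system", "content": PROMPTS[prompt_version]},
            {"role": "user", "content": question},
        ],
    }

# Two requests, two different variants, same application code:
request = build_request("What is RAG?", {"model": "gpt-4o", "prompt_version": "v2"})
```

Each variation you want to test becomes a different app config rather than a different code path.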
Additionally, our evaluation framework is built around using app config to test different variations of your LLM application.