A cache for LLM results, backed by Redis: repeated prompts are served from the cache instead of calling the model again.

Example

import { Redis } from "ioredis";
import { ChatOpenAI } from "@langchain/openai";
import { RedisCache } from "@langchain/community/caches/ioredis";

// Create a Redis client and a model whose results are cached for 60 seconds
const redisClient = new Redis();
const model = new ChatOpenAI({
  cache: new RedisCache(redisClient, { ttl: 60 }),
});

// Invoke the model with a prompt
const response = await model.invoke("Do something random!");
console.log(response);

// Remember to disconnect the Redis client when done
await redisClient.disconnect();


Constructors

  • new RedisCache(redisClient: Redis, config?: { ttl?: number }): RedisCache

Properties

redisClient: Redis

  The ioredis client instance used to communicate with the Redis server.

ttl?: number

  Optional time-to-live applied to cached entries, in seconds.
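
A minimal sketch of wiring these together against a non-default server; the connection URL and TTL value below are placeholders:

import { Redis } from "ioredis";
import { RedisCache } from "@langchain/community/caches/ioredis";

// Connect to a specific Redis instance (placeholder URL) and expire
// cached generations after one hour
const client = new Redis("redis://localhost:6379");
const cache = new RedisCache(client, { ttl: 3600 });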

Methods

  • lookup(prompt: string, llmKey: string): Promise<null | Generation[]>

    Retrieves data from the Redis server using a prompt and an LLM key. Returns null if no matching entry is found.

    Parameters

    • prompt: string

      The prompt used to find the data.

    • llmKey: string

      The LLM key used to find the data.

    Returns Promise<null | Generation[]>

    The corresponding data as an array of Generation objects, or null if not found.
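
    A sketch of calling lookup directly, assuming the cache instance built above; LangChain normally derives the llmKey internally from the model's parameters, so "example-llm-key" is a placeholder:

    // Look up a previously cached result for this prompt/key pair
    const cached = await cache.lookup("Do something random!", "example-llm-key");
    if (cached !== null) {
      // Each Generation carries the cached model output in its text field
      console.log(cached[0].text);
    }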

  • update(prompt: string, llmKey: string, value: Generation[]): Promise<void>

    Updates the data in the Redis server using a prompt and an LLM key.

    Parameters

    • prompt: string

      The prompt used to store the data.

    • llmKey: string

      The LLM key used to store the data.

    • value: Generation[]

      The data to be stored, represented as an array of Generation objects.

    Returns Promise<void>
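
    A matching sketch for update, storing a single generation under the same placeholder key; the Generation type is imported from @langchain/core/outputs:

    import type { Generation } from "@langchain/core/outputs";

    // Cache a hand-built generation for this prompt/key pair
    const value: Generation[] = [{ text: "A cached response." }];
    await cache.update("Do something random!", "example-llm-key", value);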

""