pref0 + LangChain

Add preference learning to your LangChain agents. pref0 extracts preferences from conversations and injects them into your chain's system prompt automatically.

Quick start

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
import requests

PREF0_API = "https://api.pref0.com"
PREF0_KEY = "pref0_sk_..."

def get_preferences(user_id: str) -> str:
    res = requests.get(
        f"{PREF0_API}/v1/profiles/{user_id}",
        headers={"Authorization": f"Bearer {PREF0_KEY}"},
        timeout=10,
    )
    res.raise_for_status()
    prefs = res.json().get("preferences", [])
    # Only inject preferences pref0 is reasonably confident about
    return "\n".join(
        f"- {p['key']}: {p['value']}"
        for p in prefs if p["confidence"] >= 0.5
    )

def chat(user_id: str, message: str):
    learned = get_preferences(user_id)
    system = f"You are a helpful assistant.\n\nLearned preferences:\n{learned}"
    llm = ChatOpenAI()
    return llm.invoke([SystemMessage(content=system), HumanMessage(content=message)])

# After the conversation ends, send the transcript to pref0 for extraction
def track(user_id: str, messages: list):
    res = requests.post(
        f"{PREF0_API}/v1/track",
        headers={"Authorization": f"Bearer {PREF0_KEY}"},
        json={"userId": user_id, "messages": messages},
        timeout=10,
    )
    res.raise_for_status()
```

Why use pref0 with LangChain

Drop-in integration

Add pref0 to any LangChain chain or agent with a few lines of Python. No changes to your existing chain structure.
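One way to keep the integration drop-in is to isolate prompt composition in a small helper, so your chain definition never changes. A minimal sketch, assuming the prompt format from the quick start (the `with_preferences` helper name is ours, not part of the pref0 API):

```python
def with_preferences(base_system: str, learned: str) -> str:
    """Append learned preferences to an existing system prompt.

    `learned` is the newline-joined preference list returned by
    get_preferences() in the quick start.
    """
    if not learned:
        # No stored preferences yet: leave the prompt untouched
        return base_system
    return f"{base_system}\n\nLearned preferences:\n{learned}"
```

Pass the result wherever your chain already sets its system message; the rest of the chain stays as it is.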

Works with any LLM

pref0 is LLM-agnostic. Use it with OpenAI, Anthropic, or any model supported by LangChain.
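Because the preferences travel inside the prompt, nothing ties the integration to one provider. A sketch of a model-agnostic wrapper (the `chat_with` name is ours; any object with a LangChain-style `.invoke` works):

```python
def chat_with(llm, learned: str, message: str):
    """Run one turn against any LangChain chat model.

    (role, content) tuples are accepted by LangChain chat models, so the
    same call works with ChatOpenAI, ChatAnthropic, or any other model.
    """
    system = f"You are a helpful assistant.\n\nLearned preferences:\n{learned}"
    return llm.invoke([("system", system), ("human", message)])
```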

Automatic extraction

Send your conversation history to /track and pref0 extracts preferences automatically. No manual labeling.
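A sketch of building the `/track` request body. The message shape here (`{"role", "content"}` dicts) and the `build_track_payload` helper are our assumptions for illustration; check the pref0 API reference for the canonical schema:

```python
def build_track_payload(user_id: str, messages: list) -> dict:
    """Assemble the JSON body for POST /v1/track.

    Assumed message shape: {"role": "user" | "assistant", "content": "..."}.
    """
    for m in messages:
        if not {"role", "content"} <= m.keys():
            raise ValueError(f"malformed message: {m!r}")
    return {"userId": user_id, "messages": messages}
```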

Confidence scoring

Each preference carries a confidence score that grows as the same preference is confirmed across conversations. Inject only high-confidence preferences into your prompts.
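The 0.5 threshold in the quick start is just a starting point; you can make it a parameter and tighten it for prompts where a wrong guess is costly. A sketch (field names match the profile response shown in the quick start; the `filter_preferences` helper is ours):

```python
def filter_preferences(prefs: list, threshold: float = 0.5) -> list:
    """Keep only preferences whose confidence meets the threshold."""
    return [p for p in prefs if p["confidence"] >= threshold]
```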


Add preference learning to LangChain

Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.