Add preference learning to OpenAI Agents. pref0 learns from user corrections and serves preferences to your agent's instructions.
```python
from agents import Agent, Runner
import requests

PREF0_API = "https://api.pref0.com"
PREF0_KEY = "pref0_sk_..."

def get_preferences(user_id: str) -> str:
    res = requests.get(
        f"{PREF0_API}/v1/profiles/{user_id}",
        headers={"Authorization": f"Bearer {PREF0_KEY}"},
    )
    prefs = res.json().get("preferences", [])
    return "\n".join(
        f"- {p['key']}: {p['value']}"
        for p in prefs
        if p["confidence"] >= 0.5
    )

learned = get_preferences("user_abc123")

agent = Agent(
    name="Coding Assistant",
    instructions=f"You are a helpful coding assistant.\n\nLearned user preferences:\n{learned}",
)

result = Runner.run_sync(agent, "Help me set up a new project")
print(result.final_output)
```

Add learned preferences directly to the `Agent` instructions. The agent follows them like any other instruction.
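The confidence filter in `get_preferences` can be exercised on its own. A minimal sketch — the `format_preferences` helper and the sample records are illustrative, not part of the pref0 API:

```python
# Sketch of the confidence filter used above; sample records are made up.
def format_preferences(prefs: list[dict], min_confidence: float = 0.5) -> str:
    """Keep only preferences the service is reasonably sure about."""
    return "\n".join(
        f"- {p['key']}: {p['value']}"
        for p in prefs
        if p["confidence"] >= min_confidence
    )

sample = [
    {"key": "language", "value": "Python", "confidence": 0.9},
    {"key": "style", "value": "tabs", "confidence": 0.3},  # dropped: below 0.5
]
print(format_preferences(sample))  # → "- language: Python"
```

Tuning `min_confidence` trades coverage for reliability: a lower threshold injects more preferences into the prompt, including ones the service is still unsure about.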
Preferences are available to the agent alongside any tools you've defined. No conflicts.
When agents hand off between each other, preferences persist through the entire workflow.
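In practice this works because the preference block lives in each agent's instructions: build every agent in the workflow from the same fetched string and a handoff changes the role, not what the user has taught the system. A sketch with plain strings — in the SDK, each would be an `Agent(...)` with these strings as its `instructions`; the roles here are illustrative:

```python
# One fetched preference string, reused across every agent in a workflow.
learned = "- language: Python\n- style: concise answers"

def build_instructions(role: str, preferences: str) -> str:
    """Compose role-specific instructions around the shared preference block."""
    return f"You are a {role}.\n\nLearned user preferences:\n{preferences}"

triage = build_instructions("triage agent", learned)
coder = build_instructions("coding assistant", learned)

# Both agents carry identical preference text, so preferences survive handoffs.
assert learned in triage and learned in coder
```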
Fetch preferences before creating the agent. That's it. No SDK modifications needed.
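Because the fetch happens once, before agent construction, it's worth making it fail soft: if the preference service is unreachable, the agent should still start, just without learned preferences. A defensive variant — the timeout and fallback behavior here are an assumption on my part, not part of the pref0 API:

```python
import requests

PREF0_API = "https://api.pref0.com"  # constants from the snippet above
PREF0_KEY = "pref0_sk_..."

def get_preferences_safe(user_id: str, base_url: str = PREF0_API,
                         timeout: float = 2.0) -> str:
    """Fetch learned preferences; fall back to an empty block on any failure
    so agent construction never blocks on the preference service."""
    try:
        res = requests.get(
            f"{base_url}/v1/profiles/{user_id}",
            headers={"Authorization": f"Bearer {PREF0_KEY}"},
            timeout=timeout,
        )
        res.raise_for_status()
        prefs = res.json().get("preferences", [])
    except (requests.RequestException, ValueError):
        return ""  # agent still runs, just without learned preferences
    return "\n".join(
        f"- {p['key']}: {p['value']}" for p in prefs if p["confidence"] >= 0.5
    )
```

With an empty string the f-string in the agent's instructions degrades cleanly: the "Learned user preferences:" section is simply blank.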
Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.