Preference learning for AI agents

Your agent should learn preferences

Users correct their agents every day, then the session ends and it's forgotten. pref0 makes those lessons stick.

See it learn

Session #14 · Monday
Preference detected
css_framework: tailwind · 0.55

Session #16 · Wednesday
Preference reinforced
css_framework: tailwind · 0.80

Session #18 · Friday
No correction needed
css_framework: tailwind · 0.80

Same preference, different sessions. Confidence compounds until the agent just knows.

Every correction is a learning signal

Real signals pref0 extracts and compounds across conversations.

"Use TypeScript, not JavaScript"

language: typescript · 0.70

"Deploy to Vercel, not Netlify"

deploy_target: vercel · 0.70

"Use pnpm instead of npm"

package_manager: pnpm · 0.70

"Bullet points, not paragraphs"

response_format: bullet_points · 0.70

"Keep it under 5 lines"

response_length: concise · 0.40

"Use Postgres, not MySQL"

database: postgres · 0.70

Each preference starts with a confidence score. Repeat it across different conversations and it becomes a strong learned preference.
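Here's roughly what one extracted signal looks like. The field names below are illustrative assumptions, not pref0's actual schema:

// Hypothetical shape of one extracted preference signal.
// Field names are illustrative; check the API docs for the real schema.
interface PreferenceSignal {
  key: string;                               // e.g. "package_manager"
  value: string;                             // e.g. "pnpm"
  confidence: number;                        // 0 to 1; explicit corrections start near 0.70
  source: "explicit_correction" | "implied"; // how the signal was detected
}

const signal: PreferenceSignal = {
  key: "package_manager",
  value: "pnpm",
  confidence: 0.7,
  source: "explicit_correction",
};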

How it works

Three steps, two API calls. The learning happens automatically.

1. Send conversations

Pass chat history after each session. pref0 extracts corrections and preferences automatically.
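A minimal sketch of this step in TypeScript. The base URL, payload fields, and auth scheme here are assumptions for illustration, not the documented API:

// Send the finished session's chat history to pref0 for extraction.
// Base URL, body shape, and auth header are assumptions for illustration.
await fetch("https://api.pref0.com/track", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.PREF0_API_KEY}`,
  },
  body: JSON.stringify({
    userId: "user_123",
    messages: [
      { role: "assistant", content: "I scaffolded the project with npm." },
      { role: "user", content: "Use pnpm instead of npm." }, // the correction pref0 extracts
    ],
  }),
});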

2. Preferences compound

Same preference across sessions? Confidence goes up. The profile gets sharper over time.

3. Inject at inference

Fetch learned preferences before your agent responds. It behaves like it already knows the user.
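A sketch of the injection step, assuming the profile comes back as key/value pairs with confidence scores; the query parameter and response shape are assumptions:

// Fetch the learned profile before the agent responds.
// Query param and response shape are assumptions for illustration.
const res = await fetch("https://api.pref0.com/profiles?userId=user_123", {
  headers: { Authorization: `Bearer ${process.env.PREF0_API_KEY}` },
});
const profile: Record<string, { value: string; confidence: number }> =
  await res.json();

// Keep strong preferences and fold them into the system prompt.
const learned = Object.entries(profile)
  .filter(([, p]) => p.confidence >= 0.7) // illustrative threshold
  .map(([key, p]) => `${key}: ${p.value}`)
  .join("\n");

const systemPrompt = `You are a coding agent. Known user preferences:\n${learned}`;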

Fewer repeated corrections

3 sessions

to reach high confidence

2 endpoints

to integrate with any agent

Corrections carry the most weight

Explicit corrections score higher than implied preferences. Your agent learns fastest from direct feedback.
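The initial scores in the examples above suggest a weighting roughly like this; the exact numbers are an assumption drawn from those examples:

// Illustrative initial-confidence weights, inferred from the examples above.
// "Use X, not Y" corrections start near 0.70; softer hints start near 0.40.
const INITIAL_CONFIDENCE = {
  explicit_correction: 0.7,
  implied_preference: 0.4,
} as const;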

Confidence compounds across sessions

Mention a preference twice across sessions and confidence climbs. Three times and it's fully learned.
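One update rule consistent with the demo above (0.55 reinforced to roughly 0.80) is noisy-OR evidence combination. This is an illustrative assumption, not pref0's published formula:

// Noisy-OR: treat each session's signal as independent evidence.
// reinforce(0.55, 0.55) ≈ 0.80, matching the Session #14 → #16 demo above.
function reinforce(confidence: number, signal: number): number {
  return 1 - (1 - confidence) * (1 - signal);
}

let c = 0.55;           // Session #14: preference detected
c = reinforce(c, 0.55); // Session #16: reinforced, climbs to ~0.80
// Session #18: no correction needed, so c holds at ~0.80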

User, Team, Org hierarchy

Preferences cascade from org to team to user. New hires inherit conventions immediately.
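The cascade can be pictured as a simple merge where the most specific scope wins on conflicts; the function below is a sketch, not pref0's API:

// Org defaults, overridden by team conventions, overridden by the user.
// Object spread keeps the most specific scope's value on key conflicts.
type Profile = Record<string, string>;

function resolvePreferences(org: Profile, team: Profile, user: Profile): Profile {
  return { ...org, ...team, ...user };
}

// A new hire with no history of their own still inherits org and team conventions.
resolvePreferences(
  { language: "typescript" },  // org
  { package_manager: "pnpm" }, // team
  {}                           // user: nothing learned yet
);
// → { language: "typescript", package_manager: "pnpm" }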

Two endpoints. Any framework.

POST /track. GET /profiles. Works with LangChain, CrewAI, Vercel AI SDK, or raw API calls.
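For example, with the Vercel AI SDK the learned profile drops straight into the system prompt. The profile string here stands in for the GET /profiles result sketched above:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Stand-in for the learned profile fetched from GET /profiles (see above).
const learnedPreferences = "language: typescript\npackage_manager: pnpm";

const { text } = await generateText({
  model: openai("gpt-4o"),
  system: `You are a coding agent. Known user preferences:\n${learnedPreferences}`,
  prompt: "Scaffold a new web project for me.",
});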

Stop re-correcting. Start learning.

Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.