Your agent should learn preferences
Users correct their agents every day, then the session ends and it's forgotten. pref0 makes those lessons stick.
css_framework: tailwind
css_framework: tailwind
css_framework: tailwind

Same preference, different sessions. Confidence compounds until the agent just knows.
Real signals pref0 extracts and compounds across conversations.
"Use TypeScript, not JavaScript"
language: typescript0.70"Deploy to Vercel, not Netlify"
deploy_target: vercel0.70"Use pnpm instead of npm"
package_manager: pnpm0.70"Bullet points, not paragraphs"
response_format: bullet_points0.70"Keep it under 5 lines"
response_length: concise0.40"Use Postgres, not MySQL"
database: postgres0.70Each preference starts with a confidence score. Repeat it across different conversations and it becomes a strong learned preference.
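For illustration, a learned preference could be stored as a record like the one below. The field names are hypothetical, not pref0's actual schema.

```ts
// Hypothetical shape of a learned preference (illustrative field names,
// not pref0's documented schema).
interface Preference {
  key: string;        // e.g. "package_manager"
  value: string;      // e.g. "pnpm"
  confidence: number; // 0.70 for an explicit correction, 0.40 if implied
  sources: string[];  // conversations the signal was extracted from
}

const learned: Preference = {
  key: "package_manager",
  value: "pnpm",
  confidence: 0.7, // climbs each time the signal recurs in a new session
  sources: ["conv_123"],
};
```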
Three steps. The learning happens automatically.
Pass chat history after each session. pref0 extracts corrections and preferences automatically.
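A minimal sketch of that call, assuming a JSON endpoint at api.pref0.com and a messages-style payload (the real URL and request shape may differ):

```ts
// Send a finished session to pref0 for extraction.
// The endpoint URL and payload shape here are assumptions.
await fetch("https://api.pref0.com/track", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.PREF0_API_KEY}`,
  },
  body: JSON.stringify({
    userId: "user_42",
    messages: [
      { role: "user", content: "Use pnpm instead of npm" },
      { role: "assistant", content: "Got it, switching to pnpm." },
    ],
  }),
});
```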
Same preference across sessions? Confidence goes up. The profile gets sharper over time.
Fetch learned preferences before your agent responds. It behaves like it already knows the user.
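A sketch of that lookup, again with an assumed endpoint path and response shape:

```ts
// Read the learned profile before generating a response.
// Path and response shape are assumptions, not pref0's documented API.
const res = await fetch("https://api.pref0.com/profiles/user_42", {
  headers: { Authorization: `Bearer ${process.env.PREF0_API_KEY}` },
});
const profile = await res.json();
// e.g. { language: "typescript", package_manager: "pnpm", deploy_target: "vercel" }
```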
Fewer repeated corrections.
Three sessions to reach high confidence.
Two endpoints to integrate with any agent.
Explicit corrections score higher than implied preferences. Your agent learns fastest from direct feedback.
Mention a preference twice across sessions and confidence climbs. Three times and it's fully learned.
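One way to picture the scoring, using the 0.70 and 0.40 starting points from the table above. The reinforcement rule and the threshold are assumptions, not pref0's actual math.

```ts
// Illustrative scoring model: explicit corrections start at 0.70,
// implied preferences at 0.40, and repeat mentions compound toward 1.0.
const INITIAL = { explicit: 0.7, implied: 0.4 };
const FULLY_LEARNED = 0.9; // assumed threshold

function reinforce(confidence: number): number {
  // Close half the remaining distance to 1.0 on each repeat mention.
  return confidence + (1 - confidence) * 0.5;
}

let c = INITIAL.explicit; // first mention: 0.70
c = reinforce(c);         // second mention: 0.85, confidence climbs
c = reinforce(c);         // third mention: 0.925, fully learned
console.log(c >= FULLY_LEARNED); // true
```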
Preferences cascade from org to team to user. New hires inherit conventions immediately.
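A sketch of how that cascade can resolve, assuming the most specific scope wins; the merge order is inferred from the description, not taken from pref0's code.

```ts
// Org -> team -> user cascade: later (more specific) scopes override earlier ones.
type Scope = Record<string, string>;

function resolve(org: Scope, team: Scope, user: Scope): Scope {
  return { ...org, ...team, ...user };
}

const effective = resolve(
  { language: "typescript", css_framework: "tailwind" }, // org conventions
  { package_manager: "pnpm" },                           // team convention
  { response_format: "bullet_points" },                  // personal preference
);
// A new hire with an empty user scope still inherits the org and team values.
```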
POST /track. GET /profiles. Works with LangChain, CrewAI, Vercel AI SDK, or raw API calls.
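For example, with the Vercel AI SDK, the learned profile can be folded into the system prompt before each call. The pref0 endpoint and response shape below are assumptions; generateText is the SDK's real API.

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Fetch the learned profile (assumed pref0 endpoint and shape)...
const profile = await fetch("https://api.pref0.com/profiles/user_42", {
  headers: { Authorization: `Bearer ${process.env.PREF0_API_KEY}` },
}).then((r) => r.json());

// ...then let the model see it before it answers.
const { text } = await generateText({
  model: openai("gpt-4o"),
  system: `Respect these learned user preferences: ${JSON.stringify(profile)}`,
  prompt: "Scaffold a new web app for me.",
});
```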
Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.