What Design Systems Can Learn From Vector Databases
Agentic Design Systems, part 1
Get weekly insights, tools, and templates to help you build and scale design systems. More: Design Tokens Mastery Course / YouTube / My LinkedIn
I am not affiliated with any of the suggested tools
I know what you're thinking.
"Vector databases? That's for AI engineers and data scientists. What does that have to do with my design system?"
Here's the thing: Vector databases solve a problem that design systems desperately need to solve, too.
They organize information by meaning, not just by structure.
And that changes everything.
The Problem With How We Organize Design Systems Today
Right now, your design system is probably organized like a SQL database.
You have rigid hierarchies:
Components → Buttons → Primary → Hover State
Tokens → Color → Background → Primary
It's clean. It's logical. It works.
But here's what it doesn't do: It doesn't help you find things based on what they mean or why you'd use them.
When a designer asks, "What should I use for a destructive action?" they have to translate that intent into your taxonomy. Is that a "danger button"? A "negative action"? A "warning state"?
They don't have to memorize your exact folder structure. They just have to guess the right keywords.
But what if your design system could understand intent?
How Vector Databases Actually Work
Let me show you what I mean.
Vector databases don't store data in rows and columns. They store meaning as high-dimensional numerical coordinates.
Here's a simple example:
When you search for "king" in a vector database, it doesn't just find exact matches. It finds concepts that are semantically similar:
"queen" (same domain, different gender)
"monarch" (same meaning, different word)
"ruler" (related concept)
The database understands that these concepts cluster together in meaning-space.
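To make this concrete, here is a minimal sketch of similarity in meaning-space. The four vectors and their three dimensions are invented for illustration; real embedding models produce dense vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means same direction in meaning-space,
    # values near 0.0 mean the concepts are unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional embeddings, hand-assigned for illustration.
# Imaginary axes: [royalty, authority, fruitiness]
embeddings = {
    "king":    [0.9, 0.8, 0.0],
    "queen":   [0.9, 0.7, 0.0],
    "monarch": [0.8, 0.9, 0.0],
    "banana":  [0.0, 0.1, 0.9],
}

query = embeddings["king"]
for word, vec in embeddings.items():
    print(f"{word}: {cosine_similarity(query, vec):.2f}")
```

"king," "queen," and "monarch" score close to 1.0 against each other, while "banana" scores near zero: that numeric closeness is the clustering in meaning-space.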
Now imagine applying that to your design system.
What Changes for Humans
You Can Search by Intent, Not Just by Name
Instead of browsing through component categories, you could ask:
"Show me all patterns that convey urgency"
And your design system would return:
Error messages
Warning banners
Destructive action buttons
Toast notifications with alert icons
Red status indicators
These components live in completely different parts of your taxonomy. But they share semantic meaning.
That's the power of meaning-based organization.
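Here is a rough sketch of what intent-based search could look like. The two meaning axes and every score are hand-assigned for illustration; in a real setup, an embedding model would derive these vectors from each component's description.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 2-d meaning space: [urgency, reassurance]
components = {
    "error-message":      [0.9, 0.1],
    "warning-banner":     [0.8, 0.2],
    "destructive-button": [0.7, 0.1],
    "success-toast":      [0.1, 0.9],
    "empty-state":        [0.1, 0.3],
}

def search_by_intent(query_vector, top_k=3):
    # Rank every component by closeness to the query in meaning-space,
    # regardless of where it sits in the taxonomy.
    ranked = sorted(components.items(),
                    key=lambda kv: cosine(query_vector, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# "Show me patterns that convey urgency" as a pure-urgency query vector
print(search_by_intent([1.0, 0.0]))
```

Note that the results cut across categories: a banner, a button, and a message cluster together because they share intent, not a folder.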
You Can Finally Solve the "New Component vs. Variant" Debate
I've sat through countless design system meetings where teams debate:
"Do we need a new component, or is this just a variant of what we have?"
I've seen this firsthand. When you're looking at a system with 50, 80, even close to 100 components, the overlaps become invisible. Manual audits catch some of them. But naming differences, context differences, and organic growth across teams create duplicates that no spreadsheet review will surface.
Vector logic could actually quantify similarity between components.
Compare two components based on:
Visual embeddings (how they look in Figma)
Code structure (how they're built)
Token usage (what design decisions they share)
Usage context (where they appear in products)
If two components score high similarity across these dimensions, you probably donât need both.
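One way to combine those signals, as a sketch: the dimension weights and the per-dimension scores for a hypothetical "Card vs. Tile" comparison below are assumptions; in practice they would come from visual embeddings, code analysis, token diffs, and usage analytics.

```python
# Illustrative weights for each comparison dimension (they sum to 1.0).
WEIGHTS = {"visual": 0.3, "code": 0.3, "tokens": 0.25, "context": 0.15}

def overall_similarity(scores: dict) -> float:
    """Weighted average of per-dimension similarity scores (each 0..1)."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

# Hypothetical comparison of a "Card" and a "Tile" component
card_vs_tile = {"visual": 0.92, "code": 0.85, "tokens": 0.95, "context": 0.70}

score = overall_similarity(card_vs_tile)
print(f"Card vs. Tile similarity: {score:.2f}")
if score > 0.85:  # threshold is a judgment call, not a standard
    print("High overlap: these may be duplicates worth consolidating.")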
You might be wondering, âHow accurate is this really?â
Thatâs fair. But hereâs what Iâve learned: You donât need perfect similarity scores. You need better visibility into what you already have.
Even a rough similarity map helps you spot duplicates that manual audits miss.
What Changes for Agents
AI Tools Actually Know What to Use
Hereâs where it gets interesting for agentic design.
Right now, when an AI tool generates a UI, it relies on your component names and descriptions. Itâs essentially doing keyword matching.
But with vector-based context retrieval, the AI could understand:
âIâm building a form validation error. What components, patterns, tokens, and accessibility guidelines are semantically related to this task?â
The system retrieves:
Similar error patterns from your existing designs
The correct tokens (not just âredâ but âcolor.feedback.errorâ)
Accessibility requirements for error announcements
Validation timing patterns
Microcopy guidelines for error messages
All pulled together by meaning, not by manual tagging.
Design Tokens Could Become Behavioral
Traditional tokens store values: color.blue.500 = #3B82F6
Vector-inspired tokens could store behavioral meaning:
How assertive (0.8 on a scale)
How calm (0.2)
How noticeable (0.9)
Now imagine an adaptive interface that reads the userâs context and adjusts its tone.
Picture a checkout flow. The payment fails. Instead of the UI just swapping in a red banner, the entire interface shifts. Visual assertiveness dials down. Calmness increases. The system isnât just showing an error state. Itâs modulating the experience to match the emotional weight of the moment.
A celebration moment? It cranks up energy and noticeability.
The same components, but contextually tuned based on meaning vectors, not just static values.
Components Gain Semantic Elasticity
In agentic systems, context determines behavior.
Take a toast notification. It can:
Confirm a successful action
Warn about a risky operation
Inform about system status
Celebrate an achievement
Right now, youâd handle these with separate variants or different components entirely.
But with vectorized meaning, a single component could adapt based on detected intent.
The system reads the context (âthis is a celebrationâ) and adjusts:
Color intensity
Animation style
Icon selection
Duration on screen
All semantically informed, not hardcoded.
Compare that to manually coding every possible variant upfront. The difference is massive.
The Reality Check
Let me be honest: Most design systems arenât ready for this yet.
This isnât because the technology doesnât exist. Vector databases are mature. Embedding models are accessible.
The challenge is conceptual.
Weâve built design systems around structure: hierarchies, taxonomies, strict naming conventions.
Shifting to meaning-based organization requires rethinking:
How we document components
How we describe patterns
How we train our teams
How AI tools interact with our systems
Itâs a fundamental shift from âwhat is this thing called?â to âwhat does this thing mean?â
What This Actually Looks Like
You donât have to rebuild your entire design system around vectors tomorrow.
But you can start thinking differently today.
Hereâs what that might look like:
Start with documentation:
Add semantic descriptions to components: âThis pattern expresses urgency and requires immediate attention.â
Tag components with intent, not just categories: âconfirmation,â âprevention,â âcelebration.â
Describe usage contexts: âUse when the user needs reassurance that their action succeeded.â
Experiment with embeddings:
Generate vector embeddings for your component descriptions
Build a simple similarity search
See what clusters emerge naturally
Augment, donât replace:
Keep your existing taxonomy
Layer semantic search on top
Let both approaches coexist
The next generation of design systems will likely blend:
Token databases (structured facts)
Vector databases (semantic meaning)
Context engines (runtime reasoning)
âĄď¸ Thatâs how you get agentic design systems. Systems that understand why a component exists, not just what itâs called.
Try This
If youâre ready to experiment, hereâs where to start:
Pick 10-20 components from your system
Add a semantic description for each: Whatâs its purpose? When would you use it? What feeling does it convey?
Use an embedding model (OpenAIâs text-embedding-3-small works well) to generate vectors
Run similarity comparisons between them
Look at what clusters together
You might discover patterns you didnât know existed.
You might find duplicates hiding under different names.
You might see how your system could be organized by meaning instead of taxonomy.
And thatâs when design systems start to get really interesting. đ
Next week, I will share more about Agentic Design Systems, my setup, and MCP. â¨
Stay tuned,
Romina
â If you enjoyed this post, please tap the Like button below đ This helps me see what you want to read. Thank you.
đ Community Gems
Building Effective AI Coding Agents for the Terminal: Scaffolding, Harness, Context Engineering, and Lessons Learned
AI agents suffer from the same problem as large design systems:
Context collapse. A new 55-page paper on terminal coding agents explains why.đ
AI agents in long sessions forget their instructions, bloat their context, and start making bad decisions.
đ Link




