Uncovering a Hidden Gap in Drift Management
While designing the PowerFlex upgrade experience, I discovered there was no clear way to manage drifts after upgrades, leaving users to rely on personal experience.
Spotting Inconsistencies Across Products
I reached out to other product teams for reference but found each had its own way of handling drifts. That’s when I realized a key gap in our design system: it kept visuals consistent but ignored how users actually completed tasks.
Finding the Shared Pain Points
To bring clarity, I met with design leads from 11 products and reviewed their drift user flows. Despite different contexts, three recurring pain points emerged:
Inconsistent terminology
Manual execution
Reliance on personal experience
Challenge
Teaching AI to Think Like a Designer
My first attempt was simple: feed insights into Gen-AI tools and hope for quick answers. The output was equally simple: fast but generic. AI flattened nuances and missed context.
That’s when I shifted my approach. I treated AI like a junior designer who needed direction: rewriting prompts with context and examples, and asking the AI to simulate user perspectives.
By reviewing each AI-generated user story and testing my assumptions against research, the results evolved from surface-level summaries to deeper insights showing real decision logic and role differences.
Reviewed with Researchers and Design Leads
I validated these insights with UX researchers and design leads from each product. Their feedback helped refine the findings into actionable design directions, ensuring speed didn’t compromise accuracy or alignment.
Choosing the Right Type of AI: Workflows vs Agents
Not every AI idea is practical. To stay grounded, I compared AI Workflows, which follow predefined, auditable steps, with Agentic AI, which plans and acts autonomously.
In high-stakes infrastructure environments, reliability outweighs novelty, so I chose the AI Workflow approach, focusing on:
Drift Detection & Classification
Impact Prediction & Resolution Advice
Post-Action Summaries & Documentation
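The three stages above can be sketched as a fixed pipeline that always runs in the same order, which is what distinguishes a workflow from an agent. This is a minimal illustration; every type, field, and rule below is a placeholder assumption, not the shipped PowerFlex implementation:

```python
from dataclasses import dataclass

@dataclass
class Drift:
    # Hypothetical record; real drift data would carry far more detail.
    component: str
    expected: str
    actual: str
    classification: str = ""
    risk: str = ""

def detect_and_classify(expected_cfg: dict, actual_cfg: dict) -> list[Drift]:
    """Stage 1: diff the baseline against the live config and classify each mismatch."""
    drifts = []
    for key, expected in expected_cfg.items():
        actual = actual_cfg.get(key)
        if actual != expected:
            d = Drift(component=key, expected=str(expected), actual=str(actual))
            d.classification = "unintentional"  # placeholder classification rule
            drifts.append(d)
    return drifts

def predict_impact(drift: Drift) -> str:
    """Stage 2: assess risk and draft resolution advice."""
    drift.risk = "low"  # placeholder for a model trained on historical incidents
    return f"Revert {drift.component} to {drift.expected}"

def summarize(drifts: list[Drift]) -> str:
    """Stage 3: draft a post-action summary for documentation."""
    return "\n".join(
        f"{d.component}: {d.actual} -> {d.expected} ({d.risk} risk)" for d in drifts
    )

# The stages run in a fixed, auditable sequence -- a workflow, not an agent.
drifts = detect_and_classify({"mtu": 9000, "firmware": "4.5"},
                             {"mtu": 1500, "firmware": "4.5"})
advice = [predict_impact(d) for d in drifts]
report = summarize(drifts)
```

Because each stage has a fixed input and output, every AI suggestion can be traced back to the diff that produced it.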
Designing Human-AI Collaboration in the Flow
I designed AI as a collaborator, not an add-on. Each touchpoint clearly defined what the AI assists with and what the human owns:
AI explains, predicts, and drafts.
Humans review, refine, and decide.
"I detect and classify drifts, explain what changed, assess the risk, and suggest who should act."
Monitor
"I simulate potential system impacts and recommend resolutions based on historical data."
Analyst
"I summarize what happened, identify who took action, and update the baseline when change is intentional."