How I Designed Scalable, AI-Powered Drift Workflows Across 11 Products

AI-Powered

Workflow Design

Design Strategy

~8 min read

Context

In IT environments, drift is the gradual deviation of a system from its intended state, often caused by manual changes or failed deployments.

This case study aims to standardize drift management workflows across products, enabling teams to respond confidently and establish future benchmarks.


Challenges

  • Limited time and resources to explore and validate directions

  • Simplifying vague & complex workflows without losing key functionality

  • Designing a unified solution for varied product needs

  • Balancing consistency with team-specific flexibility


My Contributions

  • Used AI to accelerate research synthesis and standardized 9 workflows

  • Defined key Human-AI collaboration touch points

  • Designed 3 additional AI-powered workflows

  • Collaborated with researchers and aligned 11 product teams


"Unifying Drift workflows across 11 products required more than design - it needed strategy, structure, and smart use of AI."

Define Problem

A Fragmented Approach to Drift Management Across 11 Products

Realized a Critical Gap in UX

While designing the upgrade experience for PowerFlex, I ran into a critical gap: there was no clear way to handle system drifts after changes, especially when environments deviate from their intended state silently.

I reached out to other teams for reference but found only fragmented experiences. To address this, I partnered with the Design System team to untangle the broken drift management experience.

Details are intentionally vague due to NDA—contact me for more context.


The Problem: Drift is Everywhere - But Handled Differently

Drift is the misalignment between a system’s desired and actual state. Some teams viewed it as configuration logs. Others treated it like anomaly detection. In some cases, there were specific features; in others, no support for drift at all.

  • “Known good array – everything is compared to that array… do all the arrays match that golden record?”

    PowerMax

    Framed drift as deviation from a golden config standard.

  • “A record of any changes that are made in the system, which really is a fancy word for logs…”

    iDRAC

    Saw drift as logs or manual config checks—no unified tracking.

  • “We’re adding a diff view so users can compare configuration versions.”

    Moogsoft

    Defined drift as config comparison—focused on version differences.

  • “When I update a Deployment, I’d like to visualize the drift between the planned and last provisioned revision.”

    Dell Automation Platform

Sees drift as a version gap and enables a diff view to track changes.

  • “There’s no official change log beyond our standard records.”

    PowerProtect

    Limited support for historical tracking of drift or system deviation.

  • “If anything starts to drift, we show the current status and tell users to go check the report.”

    VxRail

Only flagged drift when the config deviated from compliance.
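Different as these framings sound, they all reduce to the same underlying check: an actual state compared against a desired baseline. Here is a minimal sketch of that comparison; the config fields are illustrative, not any product's actual schema:

```python
# Minimal sketch: drift as a desired-vs-actual comparison.
# Field names are illustrative, not any product's real schema.
baseline = {"ntp_server": "10.0.0.1", "mtu": 9000, "firmware": "4.2.1"}
actual   = {"ntp_server": "10.0.0.1", "mtu": 1500, "firmware": "4.2.1"}

# Collect every field whose actual value deviates from the baseline.
drift = {
    key: {"expected": baseline[key], "actual": actual.get(key)}
    for key in baseline
    if actual.get(key) != baseline[key]
}

print(drift)  # {'mtu': {'expected': 9000, 'actual': 1500}}
```

Whether a team called this a golden record, a log, or a diff view, the core data is the same pair of states, which is what made a shared workflow possible.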

My Goals

  • Standardize Drift Management across 11 products with shared workflows

  • Enable adoption with scalable, adaptable deliverables

  • Identify AI touch points and drive future Human-AI collaboration


Design Challenge #1

How I Used AI to Speed Up the Process Without Losing Quality

When faced with 30+ hours of stakeholder interviews from 11 product teams, I knew I couldn’t afford to synthesize everything manually, not while designing two other projects. So I brought in AI to help me accelerate the research process.

First Attempt: AI is NOT a Magic Answer Machine

My initial approach was straightforward: feed the research transcripts into AI tools to cluster themes and identify patterns.

The results I got were also straightforward: fast summaries, but painfully generic. I got insights like "users need to identify and track changes over time" and "users expect visual indicators, version comparisons" - which, while true, didn’t help me design meaningful workflows.

Worse, the AI flattened nuance.

Critical distinctions like intended vs. unintended drift, or acceptable vs. unacceptable deviations, were lost in summaries. But these differences weren’t superficial variations—they directly shaped what users needed to do next.

That’s when I realized: I couldn’t just use AI as a magic answer machine.

Teaching AI to Think Like a Designer

I started treating AI more like a junior designer who needed direction.

Instead of generic prompts, I rewrote them with context and examples, asking AI to simulate specific user perspectives.

Reviewing each AI-generated user story, I began forming rough ideas and design assumptions. I fed those assumptions back into the loop, asking AI to test them against the original research.

"Here's my hypothesis: users only fix drift if it's unintended and unacceptable. Can you find evidence of this behavior in the research transcripts?"

"Are there examples in the research where users ignore drift alerts because they already know the change was planned but not yet applied?"

This transformed the quality of the output. I was no longer getting surface-level takeaways like “comparison view is important”. I started seeing decision logic, role distinctions, and conditional actions.
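For illustration, here is roughly how that hypothesis-testing loop looks when scripted. The `ask_llm` helper is a hypothetical stand-in for whichever LLM tool is in play, not a real API:

```python
# Sketch of the hypothesis-testing loop used during synthesis.
# ask_llm() is a hypothetical stand-in; wire up your LLM tool of choice.
def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM chat call."""
    return "<model response>"

hypothesis = "Users only fix drift if it is unintended AND unacceptable."

# Step 1: ask the model to argue against the hypothesis first,
# quoting the transcripts, so agreement is not the default.
counter_evidence = ask_llm(
    f"Hypothesis: {hypothesis}\n"
    "Search the interview transcripts for evidence that CONTRADICTS this. "
    "Quote the speaker and product team for each example."
)

# Step 2: ask for supporting evidence separately.
supporting_evidence = ask_llm(
    f"Hypothesis: {hypothesis}\n"
    "Now find supporting examples, e.g. users ignoring drift alerts "
    "because the change was planned but not yet applied."
)
```

Asking for contradicting evidence first made agreement the harder path, which countered the model's tendency to please.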

Defining a 3-Phase Model & 9 Main Tasks

Define & Detect

  • Define Expectations & Baseline

  • Define Monitoring Threshold

  • Detect Drifts & Assess Urgency


Classify & Resolve

  • Understand & Classify Drifts

  • Acknowledge Acceptable Drifts

  • Resolve Unacceptable Drifts


Verify & Document

  • Post-action System Validation

  • Update Baseline

  • Document & Share Reports

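The Classify & Resolve phase carries the decision logic that the early AI summaries had flattened. A minimal sketch of that logic, using the intended/unintended and acceptable/unacceptable distinctions from the research (illustrative only, not shipped code):

```python
# Sketch of the Classify & Resolve decision logic.
# Mirrors the 2x2 distinction surfaced in the research:
# intended vs. unintended, acceptable vs. unacceptable.
def next_action(intended: bool, acceptable: bool) -> str:
    if intended and acceptable:
        # Planned, harmless change: make it the new desired state.
        return "acknowledge drift and update baseline"
    if acceptable:
        # Unplanned but tolerable: record it, no remediation needed.
        return "acknowledge drift and document"
    # Unacceptable drift, intended or not, must be resolved.
    return "resolve drift (remediate or roll back)"
```

Which branch a drift falls into determines everything downstream, which is why classification sits at the center of the model.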

Human + AI = Better Together

Once I drafted the initial workflows, I partnered closely with the user researchers who had conducted the original interviews. Together, we reviewed the drafts with a critical lens, focusing on how well they aligned with real-world scenarios.

We Asked Ourselves:

  • Does this flow match what users actually do?

  • Does each drift type lead to the right action?

  • Is the flow adaptable across products?

  • Are we missing any key edge cases?


With evaluations from the user researchers, I could ensure the drafts weren't just theoretically sound—they were practical, adaptable, and grounded in real use cases.

I also saw the limitations of AI more clearly. While it was powerful for speeding up research synthesis and early ideation, it fell short in critical ways during refinement.

Limitations of AI in Practice

  • Passive thinking – answered prompts without pushing ideas forward

  • Inconsistent logic – produced overlapping or conflicting tasks across multi-round conversations

  • Optimized for pleasing, not precision – tended to agree with assumptions rather than challenge them

  • Weak at convergence – good at generating ideas, poor at narrowing them down


Finalized 9 Workflows in 1 Week

Despite its limitations, AI was still a powerful accelerator. With its support, I refined and delivered 9 workflows in just one week, each tied to a drift category and designed to support users from detection through resolution and documentation.

Design Challenge #2

Exploring for the Future: Embedding AI into the Drift Workflow

After delivering all 9 workflows, I was inspired by how collaborating with LLM-based products had accelerated and sharpened my own design process.

That experience led me to explore how AI could be embedded into the Drift Management experience itself. I began identifying key moments where users were likely to feel overwhelmed, uncertain, or prone to error—the exact points where AI could meaningfully support decision-making, rather than acting as a general chatbot that leaves users guessing what to do next.

What’s Possible with AI Today: AI Workflows vs. Agentic AI

Not every AI opportunity is realistic to build.

To make sure my designs were practical, I compared two common approaches used in today’s AI products: Agentic AI and AI Workflows.

AI Workflows

  • What LLMs add: natural language explanations, smarter classification, resolution suggestions

  • Maturity: widely used in production with LLMs for summarization, labeling, and guidance

  • Reliability: ✅ High—predictable outputs, testable logic

  • Tech readiness: High—can use rules, ML, or LLMs within known bounds

  • Fit for drift management: ✅ Practical, trustworthy, and scalable in high-stakes use cases

LLMs supercharge AI Workflows by making them smarter and more user-friendly without compromising reliability.

vs.

Agentic AI

  • What LLMs add: multi-step reasoning, goal planning, tool use via API chaining

  • Maturity: experimental—still unstable in live systems beyond sandbox environments

  • Reliability: ⚠️ Medium—can be unpredictable or fragile

  • Tech readiness: Medium—requires strong context handling + memory

  • Fit for drift management: ⚠️ Possible, but risky without strong rollback mechanisms

In Agentic AI, LLMs enable flexibility and multi-step planning, but they introduce risks like hallucinations and unpredictable behavior.

Based on research and my own experience, LLMs today work best as assistants - not fully autonomous agents - especially in high-stakes enterprise environments like Drift Management. With that in mind, I chose to move forward with an AI Workflow approach, and narrowed down the AI opportunities into 3 focused areas:

  • Drift Detection & Classification

  • Impact Prediction & Resolution Advice

  • Post-Action Summaries & Documentation

Designing AI as Part of the System, Not an Add-On Feature

Monitor

Detecting + Classifying Drifts

AI Tasks:

  • Provides an LLM-based explanation for the drift

  • Suggests a drift label based on context and patterns

  • Assigns a risk score based on historical data

  • Drafts a ticket and suggests the appropriate team

Human Tasks:

  • Reviews AI-generated insights (label, risk, explanation)

  • Makes the final decision on drift classification

  • Edits and approves the ticket assignment

Analyst

Simulating + Suggesting Resolutions

AI Tasks:

  • Predicts potential system impacts

  • Identifies affected components, users, or services

  • Suggests resolution actions based on historical data

Human Tasks:

  • Reviews the predicted impact and evaluates risk

  • Selects or modifies the suggested resolution

  • Executes the resolution

  • Flags inaccurate AI suggestions

Recorder

Generating Reports

AI Tasks:

  • Summarizes what actions were taken, by whom, and why

  • Auto-generates a draft report

  • Suggests labels for future reference

  • Updates the baseline if the drift is "Intended + Acceptable"

Human Tasks:

  • Reviews and edits the AI-generated report

  • Approves and publishes the final report

  • Flags errors or missing information for future AI improvement

By framing each AI touch point within a clear human–AI collaboration model, I ensured that AI never overstepped—it advised, assisted, and monitored, but always respected the user's judgment and control.
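The pattern repeats at every touch point: the AI drafts, the human decides. A minimal sketch of that approval gate, with all names illustrative:

```python
# Sketch of the "AI proposes, human approves" gate used at each
# touch point. All names and types are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISuggestion:
    label: str         # e.g. "unintended + unacceptable"
    risk_score: float  # derived from historical data
    explanation: str   # LLM-generated, plain-language rationale

def apply_classification(suggestion: AISuggestion,
                         approved_by: Optional[str]) -> dict:
    """Nothing is applied without a named human reviewer."""
    if approved_by is None:
        # No silent automation: the AI output stays a draft.
        raise PermissionError("drift classification requires human approval")
    return {"label": suggestion.label, "approved_by": approved_by}
```

Keeping the approval step mandatory, rather than configurable, is what makes the model trustworthy in high-stakes environments.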

Redesigning Workflows with Embedded AI Support

With the collaboration model in place, I redesigned the workflows with AI support embedded at each identified touch point, resulting in 3 AI-enhanced workflows that carry users from detection through resolution and documentation.


Results & Impacts

What I Delivered – and What Comes Next

I delivered 12 detailed workflows for Drift Management—9 standard workflows aligned with the 3-phase model, and 3 AI-enhanced workflows focused on drift classification, resolution suggestions, and post-action documentation.

To ensure quality and feasibility, I reviewed them with UX leads from all 11 enterprise platforms. The feedback confirmed strong alignment with platform capabilities, technical constraints, and user roles—clearing the path for adoption across products.

Next Steps

But the workflows are only the beginning.

The next phase is about turning workflows into design:

  • Designing standard UI patterns and interaction models to support each task and role

  • Embedding AI touch points in a way that’s clear, explainable, and user-controlled

  • Running usability testing and validation across real scenarios and representative platforms

  • Delivering design guidelines and component kits so future teams can adopt and extend with confidence

This project began as an effort to align. Now, it's evolving into a scalable, intelligent AI-UX system—empowering teams to act consistently and helping IT pros manage complexity with clarity and trust.

Thanks for Visiting!

Want to collaborate, geek out about the future, or just say hi?

Let’s connect →
