Introduction
Imagine a dashboard that flags communication issues during coaching sessions. It looks precise, objective, and data-driven. But what if, quietly, it over-flags one group of employees while consistently under-flagging another? You might never notice the imbalance on the surface, yet the results would shape careers.
This is exactly the risk highlighted by a recent UCL study on AI bias amplification. Researchers found that when people relied on biased AI recommendations, they didn’t just see the skew; they absorbed it. Even after the AI was removed, participants continued making judgments that mirrored the same bias.
Professor Tali Sharot (UCL Psychology & Language Sciences; Max Planck UCL Centre for Computational Psychiatry and Ageing Research; and Massachusetts Institute of Technology) explained:
“People are inherently biased, so when we train AI systems on sets of data that have been produced by people, the AI algorithms learn the human biases that are embedded in the data. AI then tends to exploit and amplify these biases to improve its prediction accuracy.
“Here, we’ve found that people interacting with biased AI systems can then become even more biased themselves, creating a potential snowball effect wherein minute biases in original datasets become amplified by the AI, which increases the biases of the person using the AI.”
Sharot’s warning is clear: minute biases in the data don’t just stay small. AI amplifies them, and then humans amplify them further. AI doesn’t only risk making biased mistakes itself. If left unchecked, it can actually teach humans to carry those mistakes forward, amplifying inequalities rather than correcting them.
And this isn’t an isolated case. Joy Buolamwini and Timnit Gebru’s Gender Shades project showed that facial recognition tools misclassified darker-skinned women nearly 47% of the time, compared with error rates of less than 1% for lighter-skinned men. More recently, the London School of Economics found that AI used in social care casework downplayed women’s health issues compared to men’s, even when the case notes were identical.
These examples underscore a hard truth: AI outputs can look neutral on the surface but still create unequal outcomes in practice. So how do organizations strike the right balance, leveraging AI’s efficiency without losing fairness and trust?
The Human-AI Task Scale: A Framework for Balance
For learning leaders and consultants, the challenge isn’t whether to use AI. It’s deciding where and how much to trust it. At the Chicago eLearning & Technology Showcase last month, Josh Cavalier shared a useful framework: the Human-AI Task Scale, a spectrum ranging from fully human-driven tasks to agentic ecosystems with distributed oversight. His message was clear: humans must always be part of the formula.
AI can be a partner, but it should never replace oversight, context, or coaching. The real opportunity in learning and development isn’t to automate judgment, but to use AI as a supporting tool within well-designed human systems. Consultants and leaders need to decide where on the Human-AI Task Scale each workflow belongs and then build in the safeguards to keep it fair.
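To make the idea concrete, here is a minimal sketch of how a team might tag its own workflows along such a spectrum. The intermediate level names and the example workflows are illustrative assumptions, not Cavalier’s labels; only the two endpoints come from the description above.

```python
from enum import IntEnum

# Hypothetical levels along a human-AI task spectrum. Only the endpoints
# (fully human-driven vs. agentic with distributed oversight) come from the
# framework described above; the intermediate labels are illustrative.
class TaskScale(IntEnum):
    FULLY_HUMAN = 0          # humans do the work, no AI involved
    AI_ASSISTED = 1          # AI drafts or suggests, humans decide
    AI_DRIVEN_REVIEWED = 2   # AI produces outputs, humans review before release
    AGENTIC_DISTRIBUTED = 3  # agentic ecosystem with distributed human oversight

# Example placement of L&D workflows; where each lands is a design decision,
# not something the AI should decide for you.
workflow_placement = {
    "coaching conversations": TaskScale.FULLY_HUMAN,
    "drafting quiz questions": TaskScale.AI_ASSISTED,
    "flagging communication patterns": TaskScale.AI_DRIVEN_REVIEWED,
}

for workflow, level in workflow_placement.items():
    needs_review = level >= TaskScale.AI_DRIVEN_REVIEWED
    print(f"{workflow}: {level.name}, human sign-off required: {needs_review}")
```

The point of writing the placement down is accountability: the further right a workflow sits on the scale, the more explicit the safeguards below need to be.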
5 Ways to Reduce Bias in AI Feedback Workflows
1. Design with Diversity in Mind
AI learns from the data it’s given. If that data reflects only one industry, role, or communication style, the system will treat everything else as “wrong.”
Diversity in this context means more than race. It includes industry norms, job roles, language backgrounds, gendered communication patterns, and even generational styles. Consultants can help organizations curate training data and test AI outputs against this broader spectrum, ensuring the system reflects real workplace variety instead of a single “standard voice,” and that it isn’t unfairly flagging one group more than another.
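One simple way to start that testing is to compare how often the AI flags each group in a sample of sessions. The sketch below is a minimal illustration: the field names, the sample records, and the 1.25x disparity threshold are assumptions for the example, not values from any study or product.

```python
from collections import defaultdict

# Hypothetical session records: a group label and whether the AI flagged the session.
sessions = [
    {"group": "native English speakers", "ai_flagged": True},
    {"group": "native English speakers", "ai_flagged": False},
    {"group": "non-native English speakers", "ai_flagged": True},
    {"group": "non-native English speakers", "ai_flagged": True},
]

def flag_rates(records):
    """Share of sessions flagged by the AI, per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        flagged[r["group"]] += r["ai_flagged"]
    return {g: flagged[g] / totals[g] for g in totals}

rates = flag_rates(sessions)
highest, lowest = max(rates.values()), min(rates.values())

# Assumed rule of thumb: investigate if one group is flagged 1.25x more often than another.
if lowest > 0 and highest / lowest > 1.25:
    print("Possible disparity worth investigating:", rates)
else:
    print("Flag rates look comparable across groups:", rates)
```

Real checks would repeat this across industries, roles, and communication styles, and with sample sizes large enough to be meaningful.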
2. Keep Humans in the Loop
AI should never be the final word in performance feedback. Configure systems so that human reviewers, whether managers, instructional designers, or training consultants, validate AI outputs before they reach employees or shape decisions. This not only reduces risk but also reframes AI as a conversation starter, not a verdict.
For example, LMS administrators might set up dashboards for regular sample reviews, while facilitators could use AI-generated notes as discussion prompts. Leaders should frame outputs as reflection starters (“Do you notice this pattern yourself?”) rather than judgments.
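One way to encode that gate is a simple review queue: AI-generated feedback sits in a pending state and nothing reaches an employee until a named reviewer approves it and adds context. The sketch below is illustrative only, not a real LMS API; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    employee: str
    ai_draft: str                 # what the AI generated
    reviewer: str | None = None   # who validated it
    approved: bool = False
    reviewer_notes: str = ""      # context the human adds before it is shared

class ReviewQueue:
    """AI feedback is held here until a human reviewer signs off."""
    def __init__(self):
        self.pending: list[FeedbackItem] = []

    def submit(self, item: FeedbackItem):
        self.pending.append(item)

    def approve(self, item: FeedbackItem, reviewer: str, notes: str = ""):
        item.reviewer, item.approved, item.reviewer_notes = reviewer, True, notes

    def releasable(self):
        # Only approved items ever reach the employee.
        return [i for i in self.pending if i.approved]

queue = ReviewQueue()
queue.submit(FeedbackItem("A. Rivera", "Consider pausing more often for questions."))
queue.approve(queue.pending[0], reviewer="manager_lee",
              notes="Use as a reflection prompt, not a verdict.")
print([item.employee for item in queue.releasable()])
```

The design choice that matters is the default: unreviewed output is invisible to employees, so skipping the human step is never the path of least resistance.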
3. Audit Regularly and Transparently
Bias often hides in patterns you don’t notice day to day. Set up quarterly reviews of AI outputs across demographics, departments, and contexts. Learning professionals can guide the creation of audit checklists and dashboards that reveal where the system may be leaning too heavily in one direction. Share those findings openly so employees trust that the system is being monitored.
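A quarterly audit can start as a simple roll-up of AI feedback scores by each dimension you care about, surfacing the largest gaps for human review. The sketch below assumes hypothetical record fields (department, gender, score); a real audit would also track sample sizes, confidence intervals, and trends over time.

```python
import statistics
from collections import defaultdict

# Hypothetical quarterly export of AI feedback scores; field names are illustrative.
records = [
    {"department": "Sales",   "gender": "women", "score": 3.1},
    {"department": "Sales",   "gender": "men",   "score": 3.9},
    {"department": "Support", "gender": "women", "score": 3.8},
    {"department": "Support", "gender": "men",   "score": 3.7},
]

def audit(records, dimension):
    """Average AI feedback score per segment, plus the gap between extremes."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r[dimension]].append(r["score"])
    means = {seg: round(statistics.mean(vals), 2) for seg, vals in buckets.items()}
    return means, round(max(means.values()) - min(means.values()), 2)

for dimension in ("department", "gender"):
    means, gap = audit(records, dimension)
    print(f"{dimension}: {means} (gap: {gap})")
```

Publishing a summary of these gaps, and what was done about them, is what turns the audit into a trust-building exercise rather than an internal compliance chore.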
4. Define What “Fair Feedback” Looks Like
AI doesn’t know your organization’s values — it only knows the data it was trained on. Work with consultants to create clear rubrics for feedback: specific, actionable, balanced, and aligned with organizational goals. Embedding these standards into AI workflows helps ensure outputs reflect your definition of fairness, not just a generic algorithm.
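Those standards can also be encoded as explicit checks that run before any AI-generated feedback enters the human review queue. The criteria below (specific, actionable, balanced) mirror the qualities named above, but the keyword heuristics are assumptions for illustration; a real implementation would use your organization’s own rubric and stronger checks than string matching.

```python
def rubric_check(feedback: str) -> dict[str, bool]:
    """Rough heuristics for the rubric above; real checks would be far richer."""
    lowered = feedback.lower()
    return {
        # specific: points to an observable behavior rather than a trait
        "specific": any(w in lowered for w in ("during", "when you", "in the meeting")),
        # actionable: suggests a concrete next step
        "actionable": any(w in lowered for w in ("try", "consider", "next time")),
        # balanced: names a strength as well as a gap
        "balanced": any(w in lowered for w in ("well", "strength", "effective")),
    }

draft = ("You explained the rollout plan effectively during the meeting; "
         "next time, consider pausing to invite questions.")
results = rubric_check(draft)
print(results)
if not all(results.values()):
    print("Draft does not meet the rubric; route back for revision.")
```

Even crude checks like these make the definition of “fair feedback” visible and debatable, instead of leaving it implicit in whatever the model happens to produce.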
5. Equip Leaders to Coach, Not Relay
Even the most carefully designed AI system will make mistakes. That’s why leaders need to be trained to interpret, question, and expand on AI feedback. Consultants can design leadership development sessions where managers practice using AI outputs as springboards for coaching conversations — keeping the human connection at the center.
Why This Matters
AI is powerful, but it doesn’t absolve humans of responsibility. Left unchecked, bias in AI can quietly shape decisions, feedback, and even culture — just as the UCL study showed. Instructional design consultants and learning and development leaders must design systems that minimize bias and maximize fairness, using frameworks like the Human-AI Task Scale to keep humans in control.
This isn’t just a technology question; it’s a trust question. When consultants and leaders work together to blend automation with human oversight, AI becomes what it should be: an enhancer of growth, not a shortcut that undermines it.
How TrainingPros Can Help
AI is changing the way learning gets designed and delivered—but the real impact doesn’t come from automation alone. It comes from thoughtful design, clear oversight, and leaders who know how to use insights to build trust and growth. If your team is exploring how to integrate AI into coaching, feedback, or custom learning solutions, TrainingPros can connect you with experienced instructional design consultants, eLearning developers, and corporate training consultants who know how to balance innovation with human judgment.
At TrainingPros, we match organizations with experienced consultants who lead with strategy, then help you identify the tools and methods that actually support your business goals. Whether you’re rethinking onboarding, scaling leadership development, or trying to make sense of your tech stack, we can help you shift from reactive to results-driven.
We’ve been named a Top 20 Staffing Company by Training Industry, a Smartchoice® Preferred Provider by Brandon Hall Group, and a Champion of Learning by ATD, honors that reflect our commitment to providing top-tier, problem-solving talent.
When you have more projects than people™, let TrainingPros connect you with the right consultant to help you cut through the noise and focus on what matters most.