5 Core Values That Should Guide Your AI Decision Making

Video: 5 Core Values to Guide Your AI Decision Making - JD Dillon | TrainingPros Insider Training

Our lives are being altered by technology like never before. We are at the beginning of a transformational shift in how we relate to and interact with technology in the workplace.

Artificial Intelligence (AI) is driving change at an unprecedented rate across virtually every industry. In fact, the convergence of AI-enabled tech with Learning and Development (L&D) has the potential to redefine workplace learning as we know it.

Soon enough, AI will be embedded in every tool, device, and platform we use. It is a true game-changer, not just another tool to help you design, develop, and manage your organization's learning function.

"We’ve had these moments over the last 20 years where there are these fundamental shifts, and sometimes they were felt iteratively and slowly over time. Sometimes, they happen a little bit more quickly and a little more loudly.

I think we’re in that next moment in terms of a transformational shift in how we relate to technology and how technology helps us do our job." - JD Dillon

Therefore, as L&D practitioners, it is incumbent upon us not to let ourselves be distracted by what technology can do. Instead, we need to work alongside our organizations to ensure that, where AI is concerned, future decision-making is not done in a vacuum.

To do that, we need to establish what core values we will use to guide decisions about where and how AI will be used. These guiding values will ultimately determine how successful your organization is at implementing technology that positively shapes people's experience of work.

In this video, the Chief Learning Architect at Axonify and Founder of LearnGeek, JD Dillon, breaks down five core values he believes have the power to help frame AI conversations in a meaningful way:

  1. Strategic
  2. Transparent
  3. Equitable
  4. Seamless
  5. Compliant

Regardless of your role in developing your organization's AI transformation strategy, you must be actively involved in the most critical conversations within your organization, so that important decisions aren't being made for you, but by you.

Are you looking for skilled Instructional Designers or eLearning Developers? TrainingPros has the consultant you need. When you have more projects than people, let us find you the right L&D consultant to start your project with confidence. Contact us here.

Video Transcript:

Speaker: JD Dillon, Chief Learning Architect at Axonify & Founder of LearnGeek

Determining Your Core Values

We get distracted by what technology can do, and we make decisions without having a core set of values to guide that decision making.

That's why I think right now it is critical for us in L&D to work with our peers within our organizations to determine the guiding values we're going to use to make decisions about how AI is applied and how AI informs the experience of work.

These are the values that I work through when I think about how we're building technology to shape the experience of work.

Value #1: Think Strategically

Number one, thinking strategically. We're solving problems, we're not implementing platforms.

This isn't about just throwing technology into the workplace. It's about thinking through the problems that limit people's ability to do their jobs today, and how we can use technology to solve those problems in more meaningful ways.

Value #2: Transparency

Number two is transparency. We have to make sure people understand how technology is impacting their work experience.

This means avoiding the black box problem, where technology just spits out information and you don't know why. You see that a lot in AI right now: you ask a question, and it gives you an answer.

You're not sure where that answer came from. That's one thing if you're trying to write a poem that sounds fun to put on a greeting card.

It's a completely different thing when we're talking about people's livelihoods and their ability to stay safe on the job.

We have to involve the audience, so that people can trust that the technology is helpful and reliable, and so they understand how their data and the technology are being used to inform decisions.

That includes everything from recommendations on what courses people take, to recommendations on who gets the next promotion.

We have to be transparent when it comes to how we apply technology. 

Value #3: Equitable

Three, equitable. Technology can usher in a new level of fairness in the workplace if we do this right.

How many of you are building training content in a limited number of languages, just because those are the resources you have?

That fundamentally limits opportunity for people who might not speak those languages as a preferred or first language.

Technology can help us overcome that, foster equity, and solve a lot of longstanding problems, as long as we prioritize it.

Value #4: Seamless

Next one is seamless. Tech should make work easier, not more complex.

When we work in a silo, and L&D is over here and operations is over there, we make things harder on people. The last thing we need is HR to have a digital assistant, L&D to have a digital assistant, operations to have a digital assistant.

Now, you're talking to these five different chatbots in your work every day, all of which are trained on different data, have different capabilities, and different rules.

If we work in silos, that's what's going to happen. We need to, again, partner with the organization at large to understand what the digital experience of work is becoming, and then where we play a role.

Value #5: Compliance

Then the last one, compliance. We have to pay attention to the changing regulatory landscape, especially if you're a large, distributed, complex company.

We're going to see a patchwork of AI-related regulation, both inside your company as well as outside of your company.

The reality of the situation is that we are a highly regulated use case for AI, because the work we do directly impacts people's livelihoods and how much money they can make. If you look at the European Union's proposed AI laws, we fall into a high category of regulation compared to some other applications.

We have to make sure that our strategy, and any technology we use, aligns with the changing regulatory landscape. That's why talking to your legal team and your IT team before you make decisions or implement anything is critical.

As the rules change, if you're a big company, even just within the US, New York, California, and North Dakota may have three meaningfully different sets of rules around how AI can be used and how it impacts people in the workplace.

We have to make sure that we're building a strategy that aligns with that, and that we're not getting a false start because of the complexity around the rules.

That's the set of values, or criteria, that I walk through as I talk with organizations about shaping technology strategy broadly, and I think they all apply to this AI conversation in very meaningful ways.

Patrick Owens

Patrick Owens was a member of the TrainingPros Digital Communications team for many years. His favorite day of the week is Tuesday, which he has deemed a special time for tacos. Always looking for adventure, Patrick uses his downtime to explore and create.