Wednesday, January 10, 2024

AI in Coaching: The Ethical Imperative and Human-Centric Approach

We are pleased to share an article entitled “AI in Coaching: The Ethical Imperative and Human-Centric Approach,” written by Andrea Paviglianiti.

In the ever-evolving landscape of AI integration, coaches and mentors are faced with ethical considerations that demand a proactive approach. Emphasizing transparency, fairness, and accountability, the transformative power of AI complements human skills in the coaching industry with a human-in-the-loop framework.

Generally, when people think of the work of AI Engineers and Data Scientists, they imagine mathematics and programming at work. And for the most part, they are right: math, statistics, and computer science are all critical to ensure that an algorithm works. Industry expertise is also critical, as it infuses business and human insights into processes so that the algorithm learns how to solve a business problem. What most people do not think about is the broad yet critical role of humanistic disciplines in the way this technology is designed, used, and governed.

After reading the latest issue of Choice[1], I realized that coaches and mentors are increasingly using Artificial Intelligence to enhance their business, and even designing their own solutions. AI serves tasks such as supporting reflection, extracting insights from dialogue, and more generally acting as a virtual assistant. The use of AI may be of great help, yet coaches are in the human business, using human skills, and therefore prone to human error. The data used to train AI is no less so.

Were you aware that AI Engineers are predominantly Western, white males working for American hi-tech companies?[2] This skewed distribution is far from trivial: inherent biases influence data collection, data engineering, and AI design. As a result, algorithms make predictions that favor white male individuals. Gender and racial biases, unconscious or not, are there. AI models carry inherent biases that do not account for African, Asian, or European social, cultural, and even philosophical heritages. It is already difficult to account for these appropriately in person, let alone leave machines to do it for us.

Philosophy, epistemology, and ethics applied to AI describe (and sometimes prescribe) the way knowledge should be infused into artificial systems. Human biases, in any form, propagate to data because of our cultural and social background. And data is what composes the knowledge that humans pass on to AI. As humans, we are many things, and being “objective” is not one of them. Therefore, we cannot expect AI to avoid the same mistakes we make. Today’s challenges with AI have a lot to do with data. Coaches must be aware that, when using AI for their business, they must evaluate the degree of transparency, fairness, and privacy the technology offers.

Transparency means that an algorithm’s predictions should be explainable to humans with a certain degree of interpretation. Fairness means that AI won’t penalize individuals by gender, ethnicity, or social class. While explanation is relatively easy for simple predictions, complex models such as neural networks and NLP systems cannot be explained linearly. In turn, accountability for their predictions is shared among their users and their designers.
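To make the fairness idea concrete, here is a minimal, hypothetical sketch of one common fairness check, demographic parity: comparing how often a model makes a positive prediction for each group defined by a sensitive attribute. The function names and data are illustrative, not taken from any real coaching tool.

```python
# Hypothetical sketch: checking demographic parity of a model's predictions.
# A large gap between groups' positive-prediction rates is a warning sign
# that the model may penalize individuals by group membership.

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
    return {g: positives / total for g, (total, positives) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())
```

A gap near zero suggests the model selects each group at a similar rate; a large gap is exactly the kind of signal a coach should ask a vendor or co-development partner about before trusting the tool.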

Compared with ethics, the law is usually the bare minimum, and much is left to people to do the right thing, even when this is not clearly stated. As coaches, we have a moral obligation towards our clients to disclose whether what they share with us is going to be fed to AI, and to allow them to opt out if they wish, for it is their privacy at stake. It also falls to coaches to think critically about the impact AI will have on us, as individuals and professionals. When adopting AI in our businesses, we should do what we do best: ask the right questions to understand the balance between benefit and risk. And finally, we should decide at which stage of our framework AI is going to be used.

This is what is called ethics by design: proactively considering ethical implications from the start and throughout the development process. Coaches can play a bigger role in it by co-creating an AI solution with a tech company rather than purchasing one. This allows technical development to blend with professional experience and working ethics, enabling coaches to effectively own the AI and better understand their responsibilities.

But even when co-creation isn’t an option, ethics by design can be implemented by learning about existing guidelines and documenting how they are applied as best practices.

Industries like healthcare, telecommunications, and fintech are discovering the benefits of infusing Artificial Intelligence into their business frameworks, with applications in customer management, human resources, and content creation – all use cases that involve people management, communication, and social skills. We are facing a global transformation that affects far more than coaching and consulting firms. With the advent of Generative AI, like ChatGPT, we are seeing new applications emerge: chatbots that answer in a more human-like manner, more flexible workflows with less hard-coded programming, and overall a seemingly easier way to utilize AI than ever before.

This is a time when AI is evolving from an automation tool that simplifies processes into a supportive part of our life and business. Some may even say that AI is gradually becoming a non-human partner in business. This transformation implies a higher demand for advanced AI able to simulate human skills; and while it is unlikely to replace coaches, we must consider whether we want AI to do the very best part of our job.

The coaching industry is human-centered and relies on transformational learning, from humans for humans. It requires mindfulness and situational awareness to minimize subjectivity; emotional intelligence and empathy to connect with people; and a good dose of dialectic to provoke solution-oriented thinking. It is important to understand that the “objectivity” of AI is a simulacrum – an ideal representation of objectivity that, in fact, we do not possess ourselves and cannot provide to machines.

The best fit I see for AI in coaching at the moment is not handling clients, but supporting us. In the absence of conversational partners or coaching supervisors, AI purposefully designed to support reflective and introspective thinking can help the coach see things differently and discover previously unknown perspectives. Generative AI tailored to create coaching scenarios may also support the training of coaches with simulations.

Essentially, AI in coaching is already fitting the role of a digital learning tool that goes beyond theory. This is achievable through a human-in-the-loop (HITL) framework: human knowledge is provided to the machine and regularly fine-tuned based on feedback; in exchange, the machine learns from this feedback loop, providing better services over time. The HITL framework allows continuous monitoring, which is essential to understand how the machine interprets knowledge and transforms it into output. In practice, this allows people, from developer to user, to identify when things go wrong – for example, to detect biases that are more or less implicit.
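The HITL cycle described above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration – the function names (`model_predict`, `human_review`) are placeholders, not an API from any real product – showing the core idea: the model proposes, a human reviews, and human corrections are collected to improve the next round.

```python
# Hypothetical sketch of one human-in-the-loop (HITL) review pass:
# the model makes a prediction, a human checks it, and any overrides
# are collected as training material for the next fine-tuning round.

def hitl_cycle(model_predict, human_review, inputs):
    """Run one review pass; return human-corrected examples."""
    corrections = []
    for item in inputs:
        prediction = model_predict(item)
        verdict = human_review(item, prediction)   # the human stays in the loop
        if verdict != prediction:                  # the human overrode the model
            corrections.append((item, verdict))
    return corrections
```

Monitoring the override rate over time is one simple way to do the continuous monitoring mentioned above: a rising rate of human corrections can be an early warning that the model is drifting or reproducing a bias.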

There are several themes that Artificial Intelligence could explore for coaching, but for some of them ethical concerns and even legal implications become even more critical. Non-verbal behaviors and emotion recognition, for instance, aren’t always understood by machines, because human patterns vary by geography and subcultural group. It would be interesting for coaches to experiment and take part in the development process for these, with the understanding that their use in business may be, at least for now, controversial.

The point here is that human learning and machine learning are very different.

One peculiar divergence can be found in implicit learning: this is built on “data” we unconsciously, and often inattentively, infer from the world – so-called tacit knowledge. It is based on subjective experience and exposure to events, and it contributes to making each individual unique.

A second divergence is that machines provide predictions and generate content, but they do not inherently understand the meaning and the impact of what they make. AI lacks both critical thinking and the ability to worry, which is why we cannot hand over moral and ethical responsibility to it.

In conclusion, we must handle AI integration in our business carefully, given its ethical challenges and many gray areas. Early adopters gain an advantage by experimenting with the technology, but they may need to go through a process of redesign later on, which can prove expensive and may even mean changing the adopted models altogether.

Once more, coaches must pioneer new frontiers and be ahead of their time by reviewing studies, use cases, and even proposed legislation such as the EU AI Act. With the big picture in view and a higher degree of awareness, the coach won’t just make better decisions about AI, but will also foster meaningful innovation, providing the industry the shift it needs.

[1] Choice Magazine, Vol. 21, No. 2 (2023)
[2] Coded Bias (2020)

Let’s continue the conversation by connecting with your colleagues on our Facebook page