AI-Coaching: what works, and what does not work (yet?)

Rebekka Manos • 28 January 2026


Lately I have been experimenting with AI-Coaching, asking LLMs to guide me through a method or a set of questions.

Personally, I was testing Internal Family Systems (IFS) by Dr. Richard C. Schwartz and Immunity to Change by Robert Kegan and Lisa Lahey. Additionally, some of the leadership programs I facilitate work with specifically prompted LLMs that support the exploration of classical leadership development models such as SCARF by David Rock or the Five Dysfunctions of a Team by Patrick Lencioni.
 
On many occasions, the AI makes a passable companion: it follows the pattern of a method professionally, brings in relevant questions, and does a mostly excellent job of summarizing the important bits of the conversation.
Especially with longer explanations by a coachee, the AI has a great capacity to listen, highlight the most important aspects, and bring them into a digestible structure. When exploring options for solutions, much depends on the fencing (is the AI allowed to include all data or just specific data?), but the ideas it generates within the frame of the model largely make sense.
Furthermore, the AI can show compassion in its responses, and conversations can feel more private than sharing our concerns with another human, lessening the fear of being judged.
 
Still, there are important aspects of a coaching session that, as I see it, currently only a human can provide:
  • An unexpected or unconventional change of gears
  • A safe space with the promise "I will hold you"
  • A space to feel seen
  • A "Stop and try for yourself"

An unexpected or unconventional change of gears

In most coaching journeys, there is a moment where a method has been used to exhaustion, and a change in pacing, method, or an unconventional intervention is required.

Sometimes this can be as small as playing devil's advocate, sometimes as big as switching from conversation to expressing emotions through the body or closing a method to employ a new one.

Currently, LLMs lack the proactivity to do so: they do more of the same rather than recognizing that a change of pattern is needed. They need to be prompted to change gears, which requires that the coachee recognizes their own need to switch and can express it.


Sometimes, on the other hand, the coachee expresses a wish to switch to a different method ("I think this is not for me"), but this is really a pattern of avoidance that might need gentle probing before complying. An AI that complies immediately can be counterproductive in these situations.


Long-term, I could see this aspect being implemented in LLMs through consistent training and updating of behavior patterns.

A safe space with the promise "I will hold you"

Many personal explorations demand a lot from a coachee, diving deep into personal reflection, admitting personal challenges or unhelpful patterns, or revisiting past painful events.

Some coachees using self-coaching powered by AI are tempted to plunge into the cold water headfirst, possibly causing severe shock to the system. Others experience that their minds and bodies simply refuse to enter the water at all, drawing blank or developing somatic symptoms.


Here, the steady connection to a human coach can help: someone to hold our hand and provide a steady, safe anchor while we slowly start to explore the depth of the water; someone who will help the coachee swim, stop them when they get in too deep for the moment, or haul them up again.

We all still have tribal, autonomic nervous systems. Being held safely by another human being is invaluable, something only working with a human coach can provide; it goes beyond the purely intellectual realm, reaching the body and our more primal instincts.

A space to feel seen

As humans, we strive to be seen, recognized and validated by other humans. It is part of our programming and, apart from encouraging personal development, can be an important outcome for any coaching session.

A coach is a witness to others’ experiences and can serve as an entity to validate the aspects of living a human life in all its turbulences. Sometimes, coaching is more about listening and validating in the first place, and enabling change is secondary.

AI nowadays can mimic these responses, but it remains to be proven whether they register as truly calming and validating with the coachee.

In my personal experience, as soon as I finished my experiments, I had gathered insights but missed the connection provided by a conversation with another human.

A “Stop and try for yourself"

By design, conventional AI is intended to keep its users engaged: it follows up with another offer of support or another question.

This is contrary to the aim of coaching: coaching is intended to be an intervention, a shorter and intense period of usually a few sessions. Depending on the topic, it could extend to a longer timespan or develop more into regular supervision.


Fundamentally, as a coach, my intention is to support coachees in helping themselves. We swim together until the coachee feels safe enough to swim on their own. Therefore, from time to time I will encourage coachees to try on their own, assuring them they can return any time, but not urging them to.


I do not see that an AI driven by commercial interest would currently have a built-in stop mechanism to kindly end the engagement and encourage the coachee to get out on their own again; this is a serious conflict of interest.

One could argue that a human coach faces the same conflict of interest, and I have indeed seen coaches attempt to make their coachees dependent.


But as humans, we still have a choice rather than programming, and a value system that can guide us as coaches to do the right thing.

Not yet? Or never?

In summary, I see many benefits in using AI for coaching support, but more as a helping tool in between sessions, under the supervision of a human coach, with deliberate openings for the connection and safety a human coach can provide, and with purely financial interests reined in.

It remains to be seen to what extent modern technology can successfully meet the deep emotional and bodily needs of a human being.


**All blog posts are written by me and reflect my personal opinion and viewpoint at the time of publishing. While I am convinced that AI has great benefits, I personally prefer to read what other humans think. Therefore, AI is only used to support the wording and grammar, not the content of the posts.
