Doubt yourself and improve as a coach

For context, consider the following story.

I was having a discussion with a client today. Her best squat is 145kg, and in her last peak we squatted 142.5kg. In previous peaks she has gotten super beaten up, and occasionally her squat performance has been up and down, which affected her confidence under heavy loads. Furthermore, because we linearly increased top-set loads for squats all the way into the final weeks, squats got hard 3-4 weeks pre-taper and she rarely felt that she moved her top sets as well as she could have. Even though she knew this context and we discussed it at the time, it wasn't confidence-inspiring for her.
On reflection, we determined that we could either peak more abruptly, reserving heavy squats for later in the peak, or spend more time in the range of her opener rather than ramping up. Because of her preference for practicing those loads and building confidence with them, we opted for the latter.
Zoom ahead to today. About 4 months have passed, and we're now in her peak for her next competition. She's just squatted 135kg (likely to be her opener, or thereabouts) for a single for the second week in a row, and smoked it. She wanted to do more afterwards, but stuck with the program and dropped back to 127kg for triples, which were "hard" but pretty good.

After her session, we’re messaging and I said something to the effect of “I think it’s been really good for you to keep your top sets easier for squats. You’re a really good grinder of them, and you could probably go heavier, but the upshot is that you’d be fatigued and the quality of your later work would suffer. I don’t think I’ve seen you handle 127 as well as you did today, and part of the reason is probably that you weren’t so fatigued from your top work. They look closer to 85% than 90% right now, which is great”.
She responded that she’s really pleased with how it moved, that I’m probably right that not going heavier has been a net positive, and that she’s feeling super confident for squats right now.

Now, if you read that story, it's a pretty open-and-shut case of me being a totally kickass coach. We've reflected on past experiences, updated our plans in light of them, and gotten the desired result.

However, that might not be the case at all. What I want to describe today is a complex of biases that I think contributes to trainers and coaches becoming accidentally closed-minded and losing their healthy scepticism, and in some cases probably leads to some pretty broscientific beliefs.

Good coaches engage in reflection, exactly as I described above. After seeing the results (or non-results) of their clients, they identify strategies to address weaknesses and take advantage of strengths. They make inferences about what is likely to be beneficial in the future on the basis of the training and results that they have observed in the past, and put them into play.
When we retest at a later date, if a given training approach happens to have been successful, we take that as a vindication of our practice and, tacitly, of our inferences. And that's where the problem lies.

(Figure: the feedforward cycle in which our inferences about training can be self-reinforcing.)

In reality, I think it's very important that we remain sceptical of our inferences. In fact, as I'll explain, we probably need to be sceptical of the efficacy of our practices too: your clients can make improvements over time in spite of your practices, however well-intentioned or well-reasoned they are.

Confirmation bias describes the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories. For instance, if you believed that all Scandinavian people are blonde (basically true), and asked every blonde person that you met about their ancestry, finding that a number were indeed Scandinavian, without considering that many of the brown-haired people that you pass in the street may also have Scandinavian ancestry, you would be exhibiting confirmation bias – you are looking for examples to confirm a conclusion that you already hold. Likewise if you believe in horoscopes, and happen to be extra sensitive to occasions when your horoscope was correct whilst ignoring all the times you were predicted to find love and didn’t (RIP), then you’d be exhibiting confirmation bias.

In the case of coaching, it is very tempting to construct narratives to support our coaching practices based on the successes of our clients. One of the criticisms that Kiely levels against periodisation research in his (essential reading) paper on the topic is that conclusions drawn about the greater efficacy of periodised training over non-periodised training may simply confuse the benefit of variation with the benefit of a periodisation model per se.
As coaches, we tend to have a similarly rose-tinted view of our own work, assuming that our insights and conclusions have led us to optimal practices. That we are overly sensitive to stories of client success that we can attribute to our planning (i.e., susceptible to confirmation bias) is part of the problem, although many good coaches do remain aware that their approaches don't always work.
The other, even more pervasive, belief that we often exhibit is that the results we expect necessarily follow from the inferences we have made about training. That is to say, not only are we overly sensitive to examples of our own success (i.e., we tend to think we are better than we are), but when things work out, we also tend to credit the narratives we have constructed from our beliefs about training rather than other, often more likely, factors.

Cast your mind back to my opening story – my client has had what appears to be a successful training cycle. Is her greater confidence in her squat singles now due to the changes in training structure that we made on the back of her last peak? Possibly.
Is it also possible that she has simply trained for 4 months and gotten stronger, or that other technical changes that she has made have contributed to her squatting better? Is it possible that she has less lifestyle stress now than in previous peaks, that she is better adapted to doing similar volumes of squatting, that her program includes more overall recovery, or that her beliefs about her squatting proficiency have changed?

All of the above are actually likely true, but as coaches we are tempted to look for simplicity where complexity exists. We want to identify the concrete, measurable and generalisable aspects of our practice so that we can use them with other clients in the hope of making them better, and I certainly don't think that is a bad thing to do. However, I think we need to temper these assumptions with an appropriate degree of doubt.

Being cautious in your attributions ("my client seems to be doing better this peak – we've made some changes to her training structure that might have helped, but she's also come off a productive off-season block") helps prevent you from jumping to conclusions that are too strong on the back of little evidence.
If, instead of looking for the narrative that most closely conforms to your opinion as a coach, you identified a list of reasonable or believable factors that might have contributed to the results you've observed, and filtered that through how much improvement you'd expect to see simply from the passing of time, how different would your appraisal of your practices be?
To quote Kiely’s article, because he states it so eloquently (emphasis mine): “an unbiased evaluation of the worth of any training scheme requires that both successes and “failures” be factored into analysis. As such, the highlighting of isolated high-achieving exemplars to confirm the superiority of any planning scheme while neglecting to consider those who conformed to a similar framework yet “failed” is a fundamentally lopsided, albeit attractive argument. Furthermore, the training plan is but one facet of the multidimensional “performance” phenomenon. Did the planning methodology contribute to, or detract from, the exceptional performances of an exceptional performer? Would a different plan have led to greater achievement, a longer career, less injury, or illness? Our inability to run counterfactual alternative-reality iterations originating from common initial conditions renders such arguments unresolvable. Instead, we must rely on critical reflection, informed by evidence, contextualised against conceptual understanding, and cleared of presumption.”

The thrust of my article so far is that it is entirely possible to observe what you expect to observe as a result of your training interventions, without the reasons actually being related to your inferences or reasoning. Performance and adaptation are complicated phenomena underpinned by a multitude of processes, and overly simple explanations for your observations are rarely entirely correct.
Accidental competence doesn’t actually sound so bad on the face of it, and if your clients continue to get better then it seems like a case of “no harm done”. In reality, though, there are a few consequences to a lack of scepticism and rigour in your thinking that CAN stymie your development as a coach.

1 – It breeds overconfidence in your own expertise. The best coaches engage in continuing education and are willing to assimilate new information into their beliefs and practices surrounding training. They are willing to refer out or seek new information to solve novel problems, and don’t rely entirely on their intuitions. If your confirmation bias causes you to remember your successes and forget your “failures”, and you attribute your successes only to your own planning, you’ll probably think you’re a lot better than you are. If over time that makes you less willing to seek new information, you’ll fall behind, you’ll very likely be faced with problems that you can’t fix, and you’ll be overly committed to training practices that might not really explain the stellar results of your past clients.

2 – It narrows your thinking. Related to the above, if you believe that x intervention “worked” for y reason, without considering alternative explanations and plausible counterfactual scenarios, your ability to generate other solutions to similar training problems will decrease. It may be that your current approaches are situationally inappropriate for future clients, so being flexible in your thinking can benefit your later planning.

3 – It can lead to mistaken beliefs about the efficacy of your practices. Consider the story of my client – if I believed that her success was solely attributable to avoiding heavy singles for the majority of her peak, that belief may become limiting at the stage when her ongoing development does require them. For a more extreme example, consider entirely “broscientific” training approaches, such as the guy who claims that his unique training split, and not an appropriate chronic dose of training stress and recovery, explains his muscle growth. When his training stalls, and altering his training frequency or the distribution of his workload is perhaps what’s needed to kickstart things again, how likely is he to move on?

4 – It leads to difficulty assimilating conflicting evidence and anecdotes. If you are too dogmatic in your beliefs about what training works and why, then when confronted with people doing dissimilar things, occasionally with great success, you’ll almost certainly discount their practices because they don’t conform to your narrow framework of what effective training is. On a related note, I personally discounted a number of Westside Barbell-influenced training methods for a long time because I didn’t buy their reasoning. As it happens, I still don’t in most cases, but I’ve come to believe that some of their practices have merit for other reasons. Because effective training rests on a very broad bedrock of principles, it’s possible for highly divergent approaches to be effective, especially given different individuals following them. If your training scope is narrow, and your beliefs about why certain training approaches work are too limiting, you might end up being that crusty old strength coach who says all exercise science is crap because one or two papers found something that runs counter to your current beliefs.

So, summing up: it’s GREAT for us to review training, and we SHOULD make inferences about what is needed in the future based on what we see in the past. Often enough, when our later training is successful, we CAN take credit for good planning. However, we CAN’T let ourselves become blinded to the flaws in our thinking. Not all of our interventions are successful; some are successful for reasons other than those we implemented them for, and some are successful in spite of poor planning.
When we assess our coaching practice, we shouldn’t just settle for the most immediate or comfortable appraisal of what happened and why. Considering what we missed, what we don’t know, and what we could do better is the essence of continued development as a coach. Resist being wedded to simple explanations of complex phenomena and your eyes will open to a wide array of training lessons and tools to use in the future, and you’ll be readier to adapt your approaches to meet the needs of the person in front of you.

If you LIKED this article, join my mailing list – it’s FREE and I can send you stuff directly to your inbox. You can also follow me on Instagram, where I frequently post training and diet analysis and advice on my stories.
I also have a podcast, Weakly Weights, available on iTunes and Podbean, where I discuss training for powerlifting in-depth. You can join our mailing list, too, for free sample programs when we discuss them.
Finally, if you like this content and want to learn more from me, check out Fitness Fundamentals – a website providing the most up-to-date, applicable fitness information, run by my colleague Luke Tulloch and with content written by me.
