The Unexpected Power of Making Mistakes
My exclusive interview with Amy C. Edmondson, renowned Harvard Business School Professor of Leadership and Management, about psychological safety and the right kind of wrong.
In the Never Normal, where change happens fast and unpredictably, how we work together is just as crucial as what we are working on. And psychological safety, a concept I love to refer to in my keynotes, lies at the very heart of that “how”. That’s why I was so excited to interview Amy Edmondson, who uncovered that groundbreaking concept almost 25 years ago. We talked about the seemingly paradoxical relationship between error-making and team effectiveness, the misunderstood aspects of psychological safety, its peculiar dynamics within boardrooms, its popularity in tech companies, Steve Jobs’ ‘toxic’ leadership, remote work and her upcoming book, “Right Kind of Wrong: The Science of Failing Well”.
Amy bumped into the concept of psychological safety by accident. She set out to study organizational learning, which is essential in this fast-changing Never Normal world, and she was particularly interested in team dynamics. “Organizations are too complex to learn in any formal sense, but their teams learn,” she explained.
Better teams, more mistakes
And so she joined a team conducting a landmark study of medication errors in hospitals, where she was asked to measure team effectiveness and to find out whether it predicted error rates. Her data showed that there was indeed a statistically significant link, but in the opposite direction from the one she had expected: better teams appeared to be making more mistakes, not fewer. After her initial surprise, she suspected that these teams were not actually making more errors, but rather were more open and willing to report and discuss them.
She initially called that discovery - where some teams were learning-oriented, with leadership that was open and willing to discuss errors, while others were not - a difference in “interpersonal climate”. Eventually, a reviewer of one of her papers suggested she use the term “psychological safety” instead. That paper was published in 1999 and attracted academic attention, but the concept did not yet become mainstream.
That only happened in 2015, when Google published the findings of Project Aristotle (launched in 2012), its research into the dynamics of effective teams, which were then popularized by a New York Times article. The Google researchers found that what really mattered was less who was on the team and more how the team worked together. And psychological safety turned out to be the key characteristic of that “how”, alongside concepts like “dependability”, “structure and clarity”, “meaning” and “impact”.
Candid, rather than (just) nice
“Psychological safety is not about being nice, being comfortable or even job security. A lot of people get that wrong. In a funny way, it’s almost the opposite,” Amy explained. “Being nice usually means ‘don’t say what you really think’, ‘be polite’ or ‘hold back’. But I think it’s important to ‘lean in’, to be candid - in a productive and thoughtful way - rather than nice.”
As a board member of a bank and a media organization, I was curious about Amy’s take on psychological safety in that type of environment. Her answer surprised me: boards often display lower psychological safety because of the strange dynamics between executives of different ranks.
“People at the top, relative to people lower down, have much more confidence that their voice is welcome,” explained Amy. “They have higher psychological safety in that sense. But when they come together, the gaps are bigger between a senior executive and a chairman of the board than, for instance, between a middle manager and her team. My guess is that in high-functioning boards, that gap is probably smaller, and that they offer room for thoughtful and candid conversations.”
Psychological safety in technological environments
I’ve always been intrigued by how tech companies like Google seem especially committed to the concept of psychological safety, and I really wanted to pick Amy’s brain about that. She believes the strong link between the two comes down to two things: more awareness and more necessity.
“Tech in general is fast-moving and ever-changing, and therefore acutely dependent on learning,” said Amy. “That’s why the industry is fully aware of the need for constant learning and development, knowing how easy it is to be left behind. And second, while there is a statistically significant relationship between psychological safety and team performance, the effect size is larger when the work is more knowledge-intensive and more in need of problem solving and ingenuity. And that’s definitely the case in technology.”
As you probably know, I’ve always been a great fan of Steve Jobs and Apple, and he really did have the reputation of being quite the toxic leader. He pushed his people towards greatness, while there was probably very little psychological safety in his teams. Yet together they achieved the almost impossible, so I asked Amy for her take on that.
A timeframe problem
“Well, we don’t have the counterfactual here,” answered Amy. “We don’t know what would have happened if Steve Jobs had been in charge earlier in Apple’s history but as a slightly better leader. Maybe we wouldn’t have had some of the problematic turns in that wonderful company’s history, right? And secondly, Ed Catmull, co-founder of Pixar, has told me and others that some of what Walter Isaacson wrote in his biography is overplayed. According to Ed, there really was a ‘before’ Steve (before he got fired from his own company) and an ‘after’ Steve, and that he came back much more thoughtful.”
“Above all, I believe that toxicity in teams is a timeframe problem. Yes, you can completely burn out a team and still achieve something amazing. But it's not a great recipe for the long haul. Now if you create the conditions for candor and learning, you will be able to build something pretty extraordinary for the long term. And I think that’s what you want to achieve.”
A piano top in the ocean
When I asked her how technologies like Zoom and Teams saved us during the pandemic, Amy shared a wonderful metaphor about the pitfalls of working from home.
“Buckminster Fuller opens his book ‘Operating Manual for Spaceship Earth’ with a metaphor about a shipwreck,” she said. “If you were shipwrecked in the middle of the Atlantic Ocean and a piano top floated by, it would make a fortuitous life preserver. But that is obviously not the same as saying a piano top is the best way to design a life preserver. He went on to write that humanity was clinging to many piano tops. And I see technology-enabled work as a piano top during the pandemic. Thank goodness we had Teams and all of these platforms, but that is not to say this is the best way to design collaborative work.”
And we did see a loss of collaborative relationships across silos during that period, as Amy explained. “Fully remote is quite difficult. During the pandemic we did great compared to what might have happened without these tools, but one could think of that as draining your savings account. We were able to keep our connections alive and stay in touch with those we already knew, but it did not work as well with those further from us at work. On the other hand, ‘how do we get our people back into the office?’ might be the wrong question. We must shift from ‘how do we get people to do something they don’t want to do?’ to ‘how do we help people want to do this because it helps them, helps their colleagues and ultimately helps our customers?’. We have to get FOMO on our side; most people want to be where the action is.”
The right kind of wrong
We ended our conversation by talking about her upcoming book “Right Kind of Wrong: The Science of Failing Well”, which will be out on September 5th, 2023.
“Not all failure is the ‘good’ kind,” she explained. “And even though all failure is to be learned from, not all failure is to be embraced. I’m a huge fan of the ‘fail fast, fail often’ mantra in new territory - where no one knows what to do because no one has ever done it before - but not so much in high-reliability organizations. You really don’t want your local nuclear power plant to fail fast and fail often, do you? We need the wisdom to distinguish between the domains where 99.999% perfection is in fact achievable and those where 5% perfection might be a stretch.”
“Intelligent failure is the home turf of scientists, inventors, innovators, celebrity chefs and elite athletes. These pioneers necessarily experience some of the right type of failures, because they operate in new territory, where you can’t just search the internet for the answer. Intelligent failures are also opportunity-driven - that is, they offer the possibility to make progress toward a valued goal in life or at work. Importantly, they are hypothesis-informed, meaning that you’ve done your homework; you are not just guessing. Last but not least, intelligent failures are as small as possible: just big enough to learn from, no bigger. You don’t invest your life savings in an uncertain venture, for instance.”
“So when CEOs say ‘fail fast’, what they mean is: ‘please fail fast in those domains where there is no other way to get the new knowledge, and do it in a way that doesn’t harm human life or waste unnecessarily large amounts of resources’. That’s the right kind of wrong.”
If you want to learn more about leading in the Never Normal, check out my company nexxworks' upcoming 2024 learning programs about leadership (Singapore), accelerating innovation (Ghent Never Normal Masterclass), next generation tech & business (Youth Tour in Boston), Artificial Intelligence (Munich), customer culture (Miami), business and tech trends in China (Hong Kong, Macao, Guangzhou & Shenzhen) and more here.
This first appeared on Peter's newsletter. Read that and more here.