My First Podcast: Automation Overload

Interview with Patrick Mendenhall: Automation & Technology Management

I recently sat down with TrainingPort Lead Technical Advisor, Brent Fishlock, to discuss a number of Crew Resource Management issues. Chief among them was the very timely topic of Automation Overload. Automation and technology have certainly made our lives better, but they have not come without a cost: they are accompanied by a whole host of challenges, many of which we neither anticipated nor prepared for. Although not part of the original CRM subjects, Automation and Technology Management has recently become a critical part of current CRM philosophy.

The Podcast can be found with several other engaging discussions on TrainingPort.net.  Please have a listen and feel free to share your thoughts.

Threat Management: “Expected” Threats

“Thinking” as an Antidote to Complacency…

Threat and Error Management is the cornerstone of Crew Resource Management. So much so that it is the only training element that the new Transport Canada Contemporary CRM guidance (Transport Canada AC 700-042) requires to be presented annually; the other training elements need only be covered every other year after initial training has been completed. This is the first of a series of posts on Threat and Error Management.

The ideal outcome of effective Threat and Error Management is to never reach the “Error Management” portion of the concept at all. Although not every threat can be identified and mitigated in advance, if we maintain a vigilant watch for the most likely and the most impactful threats, we can significantly reduce the chances of reaching an undesired aircraft state.

Let us start with “Expected Threats”: these are threats that can be anticipated in advance, such as possible wind shear, bird activity, high density altitude, or obstacles that require special engine-out procedures. Expected threats and their mitigations should be discussed before the event that prompts them, and by “discussed,” we do not mean a monologue. One of the foundational assumptions of CREW Resource Management is that every person who is part of the flight operations team has life experiences that the others do not. The briefer should encourage a dialogue and elicit the other crew members’ participation as appropriate.

And don’t forget, it’s not just about the pilot stuff. What about briefing the flight attendant(s)? S/he may have concerns that need to be considered – for example, a passenger with a known health condition: this could be a threat that requires the crew to be more vigilant about medical emergency procedures, including a constant watch on potential divert fields, should the need arise. What about dangerous goods? A discussion with the loader about the location, packaging and coding of the DGs would be a valuable threat consideration.

For each specific flight, always consider the most likely threat (e.g. numerous birds reported in the vicinity of the departure end of the runway) and the most impactful threat (e.g. engine failure at V1 at heavy weight). Talk about it. Be specific. If there is a way to eliminate the threat, do it. Articulate your “bottom line” (e.g. “Because we are at max gross takeoff weight with gusty conditions, if there are any reports of wind speed deviations of +/- 5 knots or more, we’ll wait until the system passes to take off…”) and have a plan in place in the event that the threat does come to fruition.
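To make that kind of “bottom line” concrete, here is a minimal sketch in Python; the 5-knot limit, the function name and the report values are all hypothetical, chosen only to mirror the example briefing above:

```python
# Hypothetical example: encode a briefed "bottom line" as an explicit check.
# The 5-knot limit below is illustrative, not a regulatory or company figure.

MAX_WIND_DEVIATION_KT = 5  # briefed limit for a gusty, max-gross-weight takeoff

def takeoff_bottom_line(reported_deviations_kt: list[float]) -> str:
    """Return the briefed decision for the reported wind-speed deviations."""
    if any(abs(dev) >= MAX_WIND_DEVIATION_KT for dev in reported_deviations_kt):
        return "HOLD: wait for the system to pass before takeoff"
    return "GO: deviations within the briefed bottom line"

print(takeoff_bottom_line([2.0, -3.5]))   # GO
print(takeoff_bottom_line([2.0, -6.0]))   # HOLD
```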

So think about Threat Management every time you fly and at every stage of flight. Thinking is an antidote to complacency, and the Threat and Error Management process causes us to think. Indeed, it forces us to think! We must all use our collective noggins and constantly consider the “what ifs.” If your attitude is, “the company doesn’t pay me to think,” you have just made the choice to become a Bus Driver rather than a Professional Aviator. The choice really is ours, and our fellow pilots and employers deserve the latter.

Next up… Unexpected Threats


It is Always Better to Manage the Threat than to have to React to It!

On October 12th of last year, we were dispatched on a flight from Atlanta to Portland. The remnants of Hurricane Matthew had finally cleared the Southeast, at least, but the forecast for our 7:00 PM arrival in Portland was pretty much at Category I minimums. With a couple of decent alternates available and some extra gas, off we went on our five-and-a-half-hour journey to the City of Roses.


As we progressed across the country, the hourly ATIS updates were getting markedly better. We were a little behind on our fuel plan, but with the weather observations trending so nicely, we weren’t too concerned. One last ATIS just minutes prior to our descent reported a 1,500-foot ceiling, 10 miles visibility and 5 knots of wind right down the runway. This was going to turn out to be a good night after all!


From somewhere over Nashville until top-of-descent, I had been mentally preparing for an easy night. The progressively improving ATIS reports were such a stark contrast to the forecast that they put me in that “…nothing to worry about…” state of mind.


What that last top-of-descent ATIS did not mention was the moderate turbulence we would experience during most of our arrival. The rocky descent should have been my first clue; not seeing the lights at 1,200, then 1,100, then 1,000 feet should have been a tipoff as well. At 800 feet, with nothing to look at, I was starting to wish that I had done a better job of briefing the missed approach: “Hmmm, was that an immediate right turn, or a climb and then a turn?”

Based on the latest ATIS, my expectation was that I would see something that looked like this by at least 1,000 feet: 

Such was not the case: somewhere between the latest ATIS and now, a squall had moved over the field and at 200 feet AGL (Cat I minimums), this is what we saw:

The GO AROUND was not picture perfect, but it was successful (and safe). On downwind, we did some fast fuel calculations and figured that we had enough fuel for two more misses; after that, we would be committed to heading for our alternate.
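For illustration only, here is a minimal sketch of the kind of fast fuel arithmetic described above; every figure in it is invented, since the actual numbers from that night are not given:

```python
# Hypothetical figures for illustration only; real dispatch fuel planning is
# far more involved (reserves, contingency, alternate routing, and so on).

fuel_on_board_lb = 12_000      # assumed fuel remaining on downwind
fuel_per_approach_lb = 2_500   # assumed burn per approach plus missed approach
fuel_to_alternate_lb = 6_500   # assumed fuel to divert, including reserves

spare_lb = fuel_on_board_lb - fuel_to_alternate_lb
attempts_remaining = spare_lb // fuel_per_approach_lb
print(f"Approaches remaining before committing to the alternate: {attempts_remaining}")
# -> 2 with these invented numbers, matching the crew's count in the story
```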

On the next approach the squall had passed, and the updated ATIS indicated that the weather was back up to a 1,000-foot ceiling, two miles visibility and calm winds. What could possibly go wrong?

We broke out at about 700 feet – not the 1,000 feet that had been expected – with PDX 28R in sight. At 400 feet, Tower announced, “ATTENTION ALL AIRCRAFT, WINDSHEAR ADVISORY IN EFFECT; LOSSES OF 20 KNOTS REPORTED ON FINAL, RUNWAY 28R.” The startle reflex took effect, and after giving my mind a moment to digest this news, I initiated a GO AROUND. Now this was getting ridiculous! Around we went – again – now with just enough fuel for one more try before activating “Plan B”.

The third approach turned out to be a non-event: the rain had passed; the windshear was gone and other than a runway change due to the wind shift, life was good again!

The worst part of that night was not that any of it happened – after all, we train for this all of the time! What made it so difficult is that we were not mentally prepared to address the obvious threats. Our reality didn’t even come close to our expectations!

So how did this turn into such a challenging night?!

Answer: COMPLACENCY! The threats that existed in the forecast when we departed Atlanta were still very much present when we arrived in Portland. The steadily improving weather observations, including the changing wind direction and temperature, should have alerted us that a front was passing and that we should be on the lookout for associated weather, including windshear.

And regardless, just as we do for a simulator checkride, we should always brief the GO AROUND like we mean it – every time! If it hadn’t been minimums or windshear, it could have been spacing, a runway incursion or an unstabilized approach. Lesson learned: always anticipate, respect and manage the threats before you have to react to them!

How a Retailer has Adapted HRO Principles with Great Results

My wife and I recently downsized to a brand spanking new condo.  Along with fresh paint and a few other personal touches, purchasing new window treatments was high on our list of move-in priorities.  After diligent consumer research (and yes, influenced by a fairly intense advertising campaign), we chose Blinds.com to deal with that task.  The experience was exactly as advertised: not only was the initial customer experience thorough and professional, but when we discovered that there was a slight discrepancy regarding color selection, the company accepted full responsibility for the error.  They replaced about $2,000 worth of product – without hesitation.

I’m not writing this to endorse a product or company. In fact, I would never have gone beyond a “thank you,” but when I came across an article in the May 2014 issue of Inc. magazine about the tremendous success of this company and its founder, Jay Steinfeld, I had to take a closer look.

Although the article never mentions “High Reliability Organizations” (HROs), what Steinfeld mobilizes in his business are, in fact, the five principles of HROs outlined by Weick and Sutcliffe in their 2007 book Managing the Unexpected[1]:

  • Preoccupation with Failure: When made aware of the problem, the Customer Service Department was completely focused on getting it right and unrelenting in that pursuit.
  • Reluctance to Simplify: Without getting bogged down in the details of this very unusual case, it would have been convenient to find a pigeonhole in which to place my problem: give the customer a “take it or leave it” resolution and move on. This did not happen, and it undoubtedly cost the company in the short term; but the long-term benefits of being reluctant to simplify the problem (greater customer satisfaction, loyalty and that invaluable word-of-mouth endorsement) will outweigh the short-term cost.
  • Sensitivity to Operations: Steinfeld states, "If we want to get to the truth, we have to hear the truth.” This could be straight out of Weick & Sutcliffe.
  • Commitment to Resilience: "I'm risk averse; I hate making mistakes," says Steinfeld. He learned to overcome that fear by recognizing that the consequences of failure are seldom catastrophic, a lesson he now shares with his employees. "'You don't have to fear the mistake. It will never kill you,' I tell them," he says. One of the five HRO principles is accepting that, yes, mistakes will happen, but a true HRO will not allow mistakes to disable it.
  • Deference to Expertise: In the interview, Steinfeld says that he tells new employees that he expects them to question his decisions. There is no clearer example of management deferring to the expertise of those on the front lines than this invitation to challenge the boss’s decisions.

When I read the Inc. article, the reason for their success – on a personal and a corporate level – came clearly into focus. Steinfeld has figured out, whether intentionally or just through luck, that the benefits of being an HRO are not limited to high-risk industries such as aviation or healthcare: the principles that make an organization highly reliable can be adapted, with great success and profitability, to retail and to other industries as well.

Aviation carries enormously high risks, and we are well aware that people who work in any industry with life-or-death consequences may bristle at such comparisons. But just as healthcare has often been resistant to lessons from other high-risk industries such as aviation, other fields might be tempted to dismiss them too; this example merely bolsters the assertion that these principles work universally.

[1] Weick, Karl E., and Kathleen M. Sutcliffe. Managing the Unexpected: Resilient Performance in an Age of Uncertainty. San Francisco: Jossey-Bass, 2007.

“…just two guys in a box” – Really?

by Suzanne Gordon & Patrick Mendenhall

As we have gone around the country discussing our book Beyond the Checklist: What Else Health Care Can Learn from Aviation Teamwork and Safety, we have been struck by the number of people who insist that healthcare has little to learn from aviation because the two enterprises are entirely different. Critics suggest that healthcare is far more complex than aviation. One physician in charge of simulation at a large medical school blithely opined that really “in aviation, it’s just two guys in a box.” Another physician insisted that “…flying a 747 is really no different than flying a Cessna.” On further inquiry, we learned that he had done neither. Even many who are somewhat sympathetic to our message believe that healthcare and aviation have little in common.

This idea has likely taken root because people do not understand the complexity of the global system of aviation safety in which each individual flight is embedded. People think of an airplane flight as an individual, discrete entity: plane takes off, plane lands. Just two guys in the box get it off the ground and back on the ground with remarkably few glitches, day in and day out. This idea is reinforced each time we look up at the sky and see a vast expanse of blue (or gray, if you live in Seattle as Patrick does) with maybe the odd airplane skimming the horizon. What the individual standing on the ground does not see are the many, many airplanes up in the sky at 28,000 to 60,000 feet, all of which function in the same kind of interconnected system that patients in a hospital or other complex facility depend on.

For a little perspective on what is really going on “up there”, take a look at this YouTube video that shows you what is going on beyond your view in the so-called friendly skies:
Typical Airway Chart

Let’s say you are in San Francisco or Seattle, or New York or London. At any given time, there may be hundreds – even thousands – of aircraft above you, beyond your view. At many major airports, a flight departs or arrives nearly every minute for most of the day. The sky is mapped out in interconnecting “airways” – highways in the sky – that pilots must navigate and monitor with extreme precision in three dimensions to avoid conflicts (two or more aircraft occupying the same space at the same time). Flying safely requires continuous coordination and cooperation among aircraft, air traffic controllers and internal resources that include cabin crew, ground support, company dispatch and maintenance.
Add to all of this complexity the variables of weather, which can affect the entire air traffic control system in a particular region – or even an entire hemisphere, as in the case of the Eyjafjallajökull volcano in Iceland in 2010.
Eyjafjallajökull Volcano Ash Cloud

In our book, we wrote that the kind of one-upmanship that pits healthcare and aviation against one another – insisting that one is potentially more lethal or more complex than the other – is ultimately unproductive and prevents people from learning needed lessons from each. As we put it in the introduction:

To focus only on the differences between the two endeavors, however, is to ignore the very important structural similarities that make the CRM model a useful and readily adaptable foundation for beneficial change in health care. No one can prove who experiences more job stress or complex responsibility, and in the end this is a spurious debate.

It’s a pretty safe bet that when Captain Sullenberger was landing his plane in the Hudson, he was not thinking, “Oh this could be so much worse: I could be a neurosurgeon!” Nor would we think that a physician rapidly managing all of the medications, actions, personnel, and supports needed to rally a “crashing” patient is thanking her lucky stars that she’s not an airline pilot. The pilot needs to deliver his or her passengers safely to their destination, and the surgeon must deliver his or her patient to wellness. If one industry can benefit from the experience of the other and reduce errors and thus enhance safety, why wouldn’t it try?

The real question is: How can the responsible parties in any industry or organization best function to protect those who depend on their skills and professional judgment for survival? We can learn from best practices and relevant models wherever and whenever they are developed and then adapt them to different settings in which they may be useful. What is paramount is how an institution—or, in the case of CRM, an entire global industry—learned to change for the better and for the safer and how it has sustained change over time. What did the airline industry do concretely to transform workplace relationships and create a different model of workplace hierarchy and teamwork? How did it confront power and status differentials and learn to help people speak up about safety without fear of reprisal? What strategies and tactics did it utilize, what obstacles did it confront and overcome, and what values and practices did it change—and how? We also believe that, in spite of the differences between healthcare and aviation, the principles of CRM—learning to communicate more effectively, learning to lead a team and work effectively on a team, as well as learning to manage stressful workloads and anticipate a variety of threats to safety, as well as to prevent, manage or contain error—are crucial in healthcare and can and should be taught to and learned by all who care for the sick and vulnerable.

We think you’ll appreciate this argument even more if you consider the complexity of what happens up there while you are down here, or what happens up there to get you back down here safely. Aviation, with all its system complexity, managed to transform a toxic and dysfunctional culture more than thirty years ago. We believe that, as healthcare acknowledges its similarities to where aviation once was, those lessons can be applied just as effectively.

Hey Chief… People are Talking – Are You Listening?

One of the most successful aspects of the aviation safety model is that we, as a culture, have learned how to learn from our mistakes. I was recently invited to speak to a couple of healthcare / patient safety audiences, and one of my hosts warned me that when it came to learning from mistakes, the culture there was very much of a “blame, shame and punish” nature. This was a culture that did not like to admit, much less discuss, its shortcomings.

Audience Polling Keypad

Inspired by Dr. Marty Makary, author of Unaccountable: What Hospitals Won’t Tell You and How Transparency Can Revolutionize Health Care, I have recently started using an audience polling system as part of my “training arsenal”.

This device allows the user to gather data on how respondents truthfully feel about a subject without the encumbrances of – well, for lack of a better word – accountability. Respondents have the cloak of anonymity, so they can answer honestly without fear of their boss knowing how a particular individual feels – unless, of course, the answers are nearly unanimous, in which case the boss has just been sent a very powerful message!

In the above-mentioned presentations, I incorporated this system and directed a number of questions toward audience perceptions of teamwork, communication and transparency. The results were rather surprising to me: at least half of the participants either disagreed or strongly disagreed with statements affirming the effectiveness of their own communication and teamwork culture.

Regarding transparency, in spite of what I had been told about the “blame, shame and punish” culture that existed, respondents appeared to feel “safe” in voicing their concerns.

However, when asked whether they were confident that their safety concerns would actually be acted upon, 60% indicated that they were not. The actual audit question was:

Confidence That My Concerns Will Be Acted Upon

If I report a problem, I am confident that it will get acted upon:

  A. Strongly Disagree
  B. Disagree
  C. Neutral
  D. Agree
  E. Strongly Agree

What this tells me is that the majority of respondents perceived that their concerns pretty much fall on deaf ears. Only a quarter of respondents felt that their concerns would be acted upon. That may not be the reality, but if 60% of your team believes that no action will actually result, is it not fair to assume that many may hesitate to bring up safety concerns because they feel they are just wasting their time?
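For readers curious about the mechanics, here is a minimal sketch of how such anonymous Likert responses might be tallied; the counts are invented, chosen only to mirror the reported 60% / 25% split:

```python
from collections import Counter

# Hypothetical anonymous responses; counts invented to mirror the reported
# results (60% not confident, 25% confident, remainder neutral).
responses = (["Strongly Disagree"] * 30 + ["Disagree"] * 30 +
             ["Neutral"] * 15 + ["Agree"] * 20 + ["Strongly Agree"] * 5)

tally = Counter(responses)
n = len(responses)
not_confident = (tally["Strongly Disagree"] + tally["Disagree"]) / n
confident = (tally["Agree"] + tally["Strongly Agree"]) / n
print(f"Not confident: {not_confident:.0%}, Confident: {confident:.0%}")
# -> Not confident: 60%, Confident: 25%
```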

So to all you “Chiefs” out there – Chief of Medicine, Chief of Nursing, Chief of Surgery, etc. – the lesson here is that it is crucially important not only to seek input from those on the front lines, but also to show those people that you are “listening” by providing feedback and taking appropriate action.

Can we Fly and Talk at the Same Time? – PLEASE?

Lessons from Asiana 214: What All HROs Can Learn

In the July 15, 2013 edition of Aviation Week and Space Technology, John Croft comments that “…over-reliance on automation systems appears to have trumped basic flying skills and crew resource management [CRM] in the crash of Asiana 214…”  Like a child learning to walk, the aviation industry is still finding its way regarding how to manage the phenomenal levels of automation that are now available to us.

Properly managed, automation decreases workload and allows better situation awareness than ever before.  It is a wonderful thing and can make the most complex and critical tasks appear nearly effortless.  Improperly managed, it can lead us into the tragic equivalent of flying into a box canyon, from which there is no escape.

Now consider the human factor: rule number one in any study of human factors is that people make mistakes; rule number two: machines are designed and operated by people – which leads us back to rule number one – and opportunities for failure abound! We know this all too well, yet complacency and “auto-dependency” too often override plain common sense. We must not forget that the brilliant engineers who gave us this amazing technology also gave us, with one little click of a button, the option to actually fly the aircraft ourselves. Pilots must continue to train to and maintain those basic skills, because odds are they will be called upon to use them when least expected; they had better be ready!

Airline cockpits are designed with a crew – a team – in mind. Automation has led us to a concept that has taken prominence in the pilots’ lexicon: the “monitoring” function. Not only is the pilot flying (PF) required to constantly monitor that the aircraft is actually doing what s/he has told it to do – the “fact vs. fantasy” notion – but all other pilots on the flight deck are expected to fulfill the pilot monitoring (PM) function. Unlike in days long past, the PM’s involvement in this process is every bit as crucial as the PF’s. The PM must function almost as if s/he were the PF.

Essential to this process is communication, a fundamental performance indicator of crew resource management: crews must be willing and able to fly AND talk. If any member of the crew – PF or PM – sees something that is outside of their expectations, they need to speak up – to talk – and to feel encouraged and empowered to do so. Taking this one step further, if aircraft performance is outside of established parameters – such as, for example, stabilized approach criteria below 1,000 feet – any member of the crew should be expected to command the appropriate action, up to and including “GO AROUND!”
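As an illustration of that monitoring role, here is a minimal sketch of a stabilized-approach gate; the thresholds are common industry examples (in the style of the Flight Safety Foundation's ALAR criteria), not any particular operator's limits:

```python
# Illustrative stabilized-approach check; thresholds are common industry
# examples, not a specific operator's limits.

def stabilized(below_1000_ft: bool, speed_dev_kt: float,
               sink_rate_fpm: float, configured: bool,
               on_glidepath: bool) -> bool:
    """True if the approach meets these (hypothetical) stabilized criteria."""
    if not below_1000_ft:
        return True  # the gate applies below 1,000 feet
    return (-5 <= speed_dev_kt <= 10 and   # speed within -5/+10 kt of target
            sink_rate_fpm <= 1000 and      # sink rate no greater than 1,000 fpm
            configured and on_glidepath)   # landing config, on path

# Any crew member who sees the criteria busted is expected to call it:
if not stabilized(True, speed_dev_kt=-12, sink_rate_fpm=1300,
                  configured=True, on_glidepath=False):
    print("GO AROUND!")
```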

We have yet to get the whole story on Asiana 214 and should withhold judgment and speculation on the myriad facts that we do not yet know. What we can surmise thus far is that there was an over-reliance on automation, that the automation was mismanaged, and that the crew apparently failed to actually fly the aircraft, failed to monitor its performance and, when the situation had clearly deteriorated, failed to verbalize their observations with the proper level of urgency.

The lessons from this tragic event extend far beyond the crew, the airline, the manufacturer, ATC, etc…  Any high reliability organization (HRO) can and should benefit from the lessons from Asiana 214:

  • Automation is designed by humans and is therefore subject to human failings
  • Operators must trust, but verify, any automation mode
  • Operators must continue to train to basic skills
  • If a situation is outside of expected (or established) parameters, act
  • If something seems wrong, it likely is
  • Communicate: see something? Say something – Please!


See Something? SAY SOMETHING! as seen on Forbes.com

The Crucial Importance of Open and Honest Communication to High Reliability Organizations

Carmine Gallo of Forbes.com notes that the crash of Asiana flight 214 at San Francisco International Airport has put aviation safety back in the spotlight. Mr. Gallo interviewed Beyond the Checklist: What Else Health Care Can Learn from Aviation Teamwork and Safety co-author Suzanne Gordon in a recently published article, ‘If You See Something, Say Something’: How Communication Training Keeps You Safer In the Sky.

The article points out the tremendous impact that Crew Resource Management (CRM) training has had on the commercial aviation industry, particularly as it pertains to opening lines of communication between all members of the crew.  Read Mr. Gallo’s article and draw your own conclusions as to how greater and better communications amongst team members could impact any HRO.

Breaking News: Nothing New to Report!

In its May 2013 issue, Consumer Reports (CR) tells us that “Safety still lags in U.S. hospitals – our ratings show most hospitals need to improve.” The good news in the article is that more hospitals are reporting their errors (such as hospital-acquired infections, readmission and complication rates, and overuse of certain procedures).

Without question, disclosing errors is essential to building this system into the true high reliability organization (HRO) that it can be.  But disclosure by itself is like one hand clapping: healthcare needs to build a system that learns from its mistakes.  Only then can it achieve the reliability that consumers expect – and deserve.

The Aviation Safety Model (ASM) that evolved over the past forty years has made commercial aviation an incredibly safe HRO. The reason: a complete preoccupation with failure. Rather than prescribing a certain acceptable level of failure – an acceptable “error rate” – commercial aviation encourages, and typically requires, reporting of everything from anomalies to accidents. Errors are assessed and analyzed, and training is constantly developed and modified to respond to threats as they emerge.

Our book, Beyond the Checklist: What Else Health Care Can Learn from Aviation Teamwork and Safety, describes how the ASM works in aviation and how these same principles can – and must – be applied to healthcare. There are lessons here that apply not only to aviation or healthcare, but to any high-risk organization that strives to become a true high reliability organization.

The Consumer Reports article (and the associated CBS interview with CR’s Dr. John Santa) is useful, and such reading should be required for anyone who will be going into a hospital – which is just about all of us at one time or another. Unfortunately, these reports are becoming repetitious: there is really very little new to report! We are still accidentally killing patients at the equivalent rate of crashing a full 747 on every business day of the year (a 747 carries roughly 400 passengers, so over roughly 250 business days that works out to on the order of 100,000 deaths a year, in line with widely cited estimates of preventable hospital deaths), a rate that would shut any other industry down overnight.

But real change will not occur until a commitment is made from the top down. The boardroom needs to commit a little less to simply running a profit center and a little more to actually running a care center. Both goals can be achieved. The airlines did it. So can you. Start with holding every CEO accountable for the errors that occur on his or her watch. When that happens, a sea change will occur that will finally move healthcare from high risk to high reliability.

For more information about Beyond the Checklist, watch the book trailer or buy a copy online!

Meet Risk Management Consultant Patrick Mendenhall

Recent Interview with Blogger Marguerite Giguere

What is your background in aviation?
I started flying when I was 16 and got my private license at 19. I majored in Aerospace Engineering at the University of Washington, worked for Boeing in flight testing, then went into the Navy to fly fighters. I was on active duty for 8 years. For my last three years on active duty, I worked in operational flight test for the Navy, where I did a lot of human factors integration research. That definitely piqued my interest in the human factors aspect of aviation: why people make decisions, and what’s intuitive and what’s not.

And you’re currently a pilot with a major airline and a partner in Crew Resource Management LLC?
I work for a major international carrier and I’ve flown mostly international routes since being hired in 1989 – almost my entire career. I joined the partnership for CRM LLC in 2006.

Where did you start to really see the importance of Crew Resource Management?
When I first started with the airline, they were all three-person crews. It was interesting to see what remarkably different results leaders got depending on their approach toward their crews. In fact, I started noticing different styles, to the point that a lot of people (I’m not alone in this) actually bid to avoid flying with certain people because they didn’t like them or, in some cases, felt unsafe.

Was that happening because it was a less pleasant working experience or because it was less safe?
Both. Generally, they go hand in hand. That’s the part of CRM that is so interesting: it really does make a difference. Leadership is so important. In the last 30 years, the aviation industry has started to realize, “Oh my gosh. The way this guy treats his crew really makes a difference in how they respond during an emergency.”

What is the advantage of the training curriculum offered by CRM LLC via TrainingPort.net?
All the courses are simple; they get right down to the core of what you need to know. Scott, the founder of TrainingPort.net, consulted with human factors experts and educators to determine how long people can sit in front of a computer and maintain their focus.

This is why the modules are 15-20 minutes long?
Yes. If you want to sit down and do three or four units, that’s fine; it’s your choice. So it’s very achievable. All the courses are simple; they get right down to the core of what you need to know. You’re not bored by droning on and on about something or by reading a ton of text.

What’s been the feedback?
It’s been excellent! Our first attempt at placing a CRM topic online was with Threat and Error Management. People loved it. They loved that it was succinct and got right to the point, and then you move on. We got direct feedback from users asking, “Do you have any more?”
That’s what inspired us to move on and create the entire library of courses. It made us believe that teaching Crew Resource Management online was a viable concept. I still believe that when a flight department is doing training, it should do a certain amount of it as a group. When they complete a course, they should still sit down and say, “Let’s talk about this.”
Learn more about the CRM LLC courses here, and find out more about our new courses in Maintenance Resource Management (MRM) and Single Pilot Resource Management (SRM)!