Experimental flight test routinely manages risk within complex socio-technical systems. Because the flight test system already encompasses the crew, the potential for catastrophic consequences precludes the typical mitigations of robustness and resilience. This leaves flight test professionals to manage risk using a framework of tools guided by cultural lore.
Observation of the professional flight test community identified that its approach to managing risk in complex systems was unique. With a few notable, tragic exceptions, that approach has proven clearly effective in circumstances that would otherwise be fatal.
Ethnographic research into this flight test risk management framework identified statistical and non-statistical tools in use in parallel. Flight test crews kept both at hand, though they did not know why the different tools were effective; this blanket approach assured effectiveness at the expense of efficiency.
The research identified that there is no grand theory of risk and that risk management is grounded in economic theory. Examination of that theory shows Friedman's Utility Theory and Probability Theory being read across into risk management without their original context being maintained. For complexity, the flight test community instead draws on the alternative theory of Knight and Keynes, adopting non-statistical approaches that accept uncertainty rather than assigning it a subjective probability.
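As a brief illustration of this distinction (the notation below is introduced here for exposition and is not drawn from the flight test material): expected utility theory ranks an action $a$ by

\[
\mathbb{E}\!\left[U(a)\right] \;=\; \sum_{i} p_i \, U\!\left(x_i(a)\right),
\]

where the $x_i(a)$ are the possible outcomes of the action and the $p_i$ their probabilities. Statistical risk tools presuppose that the $p_i$ can be assigned, if only subjectively; Knightian uncertainty describes precisely those situations in which no defensible $p_i$ exists, which is why non-statistical approaches are required there.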
Complicated systems respond to statistical approaches, but complexity denies those same tools. Complex systems require risk management approaches that accommodate emergence, dynamic configurations, and non-deterministic system performance. Understanding why the flight test risk management framework is effective provides a case study that the wider industry dealing with complex systems can emulate.