Introduction
This essay examines the challenges faced by a national cyber security operations centre (SOC) in integrating an AI-based decision support system with human analysts and incident commanders. Drawing on Endsley’s (2023) research on situation awareness (SA) and shared mental models in human–AI teams, the analysis focuses on breakdowns in communication, trust, and coordination within the SOC. Key issues include incompatible understandings of priorities, automation surprises, and conflicting interpretations of data. The essay explores how gaps in individual and team SA contribute to these problems, proposes solutions to foster shared mental models, and critically evaluates their potential benefits and limitations. Finally, it outlines methods to assess whether shared SA and team performance have improved.
Breakdowns in Situation Awareness and Shared Mental Models
Situation awareness, defined as the perception of elements in the environment, the comprehension of their meaning, and the projection of their status into the near future, is critical in high-stakes environments like SOCs (Endsley, 2023). In the case described, analysts’ varied responses to AI risk scores, with some over-relying on them and others ignoring them outright, indicate a lack of shared understanding of the AI’s role. Incident commanders’ inability to discern “what the AI thinks” points to a transparency deficit, while the AI’s recommendations often clash with existing playbooks, revealing misaligned goals. Endsley (2023) argues that such discrepancies stem from weak shared mental models: the collective knowledge structures that enable team members, human and AI alike, to anticipate and coordinate one another’s actions. Without them, automation surprises and role confusion of the kind reported in the SOC debriefs undermine response effectiveness during cyber incidents.
Proposed Solutions to Enhance Shared Mental Models
To address these issues, several concrete changes are recommended. First, interface design should incorporate explainable AI features, such as visual dashboards displaying the AI’s reasoning for risk scores and recommendations. Endsley (2023) emphasises transparency as key to building trust and SA. Second, team roles and procedures must be redefined to clarify human–AI responsibilities, ensuring commanders retain final decision-making authority while leveraging AI insights. Third, training exercises simulating dynamic incidents should be implemented to align human and AI mental models. These drills can help analysts interpret AI outputs consistently and allow commanders to anticipate AI behaviour, fostering a unified approach to incident response.
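To make the first recommendation more concrete, the sketch below illustrates one way an AI risk assessment could be packaged together with its own rationale, so that a dashboard shows the evidence behind a score rather than the number alone. This is a minimal illustration only: the structure and names (RiskExplanation, contributing_factors, suggested_playbook) are hypothetical and come from neither Endsley (2023) nor any specific SOC platform.

    from dataclasses import dataclass, field

    @dataclass
    class RiskExplanation:
        """Hypothetical container pairing an AI risk score with its rationale."""
        alert_id: str
        score: float  # 0.0 (benign) to 1.0 (critical)
        contributing_factors: list[tuple[str, float]] = field(default_factory=list)
        suggested_playbook: str | None = None  # playbook the AI recommends, if any

        def render(self) -> str:
            """Produce an analyst-readable summary for the dashboard."""
            lines = [f"Alert {self.alert_id}: risk score {self.score:.2f}"]
            # List the strongest drivers of the score first.
            for factor, weight in sorted(self.contributing_factors,
                                         key=lambda fw: -abs(fw[1])):
                lines.append(f"  - {factor} (contribution {weight:+.2f})")
            if self.suggested_playbook:
                lines.append(f"  Suggested playbook: {self.suggested_playbook}")
            return "\n".join(lines)

    # Invented example: a score driven mainly by suspected lateral movement.
    print(RiskExplanation(
        alert_id="INC-4120",
        score=0.87,
        contributing_factors=[("lateral movement detected", 0.45),
                              ("privileged account used", 0.30),
                              ("off-hours activity", 0.12)],
        suggested_playbook="contain-host",
    ).render())

Presenting the contributing factors alongside the score gives analysts a basis for calibrated trust: they can see when the AI’s reasoning matches their own reading of the incident and when it diverges.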
Critical Evaluation of Proposals
The proposed changes offer clear benefits, notably improved coordination and fewer automation surprises through greater transparency and shared training. However, they have limitations. Detailed explanation interfaces may overwhelm analysts with additional information to interpret, while redefined roles could initially meet resistance where existing practices are entrenched. Unintended consequences are also possible, such as reduced human initiative if AI explanations become overly prescriptive. Furthermore, training costs and the time required to adapt to new procedures may strain resources. To mitigate these risks, gradual implementation with regular feedback loops is essential.
Assessing Improvements in Team Performance
To evaluate whether shared SA and performance have improved, empirical methods like pre- and post-intervention assessments should be used. Metrics such as response time, error rates in decision-making, and team communication quality during simulated incidents can provide quantitative data. Additionally, qualitative feedback from debriefs can reveal perceptions of trust and clarity in human–AI interactions. Longitudinal studies tracking these indicators over months would offer insights into sustained improvements or emerging issues.
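As a concrete illustration of the pre/post comparison, the sketch below applies a paired-samples t-test to response times from the same set of simulated incidents run before and after the intervention. The figures are invented for illustration, and in practice the choice of test would depend on sample size and distribution.

    from statistics import mean
    from scipy.stats import ttest_rel  # paired-samples t-test

    # Invented response times (minutes) for the same ten simulated incident
    # scenarios, run once before and once after the intervention.
    pre_times = [42, 55, 38, 61, 47, 50, 44, 58, 39, 52]
    post_times = [35, 48, 36, 50, 41, 45, 40, 49, 37, 44]

    t_stat, p_value = ttest_rel(pre_times, post_times)
    print(f"Mean response time: {mean(pre_times):.1f} -> {mean(post_times):.1f} minutes")
    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("The reduction is statistically significant at the 5% level.")

Similar comparisons could be run for decision error rates and communication-quality ratings, with debrief feedback used to interpret why any given metric moved.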
Conclusion
The SOC’s challenges stem from misaligned mental models and gaps in SA, leading to coordination failures and automation surprises. By enhancing interface transparency, clarifying roles, and implementing targeted training, shared understanding within the human–AI team can be improved. While these solutions promise better performance, they carry risks of information overload and adaptation challenges, necessitating careful monitoring. Empirical assessments combining quantitative metrics and qualitative feedback are crucial to validate progress. Ultimately, fostering shared mental models is essential for effective human–AI collaboration in dynamic cyber security environments.
References
- Endsley, M. R. (2023). Supporting human–AI teams: Transparency, explainability, and situation awareness. Computers in Human Behavior, 140, 107574.

