AI in cybersecurity shifts the burden from alerts to explanations, new UB research finds

Artificial intelligence is often framed as a way to cut through noise in cybersecurity operations — fewer alerts, faster decisions, clearer priorities.

New research from the University at Buffalo School of Management suggests the reality is more nuanced.

Raghvendra Singh, a PhD candidate in management, examines how cybersecurity professionals work with AI-enabled tools in environments defined by “infobesity,” or extreme information overload. His findings show that while AI can reduce the volume of alerts, it introduces a different kind of strain, one centered on interpreting complex explanations, confidence scores and evidence trails.

The result is not less work, but different work.

Singh's study draws on 32 interviews with professionals, including threat intelligence analysts, cybersecurity analysts and chief information security officers, working in industries such as finance, health care and technology. It focuses on how practitioners decide whether AI-generated insights are convincing enough to act on, and defensible enough to justify later.

That distinction matters.

In practice, analysts are no longer sorting through hundreds of low-level alerts. Instead, they are evaluating a smaller number of AI-curated cases that arrive with dense layers of supporting information. Singh identifies this shift as “explanation load,” a concept that reframes how organizations think about AI efficiency.

Explainability, often positioned as a solution, can become its own bottleneck when the information is difficult to parse or verify.

The research also finds that trust in AI systems is not fixed. It evolves in response to recent outcomes, workplace dynamics and organizational messaging. Confidence tends to rise after visible successes and decline following errors or high-profile failures, creating what Singh describes as cycles of deference and skepticism.

These fluctuations influence how teams engage with AI recommendations, particularly in high-stakes situations.

When human judgment and AI output conflict, organizations rely on structured processes to reach decisions. These include second opinions, dual approvals, escalation pathways and formal documentation when overriding AI recommendations. Such practices help convert disagreement into coordinated action while maintaining accountability.

A key factor in these decisions is defensibility.

Professionals are not solely asking whether an AI system is correct. They are also considering whether a decision can withstand scrutiny from regulators, legal teams or internal audits. This emphasis shapes when AI recommendations are accepted and when they are set aside.

Singh’s work contributes to a broader effort at the School of Management to examine how AI functions in real organizational settings. Rather than focusing only on model performance, this research highlights the human and institutional dynamics that influence adoption and use.

As AI tools become more common across industries, these insights carry practical implications.

Systems that generate accurate results may still fall short if their reasoning is unclear or overly complex. Effective implementation depends not only on technical capability, but also on how well outputs can be interpreted, validated and defended by the people responsible for acting on them.

The study ultimately underscores a subtle but important shift.

AI is not eliminating cognitive effort in cybersecurity operations. It is redistributing that effort, shifting it from scanning large volumes of data to evaluating the quality and credibility of machine-generated explanations.

For organizations investing in AI, that shift may define the difference between adoption and hesitation.

This story was written by AI and edited by a member of the UB School of Management Marketing and Communications Office.