Warning or Liability? Cybersecurity Alerts as Legal Speech Acts in Risk and Forensic Contexts

Authors

  • C. Matt Graham, Department of Information Systems & Security Management, Maine Business School, University of Maine, USA
  • Nguyen Lam, Department of Information Systems & Security Management, Maine Business School, University of Maine, USA

DOI:

https://doi.org/10.65879/3070-5789.2025.01.06

Keywords:

Cybersecurity alerts, Legal speech acts, Duty to warn, Affective computing, Trust, Risk perception, AI accountability

Abstract

Problem: Cybersecurity alerts are often treated as purely technical signals. Yet they also operate as communicative acts with emotional weight, shaping user behavior and potentially invoking ethical and legal responsibilities. As automated systems increasingly deliver these alerts, the stakes of how risk is communicated grow sharper.

Design/Methodology: We examined 10,000 user responses to AI-generated cybersecurity alerts using the CyberMetric-10000 dataset (collected from Reddit via the Pushshift API). Sentiment was classified with VADER, and emotional reactions were mapped using the NRC Emotion Lexicon. Interpretation drew on affective computing, human–computer interaction (HCI), and legal theory.
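The pipeline described above pairs VADER polarity scoring with NRC word-emotion lookups. As a minimal sketch of the NRC-style mapping step — using a tiny, hypothetical subset of word–emotion associations for illustration, not the full NRC Emotion Lexicon — the tallying logic might look like:

```python
from collections import Counter

# Tiny illustrative subset of NRC-style word -> emotion associations.
# The actual NRC Emotion Lexicon (Mohammad & Turney, 2013) covers ~14,000
# English words across eight emotions plus two polarities.
NRC_SAMPLE = {
    "angry":   {"anger"},
    "scam":    {"anger", "fear"},
    "breach":  {"fear"},
    "warning": {"fear", "anticipation"},
    "safe":    {"trust"},
}

def emotion_profile(text: str) -> Counter:
    """Tally NRC-style emotion associations for each word in an alert response."""
    counts: Counter = Counter()
    for raw in text.lower().split():
        word = raw.strip(".,!?")          # crude tokenization for the sketch
        for emotion in NRC_SAMPLE.get(word, ()):
            counts[emotion] += 1
    return counts

profile = emotion_profile("Another breach warning? This scam makes me angry!")
# profile tallies fear, anger, and anticipation hits for this response
```

In the study's actual pipeline, each response would also be scored for overall polarity (e.g., via `SentimentIntensityAnalyzer.polarity_scores` from the `vaderSentiment` package) before the emotion mapping; the sketch above shows only the lexicon-lookup stage.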

Key Findings: Responses revealed strong emotions (anger, fear, frustration), even to neutral alerts. These reactions shaped perceptions of trust, threat severity, and system credibility. Poorly designed alerts often failed to reassure, instead producing disengagement or distress.

Contributions: This study reframes cybersecurity alerts as digital legal speech acts, with implications under doctrines such as the duty to warn. It argues for systems that are not only technically accurate but also emotionally intelligent and legally sound. By foregrounding emotion as central to digital risk communication, the work bridges law, technology, and human experience.

References

[1] Covarrubias JZL. Effective communication as a pillar of cybersecurity: Managing incidents and crises in the digital era. Journal of Risk Analysis and Crisis Response 2025; 15(2): 34-34.

https://doi.org/10.54560/jracr.v15i2.564

[2] Bates DR, Jackson BD. New theories of product liability develop in the age of AI and increased automation. Mitchell Williams Law Blog 2021.

[3] Tschider CA. Locking down "reasonable" cybersecurity duty. Yale Law & Policy Review 2022; 41: 75.

https://doi.org/10.2139/ssrn.4038595

[4] Al-Dulaimi AOM, Mohammed MAAW. Legal responsibility for errors caused by artificial intelligence (AI) in the public sector. International Journal of Law and Management 2025.

https://doi.org/10.1108/IJLMA-08-2024-0295

[5] Conrad CD, Aziz JR, Henneberry JM, Newman AJ. Do emotions influence safe browsing? Toward an electroencephalography marker of affective responses to cybersecurity notifications. Frontiers in Neuroscience 2022; 16: 922960.

https://doi.org/10.3389/fnins.2022.922960

[6] Van Schaik P, Renaud K, Wilson C, Jansen J, Onibokun J. Risk as affect: The affect heuristic in cybersecurity. Computers & Security 2020; 90: 101651.

https://doi.org/10.1016/j.cose.2019.101651

[7] Stacey P, Taylor R, Olowosule O, Spanaki K. Emotional reactions and coping responses of employees to a cyber-attack: A case study. International Journal of Information Management 2021; 58: 102298.

https://doi.org/10.1016/j.ijinfomgt.2020.102298

[8] Budimir S, Fontaine JRJ, Huijts NMA, Haans A, Loukas G, Roesch EB. Emotional reactions to cybersecurity breach situations: Scenario-based survey study. Journal of Medical Internet Research 2021; 23(5): e24879.

https://doi.org/10.2196/24879

[9] Paudel R, Al-Ameen MN. Priming through persuasion: Towards secure password behavior. Proceedings of the ACM on Human-Computer Interaction 2024; 8(CSCW1): 1-27.

https://doi.org/10.1145/3637387

[10] Schaltegger, Ambuehl, Bosshart, Ebert. Human behavior in cybersecurity: An opportunity for risk research. Journal of Risk Research 2025; 28(8): 843-854.

https://doi.org/10.1080/13669877.2025.2539109

[11] Gerber N, Zimmermann V, von Preuschen A, Renaud K. Unpacking the social and emotional dimensions of security and privacy user engagement. Proceedings of the 21st Symposium on Usable Privacy and Security (SOUPS) 2025.

[12] Wiemken M, Hildebrandt K, Jeworutzki A, Putzar L. Emotional manipulation in phishing emails: Affective responses and human classification errors in a simulated email environment. Proceedings of PETRA 2025.

https://doi.org/10.1145/3733155.3736796

[13] Pigola A, de Souza Meirelles F. Unraveling trust management in cybersecurity: insights from a systematic literature review. Information Technology and Management 2024; 1-23.

https://doi.org/10.1007/s10799-024-00438-x

[14] Moallem A. Human behavior in cybersecurity privacy and trust. Human-Computer Interaction in Intelligent Environments 2024; 77-107.

https://doi.org/10.1201/9781003490685-3

[15] Von der Linde M, Göcke M, Hirschfeld G, Thielsch MT. Check or reject? Trust and motivation development in app-based warning systems. Safety Science 2025; 185: 106724.

https://doi.org/10.1016/j.ssci.2024.106724

[16] Reeves A, Delfabbro P, Calic D. Encouraging employee engagement with cybersecurity: How to tackle cyber fatigue. SAGE Open 2021; 11(1): 21582440211000049.

https://doi.org/10.1177/21582440211000049

[17] Thomson RH, Cassenti DN, Hawkins T. Too much of a good thing: How varying levels of automation impact user performance in a simulated intrusion detection task. Computers in Human Behavior Reports 2024; 16: 100511.

https://doi.org/10.1016/j.chbr.2024.100511

[18] Taddeo M, McCutcheon T, Floridi L. Trusting artificial intelligence in cybersecurity is a double-edged sword. In Ethics, governance, and policies in artificial intelligence 2021; 289-297.

https://doi.org/10.1007/978-3-030-81907-1_15

[19] Weinbaum C, Knopp BM, Kim S, Shokh Y. Options for strengthening all-source intelligence: Substantive change is within reach. RAND Corporation 2022.

[20] Tilbury J, Flowerday S. Humans and automation: Augmenting security operation centers. Journal of Cybersecurity and Privacy 2024; 4(3): 388-409.

https://doi.org/10.3390/jcp4030020

[21] Zhang B, Dafoe A, Carignan D. Transparency and accountability in AI systems: Safeguarding well-being in the age of algorithmic decision-making. Frontiers in Artificial Intelligence 2024; 7: 1142134.

[22] Mohammad SM, Turney PD. NRC emotion lexicon. National Research Council, Canada 2013; 2: 234.

[23] Zhang XA, Borden J. How to communicate cyber-risk? An examination of behavioral recommendations in cybersecurity crises. Journal of Risk Research 2020; 23(10): 1336-1352.

https://doi.org/10.1080/13669877.2019.1646315

[24] Slota SC, Fleischmann KR, Greenberg S, Verma N, Cummings B, Li L, Shenefiel C. Many hands make many fingers to point: Challenges in creating accountable AI. AI & Society 2023; 38(4): 1287-1299.

https://doi.org/10.1007/s00146-021-01302-0

[25] Schoenherr JR, Thomson R. When AI fails, who do we blame? Attributing responsibility in human-AI interactions. IEEE Transactions on Technology and Society 2024; 5(1): 56-66.

https://doi.org/10.1109/TTS.2024.3370095

[26] Faheem MA, Kakolu S, Aslam M. The role of explainable AI in cybersecurity: Improving analyst trust in automated threat assessment systems. Iconic Research and Engineering Journals 2022; 6(4): 173-182.

[27] Waller SW, Brady JG, Acosta RJ, Fair J, Morse J. Consumer protection in the United States: an overview. European Journal of Consumer Law 2011.

[28] Baez HB III. Tort law in the United States 2023.

[29] Freitas MDC, Mira da Silva M. GDPR Compliance in SMEs: There is much to be done. Journal of Information Systems Engineering & Management 2018; 3(4): 30.

https://doi.org/10.20897/jisem/3941

[30] Folio JC III, Ross A, Wolfe I, Weigel NA. Seeking harmony: CISA’s proposed cyber reporting rules for critical infrastructure are an ambitious work in progress. Cyber Security: A Peer-Reviewed Journal 2025; 8(3): 255-263.

https://doi.org/10.69554/JHEV8231

[31] Habbal A, Ali MK, Abuzaraida MA. Artificial Intelligence Trust, risk and security management (AI trism): Frameworks, applications, challenges and future research directions. Expert Systems with Applications 2024; 240: 122442.

https://doi.org/10.1016/j.eswa.2023.122442

[32] Elendu C, Omeludike EK, Oloyede PO, Obidigbo BT, Omeludike JC. Legal implications for clinicians in cybersecurity incidents: A review. Medicine 2024; 103(39): e39887.

https://doi.org/10.1097/MD.0000000000039887

[33] Liu C, Babar MA. Corporate cybersecurity risk and data breaches: A systematic review of empirical research. Australian Journal of Management 2024; 03128962241293658.

https://doi.org/10.1177/03128962241293658

Published

2025-12-08

Section

Articles