IMPACT OF CYBERSECURITY MATURITY ON THE EFFECTIVENESS OF AI-ENABLED INTERNAL AUDIT FUNCTIONS
Abstract
AI-enabled internal audit is increasingly deployed to expand risk coverage, accelerate audit cycles, and enable continuous assurance through techniques such as anomaly detection, process mining, natural language processing, predictive risk scoring, and automated control testing. However, the effectiveness of these approaches is contingent on the cybersecurity conditions that govern the integrity, availability, and observability of the underlying data and systems. This article develops and substantiates a theory-driven conceptual model explaining why and how cybersecurity maturity determines whether AI-enabled internal audit produces reliable assurance or false confidence. Drawing on dynamic capabilities theory, the article defines cybersecurity maturity as a multi-dimensional capability aligned with established standards and frameworks, encompassing governance, identity and access management, data integrity, logging and telemetry, incident response, and third-party risk management. The model proposes (i) a direct positive effect of cybersecurity maturity on internal audit effectiveness, (ii) mediation through data governance maturity and security telemetry quality, (iii) moderation by the maturity of AI governance and model risk management, and (iv) explicit non-linear threshold effects in which AI-enabled audit effectiveness increases sharply only after minimum cybersecurity maturity conditions are met. The article further identifies critical failure modes—such as data corruption, log tampering, identity compromise, model drift, automation bias, and adversarial manipulation—and specifies concrete technical, governance, and audit control mechanisms to mitigate these risks. A maturity-stage application matrix is provided to guide Chief Audit Executives, CISOs, and AI governance leaders in sequencing AI-enabled audit adoption according to cybersecurity capability readiness.
The paper advances research on audit analytics and cybersecurity governance by formalizing cybersecurity maturity as a foundational antecedent of trustworthy AI-enabled assurance and by clarifying when AI audit systems strengthen assurance and when they institutionalize misleading signals.
Keywords: Cybersecurity Maturity; AI-Enabled Internal Audit; Audit Effectiveness; Continuous Auditing; Security Telemetry; Data Governance; Model Risk Management; AI Governance; Dynamic Capabilities; Anomaly Detection; Process Mining; Non-Linear Effects; Assurance Reliability; Automation Bias; Adversarial Machine Learning