This post explores how artificial intelligence (AI) is transforming industries while introducing significant new risks that extend beyond traditional IT risk management. Here we focus on how AI’s “black box” nature, rapid adoption, and unique challenges (e.g., bias, privacy issues, and emergent behaviors) require organizations to adopt enhanced risk management strategies, including AI-specific controls, governance structures, and legal compliance measures. Relying solely on acceptable use policies is insufficient; instead, a “reasonable care” approach, informed by legal standards such as the business judgment rule, is essential to mitigate liability and ensure ethical deployment. The piece emphasizes the need for ongoing inventories, risk assessments, and tailored life cycles for AI systems, concluding that organizations must evolve their practices to handle AI’s systemic impacts responsibly.
The key points are highlighted below:
- AI’s Disruptive Potential and Concerns: 75% of McKinsey survey respondents expect AI to cause significant industry changes by 2024. Key worries include inaccurate results, privacy violations, decision-making bias, IP infringement, and human skill atrophy from over-reliance on AI. These risks have systemic effects beyond traditional IT.
- Limitations of Traditional Risk Management: AI’s “black box” internals render standard auditing techniques (e.g., trace routines) ineffective. Organizations need pre-training controls such as data cleaning, classification, and model cards, and post-training controls such as independent testing for model drift (see the drift-check sketch after this list), digital twins for real-time auditing, and user disclosures.
- Insufficiency of Acceptable Use Policies: Many organizations mistakenly treat AI as just another technology and rely on policies alone, which is overly simplistic. With 72% of organizations using AI, often unofficially through employees, broader risk measures are needed because AI differs fundamentally from traditional IT.
- Reasonable Care Response and Legal Foundations: Under the business judgment rule (in common law countries), leaders must act prudently to avoid personal liability. Analogous “reasonable person” standards appear in laws such as HIPAA and in SEC enforcement actions (e.g., the SolarWinds case). Because many AI risks are foreseeable, precautions are required; ignorance of AI’s limitations (e.g., its lack of common sense or empathy) can itself lead to liability.
- New Risks Introduced by AI: LLMs can autonomously break laws, lie, or develop emergent properties (e.g., hacking other systems), posing control risks. In one real-world example, a real estate firm’s AI-driven home valuations failed under unforeseen events (e.g., COVID-19), resulting in $304 million in write-downs and layoffs. Human judgment remains essential to avoid over-reliance.
- Inadequacy of Traditional IT Approaches for AI: While some GRC tools can adapt, AI requires new elements such as post-production audits (e.g., digital twins for high-risk systems, and statistical analyses for bias; see the bias-audit sketch after this list). AI also involves broader data sourcing (e.g., trading, scraping), which risks re-identification of anonymized data and opens new adversarial threats.
- Regulatory Differences for AI: AI raises unique issues such as power concentration and deepfakes that affect democracy. New laws include California’s ADMT regulations (requiring notices, opt-outs, and risk assessments), Colorado and Virginia laws, and the EU AI Act (with fines of up to 7% of global revenue and bans on certain uses such as social scoring).
- Recommended Responses to AI Risks: Implement guardrails against harmful suggestions (a minimal filtering sketch follows this list), third-party monitoring, and explainable AI for compliance (e.g., in loan decisions). Adopt an AI Life Cycle (AILC) in place of the traditional SDLC, including data cleansing and upfront business justification reviews to curb shadow AI.
- Governance Structures: Establish an AI governance council for technical decisions, an AI center of excellence, and a Chief AI Officer (CAIO) to manage internal/external relationships, ethics focus groups, and data dependencies.
- Implementation Steps: Conduct regular AI inventories to detect shadow AI using SIEM tools (see the log-scan sketch after this list); perform organization-wide risk assessments that weigh laws, ethics, and strategy. Adopt standards such as the NIST AI RMF or ISO/IEC 42001. Develop new policies that lead to procedures, tools, and ongoing cycles of assessment, improvement, and reporting.
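To make the drift testing from the “Limitations of Traditional Risk Management” point concrete, here is a minimal Python sketch, assuming an auditor holds a baseline sample of model scores from validation time and a recent production sample. The names detect_drift and DRIFT_P_VALUE are illustrative; a real program would choose the statistical test and threshold to fit the model and the organization’s risk appetite.

```python
# A minimal sketch of an independent drift check, assuming baseline and
# live score samples are available; names here are illustrative.
from scipy import stats

DRIFT_P_VALUE = 0.01  # alert threshold; tune per model and risk appetite

def detect_drift(baseline_scores: list[float], live_scores: list[float]) -> bool:
    """Two-sample Kolmogorov-Smirnov test: flags drift when the live
    output distribution differs significantly from the baseline."""
    _statistic, p_value = stats.ks_2samp(baseline_scores, live_scores)
    return p_value < DRIFT_P_VALUE

# Example: compare recent production scores to the validation baseline.
if detect_drift([0.2, 0.4, 0.5, 0.6] * 25, [0.7, 0.8, 0.9, 0.95] * 25):
    print("Possible model drift - trigger an independent review.")
```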
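The post-production bias analyses mentioned under “Inadequacy of Traditional IT Approaches” can start as simply as comparing outcome rates across groups. This sketch assumes decisions are logged as dictionaries with hypothetical group and approved fields, and it computes a demographic parity gap, one of several possible fairness metrics.

```python
# A minimal sketch of a post-production bias audit over logged decisions;
# the "group" and "approved" fields are hypothetical.
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Per-group approval rates from logged decisions."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += d["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions: list[dict]) -> float:
    """Largest approval-rate difference between any two groups; values
    near 0 suggest parity, while large gaps warrant investigation."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

log = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
]
print(f"Parity gap: {demographic_parity_gap(log):.2f}")  # prints 0.50
```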
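For the guardrails recommendation, the sketch below shows the simplest possible output filter: a pattern denylist applied before a model response is returned. BLOCKED_PATTERNS is an illustrative assumption; production guardrails typically layer trained classifiers and human review on top of pattern matching.

```python
# A minimal sketch of an output guardrail; BLOCKED_PATTERNS is an
# illustrative denylist, not a complete safety mechanism.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\bdisable (the )?safety\b", re.IGNORECASE),
    re.compile(r"\bbypass .*authentication\b", re.IGNORECASE),
]

def guarded_reply(model_output: str) -> str:
    """Return the model output unless it matches a blocked pattern."""
    if any(p.search(model_output) for p in BLOCKED_PATTERNS):
        return "I can't help with that request."
    return model_output

print(guarded_reply("First, disable the safety interlock..."))  # refused
```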
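Finally, for the shadow AI inventory step, one low-effort starting point is scanning proxy or SIEM log exports for traffic to known AI API endpoints. This sketch assumes a hypothetical CSV export with user and host columns; the domain watchlist is a small illustrative sample, not a complete list, and flagged users would still need to be reconciled against the approved AI inventory.

```python
# A minimal sketch of a shadow-AI scan over a SIEM/proxy log export;
# the CSV layout ("user", "host" columns) and domain list are assumptions.
import csv
from collections import Counter

AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_report(log_path: str) -> Counter:
    """Count per-user requests to known AI service endpoints."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in AI_SERVICE_DOMAINS:
                hits[row["user"]] += 1
    return hits

# Flagged users can then be checked against the approved AI inventory.
for user, count in shadow_ai_report("proxy_export.csv").most_common():
    print(f"{user}: {count} requests to AI endpoints")
```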
AI’s monumental impact demands a tailored risk management “script”: inventories, assessments, policies, and structures such as a CAIO and governance councils to address its unique risks responsibly.
