Governance Insights 2025: A Preview of 2025

As we discuss in Managing AI Technologies below, the CSA recently expressed its expectation that “[g]overnance and risk management practices should be paramount” for issuers that are using or developing AI technologies.

RISK OVERSIGHT FUNDAMENTALS

In light of the foregoing, we set out below some key takeaways for boards and management to consider for 2025 and beyond.

— Boards and management should actively identify legal compliance and other mission-critical risks to their businesses, leveraging their internal expertise and outside advisers to help inform their decision-making and establish processes to manage and assess risk. Specialized board committees can play an important role in fulfilling this oversight function.

— Directors and officers should ensure that adequate reporting and information systems are in place and that these systems cover significant risks specific to the business and industry in which the corporation operates, as well as more obvious areas of legal risk and financial reporting compliance.

— Simply implementing reporting and information systems is not enough; once in place, they need to be actively monitored. Both directors and management should thoughtfully engage with the data and information produced by these reporting systems. Directors and officers cannot simply ignore “red flags” of non-compliance, particularly in areas relating to legal compliance and key operational matters.

Managing AI Technologies

USE AND OVERSIGHT OF AI SYSTEMS

In December 2024, the CSA released Staff Notice and Consultation 11-348 (Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets), addressing the use of AI systems by issuers and in capital markets more generally. The CSA expects that issuers will adopt or adapt existing governance and risk oversight procedures (including those related to accountability and risk management) to address the incorporation of AI systems into their operations (“Policies and procedures should be designed in a way that accounts for the unique features of AI systems and the risks they pose.”). The CSA also expects that issuers will utilize or incorporate systems whose outputs are “explainable” (“The use of AI systems that rely on certain types of AI techniques with lower degrees of explainability, also referred to as ‘black boxes’, may challenge the concepts of transparency, accountability, record keeping, and auditability.”).
