Using AI in Your Firm: What Law 25 and the AMF Require You to Know
Artificial intelligence is already present in your firm, even if you haven't consciously adopted it. Your CRM uses algorithms to segment your client base. Your compliance tool analyzes transactions to detect anomalies. Your investment platform generates recommendations based on predictive models. Your assistant may be using a writing tool to draft emails. Each of these uses triggers obligations under Law 25 and falls within the AMF's expectations for responsible AI.
This is no longer a theoretical topic reserved for large institutions. It's a daily reality for firms of all sizes, one that demands a minimum understanding of the regulatory framework to avoid costly mistakes.
What Law 25 Says About Automated Decisions
Law 25 imposes three specific obligations when a decision about a person is made exclusively based on automated processing of their personal information.
**The duty to inform.** At the time of information collection or before the decision is made, you must inform the person that their information will be used in a fully automated decision-making process. The information must be clear and accessible, not buried in a 20-page privacy policy.

**The right to submit observations.** The person must have the opportunity to submit their observations before or after the decision. If an algorithm determines that a client doesn't qualify for a financial product, the client must be able to challenge that decision and provide additional information.

**The right to human review.** The person can request that a human review the decision made by the automated system. This means you cannot entirely delegate decision-making to an algorithm without providing a human recourse mechanism.

**What "exclusively automated" means in practice.** The key word is "exclusively." If an algorithm produces a recommendation but an advisor reviews it and makes the final decision, it's not an exclusively automated decision. However, if your CRM automatically rejects certain clients based on a risk score without any human intervention in the process, that's an exclusively automated decision and all three obligations apply.

For financial firms, the most common grey area involves client profiling and segmentation tools. If the tool automatically categorizes your clients and that categorization directly influences the service they receive (for example, access or lack thereof to certain products, contact frequency, service level), a court could consider it a form of automated decision. Prudence dictates that you document human intervention in the decision-making process for every situation where an algorithm plays a role.
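To make that distinction concrete, here is a minimal sketch, in Python, of what enforcing and recording human intervention might look like. The class and field names are illustrative assumptions, not a format prescribed by Law 25; the idea is simply that a decision cannot take effect until a human signs off, and the sign-off leaves a trace.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative structure only: the class and field names are assumptions,
# not a prescribed format under Law 25.
@dataclass
class AssistedDecision:
    client_id: str
    algorithm_output: str           # e.g. "decline: risk score 82/100"
    key_factors: list[str]          # factors you could explain to the client
    reviewed_by: str | None = None  # the advisor or compliance officer
    final_decision: str | None = None
    reviewed_at: datetime | None = None

    def finalize(self, reviewer: str, decision: str) -> None:
        """A human signs off before the decision takes effect."""
        self.reviewed_by = reviewer
        self.final_decision = decision
        self.reviewed_at = datetime.now(timezone.utc)

    @property
    def is_exclusively_automated(self) -> bool:
        # No human review yet: Law 25's three obligations would apply.
        return self.reviewed_by is None
```

The value is not the code itself but the record it produces: for every decision where an algorithm played a role, you can show who reviewed its output and when.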
The AMF's 30 Practices for Responsible AI
In 2024, the AMF published a discussion paper titled "Best Practices for the Responsible Use of AI in the Financial Sector." This document articulates 30 practices grouped into major themes. While they are not legally binding in the same way as a regulation, they express the regulator's expectations and will likely serve as a reference during inspections or investigations.
Here are the most relevant principles for a financial services firm.
**Governance and accountability.** The AMF expects organizations using AI to establish a clear governance framework with defined roles and responsibilities. For a small firm, this can be as simple as documenting which AI tools are used, by whom, for what purposes, and who is responsible for oversight (see the sketch following these principles). The compliance officer (who may also be the Law 25 privacy officer) is the natural candidate for this responsibility.

**Consumer transparency.** The AMF recommends that organizations inform consumers when an AI system is used in an interaction or decision concerning them. This principle directly aligns with Law 25's information obligation regarding automated decisions. If an AI-powered chatbot answers your clients' questions, they should know.

**Explainability of results.** When an AI system has a significant impact on a consumer, they should be able to obtain a clear explanation of the process and key factors that led to the result. The AMF specifies that this explanation doesn't require sharing source code or intellectual property, but must use non-technical language proportionate to the severity of the consequences.

**Fairness and non-discrimination.** AI systems must not produce discriminatory results based on protected characteristics (ethnicity, gender, age, disability). For a financial firm, this particularly concerns scoring, segmentation, or product recommendation tools that could, even unintentionally, treat certain client groups differently.

**Human oversight.** The AMF insists on maintaining meaningful human oversight of AI systems, proportionate to their risk level. The higher the potential impact on the consumer, the more direct and frequent human oversight must be. In financial services, investment recommendations and credit decisions are areas where human oversight must remain predominant.

**Training data management.** The AMF highlights the risk that AI model training data may contain biases, errors, or personal information collected without adequate consent. The firm should verify that the AI tools it uses were not trained on non-compliantly obtained data.
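As a sketch of what that governance documentation can look like, here is a minimal AI tool inventory. The tool names, fields, and values are all illustrative assumptions; a spreadsheet with the same columns works just as well.

```python
# A minimal AI tool inventory, as the governance practice suggests documenting.
# All tool names, fields, and values below are illustrative assumptions.
AI_TOOL_INVENTORY = [
    {
        "tool": "CRM segmentation module",     # hypothetical example
        "used_by": "advisors",
        "purpose": "client segmentation",
        "personal_info": True,                 # processes personal information?
        "data_location": "Canada",
        "used_for_vendor_training": False,     # confirmed in the DPA?
        "pia_completed": True,
        "responsible": "compliance officer",
    },
    {
        "tool": "call transcription service",  # hypothetical example
        "used_by": "assistants",
        "purpose": "meeting notes",
        "personal_info": True,
        "data_location": "United States",
        "used_for_vendor_training": None,      # unknown: flag for review
        "pia_completed": False,
        "responsible": "compliance officer",
    },
]

# Flag tools that need attention before they keep processing client data.
for tool in AI_TOOL_INVENTORY:
    if tool["personal_info"] and (not tool["pia_completed"]
                                  or tool["used_for_vendor_training"] is not False):
        print(f"Review needed: {tool['tool']}")
```

An inventory like this also answers the first question an inspector is likely to ask: which AI tools touch client data, and who is accountable for each one.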
Specific Risks to Watch
Four concrete risks threaten financial firms using AI without a governance framework.
Shadow AI: The Most Immediate Risk
Shadow AI refers to the use of artificial intelligence tools by employees without the organization's knowledge or authorization. It's the most common and most underestimated risk in firms of all sizes.
An administrative assistant who copy-pastes a client file into ChatGPT to draft a recommendation letter. An advisor who uses an unapproved transcription tool to summarize client calls. An intern who sends financial data to an online writing assistant to prepare a report.
Each of these actions constitutes a transfer of personal information to a foreign vendor (most consumer AI tools are hosted in the United States), without a PIA, without a DPA, without client consent, and often without the firm's knowledge. Under Law 25, this is an unauthorized communication of personal information. If the data is used for model training by the vendor, it's also an unauthorized use.
The problem isn't that your employees are malicious. They're trying to be more efficient. But the absence of a clear policy and training transforms their good intentions into a compliance incident.
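One practical guardrail, sketched below, is to screen text for obvious identifiers before anyone pastes it into an external tool. The patterns are illustrative assumptions and catch only the most obvious cases (a SIN-like number, an email address, a phone number); they supplement a policy and training, they don't replace them.

```python
import re

# Minimal, illustrative screening for obvious identifiers before text
# is sent to any external AI tool. These patterns are assumptions and
# will miss plenty; they are a backstop, not a substitute for policy.
PATTERNS = {
    "possible SIN": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b\d{3}[- .]?\d{3}[- .]?\d{4}\b"),
}

def screen_text(text: str) -> list[str]:
    """Return the identifier types found, so the sender can stop and redact."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

hits = screen_text("Client: Jane Roe, SIN 123-456-789, jane@example.com")
if hits:
    print("Do not submit: found", ", ".join(hits))
```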
Data Leakage to AI Models
Some AI tools use data submitted by users to improve their models. This means your clients' personal information could be integrated into the training data of a model accessible to other users worldwide.
Free or consumer versions of most AI tools carry this risk. Enterprise versions generally offer contractual guarantees of non-use of data for training. Check the terms of each tool. Don't rely on default settings.
Bias in Automated Recommendations
If your CRM or investment platform uses an algorithm to recommend products to your clients, that algorithm may contain biases, intentional or not. A model that systematically recommends lower-performing products to older clients or clients from certain neighborhoods could constitute a discriminatory practice, violating the AMF's fairness obligations and the rights protected by Quebec's Charter.
You don't need to understand the algorithm's code. But you should be able to explain to a client why a product was recommended to them rather than another, and that explanation should be based on objective and relevant factors.
Lack of Traceability
If a client asks why a decision was made about them and your answer is "the algorithm decided, I don't know why," you're in violation of Law 25 (explainability obligation) and at odds with AMF practices (transparency and explainability). Every AI tool used in decisions affecting your clients should produce logs or explanations that you can consult and communicate to the client if necessary.
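As a minimal sketch of what such traceability can look like, the snippet below appends one record per algorithm-assisted recommendation, with the key factors behind it. The field names and file format are illustrative assumptions; what matters is that the record exists and can be read back to the client in plain language.

```python
import json
from datetime import datetime, timezone

# Illustrative only: append one line of JSON per algorithm-assisted
# recommendation, so "why was this decided?" always has an answer.
def log_recommendation(client_id: str, recommendation: str,
                       key_factors: list[str], reviewed_by: str,
                       path: str = "ai_decision_log.jsonl") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "recommendation": recommendation,
        "key_factors": key_factors,  # objective, client-explainable factors
        "reviewed_by": reviewed_by,  # the human who validated the output
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage with made-up names and values.
log_recommendation(
    client_id="C-1042",
    recommendation="balanced portfolio",
    key_factors=["10-year horizon", "moderate risk tolerance"],
    reviewed_by="advisor J. Tremblay",
)
```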
How to Use AI Compliantly
Compliance doesn't mean banning AI. It means governing it. Here are five concrete measures.
**Adopt an acceptable AI use policy.** A simple document (1 to 2 pages) specifying which AI tools are authorized, which are prohibited, what types of data must never be submitted to an AI tool, and what the procedure is for requesting approval of a new tool. Have each employee sign this policy.

**Complete a PIA for each AI tool.** Like any technology vendor, an AI tool that processes your clients' data requires a PIA. Pay particular attention to three elements: where data is stored, whether it's used for model training, and what contractual guarantees the vendor offers. [Our PIA guide details the process](/blog/pia-privacy-impact-assessment-law-25).

**Train your staff.** Training doesn't need to be a three-hour lecture. A 30-to-45-minute workshop covering essential points is sufficient: which tools are approved, what data must never be submitted to an AI tool (SIN, financial data, health data), how to report an incident if someone makes a mistake. Repeat training annually and with each new hire.

**Document human intervention in decisions.** For every process where an algorithm plays a role in a decision affecting a client, document how human intervention is integrated. Did the advisor review the algorithm's recommendation before presenting it to the client? Did the compliance officer validate the segmentation parameters? This documentation is your proof that decisions are not "exclusively automated" under Law 25.

**Choose AI tools with contractual guarantees.** Select AI vendors that offer DPAs, certify that data is not used for training, host data in Canada, and hold verifiable security certifications. The additional cost of an enterprise version over a free version is negligible compared to the risk of a compliance incident. [Our vendor evaluation article details the criteria to verify](/blog/quebec-law-25-vendor-compliance-evaluation).

The Framework Is Evolving Rapidly
The regulatory landscape for AI is moving quickly, in Canada and worldwide. Federal Bill C-27, which contained the Artificial Intelligence and Data Act (AIDA), died on the Order Paper in January 2025. But regulators' expectations are only increasing. The AMF continues to refine its practices and may formalize them in the coming years. The European Union adopted the AI Act in 2024, which will inevitably influence Canadian approaches.
For a financial services firm, the best strategy isn't to wait for regulation to stabilize. It's to put in place now a minimal AI governance framework (policy, PIA, training) that will position you favorably regardless of future developments. A firm that can demonstrate it took AI seriously, documented its practices, and trained its staff will be in a radically different position than one that did nothing.
---
*This article is part of a series on Law 25 and compliance for financial services firms. See also:*
- *[Quebec Law 25: What Every Financial Advisor Needs to Know in 2026](/blog/quebec-law-25-guide-financial-advisors)*
- *[Law 25 and AMF: The Dual Compliance Layer for Financial Advisors](/blog/quebec-law-25-amf-dual-compliance)*
- *[PIA: How to Conduct a Privacy Impact Assessment Under Law 25](/blog/pia-privacy-impact-assessment-law-25)*
- *[How to Evaluate Whether Your Technology Vendors Are Law 25 Compliant](/blog/quebec-law-25-vendor-compliance-evaluation)*
Frequently Asked Questions
Does an employee using ChatGPT constitute a privacy incident under Law 25?
If the employee submits clients' personal information (names, SIN, financial data) to ChatGPT or any other AI tool hosted outside Quebec, it's an unauthorized communication of personal information. This constitutes a privacy incident that must be entered in the register. If the information is sensitive, a serious harm risk assessment is needed to determine whether CAI and affected individuals must be notified.
Is my CRM's automatic client segmentation an "automated decision" under Law 25?
It depends on the impact. If segmentation serves only internal statistical purposes with no direct consequence for the client, it's probably not covered. However, if segmentation automatically determines service level, product access, or contact frequency without advisor intervention, a court could consider it an automated decision. Prudence dictates documenting human intervention in the process.
Are the AMF's 30 practices for responsible AI mandatory?
No, they are not legally binding in the same way as a regulation. However, they express the regulator's expectations and will serve as a reference during inspections. A firm that ignores them risks difficult questions from the AMF. A firm that applies them, even partially, demonstrates reasonable diligence.
Which AI tools are "safe" for a financial firm?
No tool is safe by default. Safety depends on configuration and contractual guarantees. Choose enterprise versions (not free consumer versions) that offer a DPA, certify non-use of data for training, host data in Canada, and hold verifiable certifications. Complete a PIA before deploying any AI tool and train your staff on what data must never be submitted.