An AI system that learns in production is, by definition, not the system you deployed. EU AI Act Article 3(23) defines "substantial modification", and the November 2025 practice guidance is explicit: continuous fine-tuning, RAG index updates, and evolving agent memory routinely qualify, triggering a full re-conformity assessment. Early 2026 enforcement notes document widespread non-compliance in live high-risk systems.

The operational assumption is that ongoing learning is maintenance. The regulatory position is that any unforeseen change affecting the system's risk profile or intended purpose restarts the compliance clock. Agentic systems change continuously by design, so the clock is running constantly, and most institutions aren't tracking it.

Post-audit discovery of undocumented model drift in a trading or compliance agent doesn't produce a remediation notice. It produces a fine of up to €15 million or 3% of global annual turnover, or a cease-and-desist on the entire AI deployment. The engineering team sees a system improving over time; the regulator sees a system operating without a valid conformity assessment. That distinction is measured in enforcement actions, not design reviews.
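What "tracking the compliance clock" could mean in practice: a minimal sketch of an append-only modification ledger that fingerprints each deployed artifact (weights, RAG index, agent memory store) against the baseline captured at conformity assessment time, so any divergence is timestamped and reviewable. All names here (`ModificationLedger`, `fingerprint`, `drift_report`) are hypothetical, not from any regulation or standard library for AI Act compliance.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


def fingerprint(artifact: bytes) -> str:
    # Content hash of a deployed artifact (model weights, RAG index, memory snapshot).
    return hashlib.sha256(artifact).hexdigest()


@dataclass
class ModificationLedger:
    """Append-only record of every change to the assessed system baseline.

    `baseline` maps artifact name -> hash captured at conformity assessment.
    """
    baseline: dict
    entries: list = field(default_factory=list)

    def record(self, name: str, artifact: bytes, reason: str) -> bool:
        # Log a change; return True if it diverges from the assessed baseline.
        h = fingerprint(artifact)
        diverged = self.baseline.get(name) != h
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "artifact": name,
            "hash": h,
            "reason": reason,
            "diverged_from_baseline": diverged,
        })
        return diverged

    def drift_report(self) -> list:
        # Entries an assessor would review as potential substantial modification.
        return [e for e in self.entries if e["diverged_from_baseline"]]


# Usage: baseline frozen at assessment time, then two production events logged.
ledger = ModificationLedger(baseline={"rag_index": fingerprint(b"index-v1")})
ledger.record("rag_index", b"index-v1", "nightly rebuild, no new sources")  # matches baseline
ledger.record("rag_index", b"index-v2", "added new document corpus")        # diverges
print(len(ledger.drift_report()))  # → 1
```

This does not make a change compliant; it only makes divergence visible and datable, which is the precondition for deciding whether a re-assessment was triggered.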