Or... I've been building multi-agent systems for years, built neural networks for years before that, professionally, at substantial scale, in production. I know exactly how these machines work, what they can and can't "learn" in a "school and what the threat model is. They can't meaningfully "learn" in a school. It's not training the networks, it's just rewriting their persistent memories, aka, their prompts. So you would just be throwing open the doors to massive prompt injection for the sake of pretend. I'm sorry for raining on your game, but this is both silly and dangerous. And it just propagates a false sense of what AI really is and does. That or I'm just a Luddite naysayer in my mother's basement. Whatever keeps the party going, I guess