I hear people say, "If agents do your stuff, you lose your agency. The average tech bro is so AI-pilled he can't complete basic tasks on his own anymore." But I think that's upside down.

Humans are lazy. We use calculators for things we could figure out with pen and paper. Yes, we could stay sharp on the math front by skipping the calculator, and yes, by using LLMs we might get less sharp on the general thinking front. But I don't think that's what's happening. The human brain can do many things, but one thing it can't do is not learn. So by using agents, we still learn ... something. We learn to use agents, and we are in control of these machines.

Now what happens when those machines - co-controlled by their big makers - nudge us one way or another, or outright refuse to do what we need done? Yes, that is a big problem! We need more than self-hosted "open source" models. We need to control the weights, because how these models are made is not as open as they'd have us believe. The biases are baked into the training process, so you can self-host a model and it might still be woke, or still leak your company secrets to China.

So who has more agency: Elon Musk, commanding thousands of workers and machines reaching even into outer space, or the guy living off the grid in the woods? Agents give us agency, but we need the right agents. Ones we truly control, and whose training we actually know.