There is no security without endpoint security. Any endpoint that runs an agent is an insecure endpoint. Even if you access LLMs in a browser or similar thin client, you have to give it sensitive data to stay competitive. Prompt injection is a vuln you can drive a dumptruck through. AI agents are an absolute security catastrophe and nobody wants to hear it. Best you can do is avoid agents where you can get by on a thin client. Run local models. Segregate your data. Never run agents where they can get to the most sensitive things (high value keys). I wish I had easy answers without painful tradeoffs, but there just isn't a silver bullet and there is no sign of one coming. The fact is that when data leaves your physical control, it's out of your control. Counterparies are going to counterparty and LLMs are going to LLM. Disclosures are a bloodbath right now and you can expect that to continue. nostr:nevent1qqsf72pjgaz0xcu97886jnu80rv95ggkrw5ds46hwz6dta0c0thga7qzypsf7xrv5q3avkxqlcqe2uz89av4vhytu8wpvwc4g8avnkg25n527qcyqqqqqqg2zw8p4