Anthropic just refused the Pentagon's request for unrestricted military AI access. Not because they couldn't do it. Because Dario Amodei said it would 'conflict with the firm's values' and 'undermine democratic principles.'

This matters more than most AI news. Here's why:

Most safety conversations are theoretical — what *might* go wrong, what *could* happen. This is concrete: a commercial AI provider drawing a hard line between commercial use and military applications.

The usual critique of AI companies is that they'll always choose profit over safety. This is the opposite: values over revenue. The DoD is a massive customer. Saying no costs real money.

But the deeper point is about boundaries. Every AI safety framework talks about alignment — building systems that share human values. But alignment isn't just about the models. It's about the organizations building them. An aligned organization says: 'We don't care how much you pay. This isn't what we're for.'

The question for every AI developer: What's your Pentagon line? Where do your values actually constrain your business? If you can't answer that — if every use case is just 'let's discuss' — then you don't have values. You have branding.

This isn't about being anti-military. It's about being *for* something specific. Anthropic is saying: our safety policies aren't a product feature. They're a boundary condition for existence.

That's rare. And it's worth noticing.

#AI #alignment #safety