There is a second best in the land of inference. Obviously, owning enough GPU compute to run the models you want is the best option out there. But if you want an alternative, confidential inference is the way to go. Contrary to "private" AI providers that merely promise there's no tracking, or just separate your billing method from the prompts you send, confidential inference providers let you verify the software they run, so you can provably know that your data remains confidential and no one is leaking it out in debug logs or, even worse, storing it on purpose.

That's why I built this website: there are more and more providers out there and prices vary A LOT, so having all of them in one place is very helpful when you decide to use a confidential inference provider for your next project: https://confidentialinference.net/
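The "provably know" part comes down to remote attestation: the hardware signs a measurement (a hash) of the software stack that is actually running, and you compare it against the measurement of a build you trust. Here is a minimal sketch of just the comparison step, with hypothetical names and values; a real deployment also verifies the hardware vendor's signature chain over the attestation quote.

```python
import hashlib

# Hypothetical: stand-in for the measurement of an audited, reproducible
# build of the inference stack. In practice this value is published by the
# provider (or computed yourself from the open-source build).
EXPECTED_MEASUREMENT = hashlib.sha256(b"inference-stack-v1.2.3").hexdigest()

def verify_measurement(reported: str) -> bool:
    """Compare the measurement reported inside an attestation quote
    against the expected measurement of the trusted build."""
    return reported == EXPECTED_MEASUREMENT

# Matching build: the enclave reports the measurement you expect.
honest = hashlib.sha256(b"inference-stack-v1.2.3").hexdigest()
# Tampered build (e.g. with debug logging added) hashes differently.
tampered = hashlib.sha256(b"inference-stack-with-logging").hexdigest()

print(verify_measurement(honest))    # the audited stack passes
print(verify_measurement(tampered))  # any modification fails
```

The point of the sketch: any change to the software, including "just" extra logging, changes the hash, so the check fails before you ever send a prompt.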