My most successful LLM sessions are the ones where I have the least uncertainty about the responses to my prompts: I pretty much know what the AI will output and am expecting the correct result. Where I struggle is when I lean too heavily on the LLM to infer what I want, and it's in that case that gambling becomes a better description of what I'm doing. Thanks for the insights.