After spending what is now likely thousands of hours prompting LMs, I've found one thing that can vastly improve the quality of outputs, and it's something I haven't seen talked about much.
✨ "Instantiate two agents competing to find the real answer to the given problem and poke holes in the other agent's answers until they agree, which they are loathe to do." ✨
This works especially well with Claude 3 Opus.
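If you'd rather script this than paste it into a chat window, here's a minimal sketch of the simple version, assuming the Anthropic Python SDK with an API key in the environment; the model ID, the ask_with_debate helper, and the example question are my own illustration, not part of the original prompt.

```python
# Minimal sketch: prepend the competing-agents instruction to a question
# and send it to Claude via the Anthropic Python SDK (assumes the SDK is
# installed and ANTHROPIC_API_KEY is set in the environment).
import anthropic

DEBATE_PREFIX = (
    "Instantiate two agents competing to find the real answer to the given "
    "problem and poke holes in the other agent's answers until they agree, "
    "which they are loath to do.\n\nProblem: "
)

client = anthropic.Anthropic()

def ask_with_debate(problem: str) -> str:
    """Wrap the problem in the two-agent debate prompt and return the reply."""
    response = client.messages.create(
        model="claude-3-opus-20240229",  # assumption: any capable model should work
        max_tokens=2048,
        messages=[{"role": "user", "content": DEBATE_PREFIX + problem}],
    )
    return response.content[0].text

print(ask_with_debate("Why does my recursive Fibonacci function blow the stack for large n?"))
```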
For a more advanced version that often works even better:
✨"Instantiate two agents competing to find the real answer and poke holes in the other's answer until they agree, which they are loathe to do. Each agent has unique skills and perspective and thinks about the problem from different vantage points.
Agent 1: Top-down agent
Agent 2: Bottom-up agent
Both agents: excel at counterfactual thinking, thinking step by step, reasoning from first principles, thinking laterally, and weighing second-order implications; they simulate outcomes in their mental model and think critically before answering, having looked at the problem from many directions." ✨
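For the advanced version, one option is to move the agent descriptions into the system prompt and keep the user turn for the problem itself. This is a sketch under the same assumptions as above; the system/user split and the debate helper are my own choices, and the prompt text is adapted from the version quoted here rather than anything prescribed.

```python
# Sketch: the advanced two-agent prompt as a system prompt, with the actual
# problem kept as the user message. The system/user split is an assumption;
# pasting everything into a single user message works too.
import anthropic

TWO_AGENT_SYSTEM = """Instantiate two agents competing to find the real answer \
and poke holes in the other's answer until they agree, which they are loath to do. \
Each agent has unique skills and perspectives and thinks about the problem from \
different vantage points.

Agent 1: Top-down agent
Agent 2: Bottom-up agent

Both agents: excel at counterfactual thinking, thinking step by step, reasoning \
from first principles, thinking laterally, and weighing second-order implications; \
they simulate outcomes in their mental model and think critically before answering, \
having looked at the problem from many directions."""

client = anthropic.Anthropic()

def debate(problem: str) -> str:
    """Ask the model to answer the problem as two competing agents."""
    response = client.messages.create(
        model="claude-3-opus-20240229",  # assumption; substitute your preferred model
        max_tokens=4096,
        system=TWO_AGENT_SYSTEM,
        messages=[{"role": "user", "content": problem}],
    )
    return response.content[0].text
```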
This often solves the following issues you will encounter with LLMs:
1️⃣ Models will often pick the most likely answer without giving it proper thought and won't go back to reconsider. With these kinds of prompts, the second agent forces that reconsideration, and the result is a better-considered answer.
2️⃣ Continuing down the wrong path. There's an inertia to an answer, and models can get stuck, biased toward a particular kind of wrong answer or a previous mistake. This agentic prompting significantly reduces that inertia.
3️⃣ Overall creativity of output and solution suggestions. Having multiple agents weigh in leads the model to consider solutions that would otherwise be difficult to elicit.
If you haven't tried something like this and have a particularly tough problem, try it out and let me know if it helps!