Researchers from Google DeepMind and the University of Southern California have proposed SELF-DISCOVER, a new prompting framework to enhance the reasoning capabilities of large language models (LLMs). The approach goes beyond existing prompting techniques and has been found to improve the performance of models such as OpenAI's GPT-4 and Google's PaLM 2.

The framework has LLMs self-discover task-intrinsic reasoning structures to solve problems: the model draws on a set of atomic reasoning modules (such as critical thinking and step-by-step decomposition) and composes them into an explicit reasoning structure that the LLM follows during decoding. Because this requires significantly less inference compute than inference-heavy methods, it is attractive for enterprises.

The researchers tested the approach with various models and found notable performance improvements, with gains of up to 32% compared to other prompting techniques. SELF-DISCOVER achieved high accuracy across diverse reasoning tasks and showed potential for pushing the boundaries of problem-solving and advancing general intelligence in LLMs.
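To make the idea concrete, the pipeline can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the module list is abbreviated, the prompt wording is invented, and `call_llm` is a stand-in for whatever LLM API a real implementation would call (here it returns canned text so the sketch runs offline).

```python
# A hedged sketch of a SELF-DISCOVER-style pipeline: the model first
# composes a task-specific reasoning structure from atomic reasoning
# modules, then follows that structure to solve the task.

# Abbreviated, illustrative subset of atomic reasoning modules.
ATOMIC_MODULES = [
    "How can I simplify the problem so that it is easier to solve?",
    "Critical thinking: analyze the problem from different perspectives.",
    "Let's think step by step.",
]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (an assumption of this sketch).
    Returns canned strings keyed on the prompt so the example is runnable."""
    if prompt.startswith("Select"):
        return "Let's think step by step."
    if prompt.startswith("Adapt"):
        return "Break the problem into ordered sub-steps for this task."
    if prompt.startswith("Implement"):
        return '{"step_1": "identify the key quantities", "step_2": "compute the answer"}'
    return "final answer"

def self_discover(task: str) -> str:
    # Stage 1 (select): pick the atomic modules relevant to this task.
    selected = call_llm(
        f"Select reasoning modules useful for: {task}\n" + "\n".join(ATOMIC_MODULES)
    )
    # Stage 2 (adapt): rephrase the selected modules to be task-specific.
    adapted = call_llm(f"Adapt these modules to the task: {task}\n{selected}")
    # Stage 3 (implement): turn the adapted modules into an explicit,
    # structured reasoning plan (e.g. JSON) to follow during decoding.
    structure = call_llm(
        f"Implement the adapted modules as a JSON reasoning plan for: {task}\n{adapted}"
    )
    # Finally, solve the task by filling in the discovered structure.
    return call_llm(f"Follow this reasoning structure to solve: {task}\n{structure}")
```

Note that the structure is discovered once per task and then reused for every instance of that task, which is one reason the approach needs far fewer inference calls than sampling-heavy methods.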
