Limitations of prompt engineering
Prompt engineering is a powerful field with tools and best practices for optimizing the use of LLMs and similar models. However, there are several challenges:
- Evaluation: Effective prompting combines best practices with creativity, which makes prompt quality hard to evaluate systematically. Moreover, prompts are often brittle; a prompt that works well on one LLM may perform poorly on another, so it pays to re-check prompts whenever the underlying model changes (a minimal cross-model check is sketched after this list).
- Latency and costs: While LLMs are continually improving in latency and cost, they remain significantly slower and more expensive than typical software systems. The iterative nature of prompt development also adds to these costs.
- Prompt complexity and context window limits: Although context windows keep expanding, complex prompts demand more tokens, forcing a trade-off between prompt instructions and contextual information. Token-based pricing for inputs and outputs compounds this challenge (a rough cost estimate is also sketched below).
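The brittleness problem can at least be made visible with a small regression-style check that runs the same prompt template against more than one model and scores the answers on a fixed test set. The sketch below is illustrative only: the model names, test cases, and the `complete()` function are hypothetical stand-ins, assumed to be replaced by real SDK calls from your provider.

```python
# Minimal cross-model regression check for a single prompt template.
# `complete()`, the model names, and the test cases are placeholders
# (assumptions); swap in real API calls and a real evaluation set.

TEST_CASES = [
    {"question": "2 + 2", "expected": "4"},
    {"question": "capital of France", "expected": "Paris"},
]

PROMPT_TEMPLATE = "Answer with a single word or number only.\nQuestion: {question}"


def complete(model: str, prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned answers here."""
    canned = {"2 + 2": "4", "capital of France": "Paris"}
    question = prompt.split("Question: ", 1)[1]
    return canned.get(question, "I'm not sure, but...")


def exact_match_score(model: str) -> float:
    """Fraction of test cases where the model's answer matches exactly."""
    hits = 0
    for case in TEST_CASES:
        answer = complete(model, PROMPT_TEMPLATE.format(question=case["question"]))
        hits += answer.strip() == case["expected"]
    return hits / len(TEST_CASES)


for model in ("model-a", "model-b"):
    print(f"{model}: {exact_match_score(model):.0%} exact match on {len(TEST_CASES)} cases")
```

Even a tiny harness like this turns "the prompt feels worse on the new model" into a number that can be tracked across prompt revisions and model versions.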
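To make the token and cost trade-off concrete, the back-of-the-envelope sketch below splits a fixed context window between instructions, retrieved context, and output, and prices the call per thousand tokens. The context window size, the per-token prices, and the four-characters-per-token heuristic are assumptions for illustration; real numbers depend on the model and provider, and accurate counts require the provider's tokenizer.

```python
# Rough token-budget and cost arithmetic for one LLM call.
# All constants below are illustrative assumptions, not real prices.

CONTEXT_WINDOW = 8_192        # tokens shared by instructions, context, and output
PRICE_PER_1K_INPUT = 0.0005   # USD per 1K input tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.0015  # USD per 1K output tokens (hypothetical)


def rough_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)


instructions = "You are a support assistant. Follow the policy rules below. " * 20
context = "Customer history: previous tickets, plan details, notes. " * 200
max_output_tokens = 512

input_tokens = rough_tokens(instructions) + rough_tokens(context)
budget_left = CONTEXT_WINDOW - input_tokens - max_output_tokens

cost_per_call = (
    (input_tokens / 1000) * PRICE_PER_1K_INPUT
    + (max_output_tokens / 1000) * PRICE_PER_1K_OUTPUT
)

print(f"input tokens: {input_tokens}")
print(f"tokens left for additional context: {budget_left}")
print(f"estimated cost per call: ${cost_per_call:.4f}")
```

At the scale of thousands of calls per day, even fractions of a cent per call add up, which is why every extra instruction or piece of context in the prompt carries a real cost as well as a token-budget cost.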