Emerging Applications in Generative AI
In the preceding chapters, we examined a wide range of applications built on large language models (LLMs). We explored how these models are constructed from transformer blocks, how they generate realistic text over large context windows, and why understanding and optimizing prompts is essential for using them effectively. While LLMs can be adapted to many specialized tasks, whether through re-training or through data-augmentation techniques such as retrieval-augmented generation (RAG), what makes them remarkable is that a single common architecture can solve such a diversity of problems.
However, this is a large and ever-expanding field: a Google Scholar search for “Large Language Models” returns 53,600 publications, of which 26,700 appeared since 2022! This is astonishing growth for a field that began in earnest in 2017 with the introduction of the transformer architecture, and that has expanded exponentially since the release of OpenAI’s ChatGPT in late 2022.