The talk offers a brief overview of recent advances in machine learning that have enabled the development of large language models (LLMs) such as ChatGPT. We explore the high-level architecture of these systems, survey their applications, and discuss some of their current limitations. At present, engineering practice is far ahead of mathematical theory: we do not have a good understanding of how these systems operate or how they acquire their skills. That is both a cause for concern and an exciting opportunity for research. NYTK researchers are experimenting with PULI, a Hungarian LLM. Some figures and (Hungarian) examples will also be presented.