In-context learning in large language models (LLMs) relies on the model's ability to infer the task at hand from the context provided by the input examples. Research has shown that prompt structure, model size, and the order of examples significantly affect performance, while the example labels do not always need to be accurate for in-context learning to work. This suggests that the model can still learn from noisy or imperfect examples, provided the context is clear and the model has enough capacity to process the information.
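As a hedged illustration (not from the source), the sketch below shows how an in-context prompt might be assembled from demonstration pairs, with an option to shuffle the labels so the effect of label accuracy can be compared against the effect of prompt structure. The function name and prompt format are assumptions for illustration only.

```python
import random

def build_icl_prompt(examples, query, corrupt_labels=False, seed=0):
    """Assemble a simple in-context classification prompt.

    examples: list of (text, label) demonstration pairs.
    corrupt_labels: if True, labels are shuffled to simulate noisy
    demonstrations while keeping the prompt structure intact.
    """
    rng = random.Random(seed)
    labels = [lab for _, lab in examples]
    if corrupt_labels:
        rng.shuffle(labels)  # labels no longer match their inputs
    lines = []
    for (text, _), lab in zip(examples, labels):
        lines.append(f"Input: {text}\nLabel: {lab}")
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

demos = [("great movie", "positive"), ("terrible plot", "negative"),
         ("loved the acting", "positive"), ("waste of time", "negative")]
print(build_icl_prompt(demos, "the soundtrack was wonderful", corrupt_labels=True))
```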
Factors influencing decision boundaries in LLMs include model size, pretraining data, model architecture, prompt format, the number of in-context examples, and quantization level. These factors affect how smooth and generalizable the resulting decision boundaries are.
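A minimal sketch of how one might probe such a decision boundary follows, assuming a hypothetical `classify_point` function that stands in for an actual LLM query: the in-context examples stay fixed, the model is asked to label every point on a 2D grid, and the pattern of predictions traces the boundary and its smoothness. Here the stand-in uses a nearest-neighbor rule purely so the code runs without a model.

```python
import numpy as np

def classify_point(context_x, context_y, query):
    """Hypothetical stand-in for an LLM call: in a real probe, the
    context points and the query would be serialized into a prompt
    and the predicted label parsed from the model's completion.
    A 1-nearest-neighbor rule keeps this sketch self-contained."""
    dists = np.linalg.norm(context_x - query, axis=1)
    return context_y[np.argmin(dists)]

rng = np.random.default_rng(0)
# 16 in-context examples: two Gaussian clusters, one per class.
context_x = rng.normal(size=(16, 2)) + np.array([[2.0, 0.0]] * 8 + [[-2.0, 0.0]] * 8)
context_y = np.array([1] * 8 + [0] * 8)

# Query the "model" on a grid and record its predictions.
xs, ys = np.meshgrid(np.linspace(-4, 4, 50), np.linspace(-4, 4, 50))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
preds = np.array([classify_point(context_x, context_y, q) for q in grid])

# Rough smoothness proxy: fraction of horizontally adjacent grid cells
# whose predicted labels disagree (more flips = rougher boundary).
pred_img = preds.reshape(xs.shape)
flips = np.mean(pred_img[:, 1:] != pred_img[:, :-1])
print(f"adjacent-cell disagreement rate: {flips:.3f}")
```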
In-context learning in transformers is studied through three main approaches: theoretical analysis that links in-context learning to gradient descent, empirical analysis of factors such as the label space and the distribution of the input text, and learning to learn in context via meta-training frameworks such as MetaICL. These approaches make it possible to analyze decision boundaries on binary classification tasks and to improve their smoothness.
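To make the gradient-descent view concrete, the following sketch (an illustrative assumption, not the source's experimental setup) fits a small linear classifier on the in-context examples with a few explicit gradient steps and uses it to label a query; the theoretical work referenced above argues that a transformer's forward pass over the same context can implement an update of this kind implicitly.

```python
import numpy as np

def gd_in_context(context_x, context_y, query, steps=100, lr=0.1):
    """Fit a logistic-regression-style linear boundary on the context
    examples with plain gradient descent, then classify the query.
    This mirrors the theoretical framing of in-context learning as
    implicit gradient descent on the demonstrations."""
    w = np.zeros(context_x.shape[1])
    b = 0.0
    for _ in range(steps):
        logits = context_x @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = probs - context_y                 # dL/dlogits for log loss
        w -= lr * context_x.T @ grad / len(context_y)
        b -= lr * grad.mean()
    return int(query @ w + b > 0)

rng = np.random.default_rng(1)
context_x = np.vstack([rng.normal(loc=(1.5, 0.0), size=(8, 2)),
                       rng.normal(loc=(-1.5, 0.0), size=(8, 2))])
context_y = np.array([1] * 8 + [0] * 8)
print(gd_in_context(context_x, context_y, np.array([1.0, 0.5])))  # expected: 1
```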