Tokens are a big reason today's generative AI falls short
Are there emerging alternatives to traditional tokenization?
Yes. One example is byte-level state space models such as MambaByte, which do away with tokenization entirely and work directly on the raw bytes that represent text and other data. Because they skip tokenization, these models can ingest far more data than transformers without a performance penalty, and they offer competitive performance on language tasks while better handling noise such as swapped characters, unusual spacing, and changed capitalization. However, byte-level models are still at an early research stage.
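To make the distinction concrete, here is a minimal Python sketch (not MambaByte itself, just an assumed toy comparison) of what "raw bytes" input looks like next to a naive word-level tokenization of the same sentence.

```python
# Minimal sketch: raw-byte input versus a toy word-level tokenization.
text = "Byte-level models read bytes, not tokens."

byte_ids = list(text.encode("utf-8"))   # one integer (0-255) per byte
word_tokens = text.split()              # naive word-level tokens

print(len(byte_ids), byte_ids[:10])     # longer sequence, tiny fixed "vocabulary"
print(len(word_tokens), word_tokens)    # shorter sequence, open-ended vocabulary
```

The trade-off is visible even in this toy case: the byte sequence is much longer, but every possible input maps onto the same 256 symbols, so there is no vocabulary to design and no tokenizer to break on unusual text.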
What are the common types of tokens in AI models?
In AI models, the common types of tokens are word tokens, subword tokens, and character tokens. Word tokens represent whole words; subword tokens represent parts of words, which is useful when words can be broken down into smaller, meaningful units; and character tokens represent individual characters.
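The following Python sketch shows the same word at each of the three granularities. The subword split shown is an assumed, hand-picked example for illustration, not the output of any particular tokenizer.

```python
# Toy illustration: one word at three tokenization granularities.
word = "unbelievable"

word_tokens = [word]                       # word-level: a single token
subword_tokens = ["un", "believ", "able"]  # subword-level: assumed BPE-style pieces
char_tokens = list(word)                   # character-level: one token per character

print(word_tokens)
print(subword_tokens)
print(char_tokens)
```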
What challenges do non-English languages face in tokenization?
Non-English languages face tokenization challenges stemming from differences in sentence structure, character encoding, and word separation. Some languages, for example, don't use spaces to separate words, which complicates tokenization. Logographic languages like Chinese require each character to be treated as a distinct token, leading to high token counts, and agglutinative languages, whose words are built from small meaningful units called morphemes, also produce more tokens. The result can be longer processing times and weaker model performance for non-English languages.
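A hedged way to see this effect is to count tokens for short phrases in different languages with an off-the-shelf BPE vocabulary. The sketch below assumes the `tiktoken` package is installed; the exact counts depend on the encoding chosen and will differ across models, but non-English text generally comes out with more tokens per character.

```python
# Comparing token counts across languages with a real BPE vocabulary.
# Assumes `tiktoken` is installed; counts vary by encoding and model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English": "The weather is nice today.",
    "Chinese": "今天天气很好。",   # logographic: no spaces between words
    "Turkish": "evlerimizden",     # agglutinative: "from our houses" as one word
}

for language, text in samples.items():
    ids = enc.encode(text)
    print(f"{language}: {len(text)} characters -> {len(ids)} tokens")
```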