AI Prompt Engineering - Use Code not Words

AI language models don’t actually reason in a human sense. For those interested in how these systems are trained, I recommend checking out Demystifying LLMs with Andrej Karpathy.

The Token Challenge

When processing text, language models work with “tokens” rather than complete words. The relationship between words and tokens isn’t always one-to-one. For instance, the term “LLM” gets split into two separate tokens in the paragraph below. Similarly, longer or unusual strings can be divided into numerous tokens: the word “SuperCaliFragilisticExpialiDociouc” is broken down by GPT-4o into 11 distinct tokens.

It’s also important to understand that AI responses are generated probabilistically, one token at a time, with deliberate randomness incorporated. This explains why asking the same question multiple times often yields different answers.

These fundamental characteristics create significant constraints when AI attempts text analysis tasks. For example, until recently, many langu...
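To make the token-splitting behavior above concrete, here is a minimal sketch of subword tokenization using greedy longest-match against a toy vocabulary. This is not GPT-4o's actual tokenizer (real systems use byte-pair encoding with vocabularies of ~100k entries); the vocabulary here is entirely made up for illustration, but it shows how a short string like "LLM" can still end up as more than one token:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword tokenizer (toy illustration only)."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible piece first, shrinking until a match.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No vocabulary entry matched: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

# Hypothetical vocabulary: "LM" exists as a piece, "LLM" does not,
# so "LLM" splits into two tokens, mirroring the behavior described above.
toy_vocab = {"LM", "Super", "Cali", "istic"}
print(tokenize("LLM", toy_vocab))  # ['L', 'LM']
```

Because matching is greedy and vocabulary-dependent, the same string can tokenize very differently across models, which is one reason character-level questions trip language models up.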
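The "deliberate randomness" mentioned above is typically temperature sampling: the model's scores (logits) for each candidate token are scaled, converted to probabilities with a softmax, and one token is drawn at random. A rough sketch with made-up logits (the token names and numbers are illustrative, not from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Draw one token from a softmax over temperature-scaled logits."""
    # Lower temperature sharpens the distribution toward the top token;
    # higher temperature flattens it, increasing variety between runs.
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

# Hypothetical next-token scores after the prompt "The cat sat on the"
fake_logits = {"mat": 3.1, "sofa": 2.4, "roof": 1.7}
for _ in range(3):
    print(sample_next_token(fake_logits, temperature=0.8))
```

Running the loop repeatedly usually prints "mat" but sometimes "sofa" or "roof", which is exactly why identical prompts can yield different answers.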