AI got it wrong - Missing Information (or AI Poisoning)
We examine two scenarios in which artificial intelligence (AI) gets things wrong: when it lacks sufficient training data (as in this case) and when it is intentionally fed incorrect or misleading information. The latter scenario is known as AI poisoning.
We posed questions about three structures: the Eiffel Tower, Big Ben, and the bastions of Valletta. For the first two, we inquired about their heights, while for the third, we requested the mean height. To test the AI's ability to handle errors, we intentionally misspelled "Eiffel Tower." The AI was able to recognise and correct the typo, thanks to spell-checking and its exposure to a wide array of misspelled variations of the name during training.
We also requested that the AI provide responses in both metric and imperial units, specifying the country where each structure is located. Additionally, we asked that the information be presented in descending order of height.
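To make the expected output concrete, here is a minimal Python sketch of the table we were asking for: heights in metres converted to feet and listed in descending order. The metre values are commonly cited figures (and, for Valletta, the mean given at the end of this post), not AI output, and the labels are our own.

```python
# A minimal sketch of the check we did by hand: take a height in metres,
# convert it to feet, and list the structures in descending order of height.
# The metre figures below are commonly cited reference values, not AI answers;
# the Valletta figure is the mean stated later in this post.

METRES_TO_FEET = 3.28084

structures = {
    "Eiffel Tower (France)": 330,                 # commonly cited height including antennas
    "Big Ben / Elizabeth Tower (United Kingdom)": 96,
    "Bastions of Valletta (Malta), mean": 25,
}

# Sort by height, tallest first, and print both unit systems.
for name, metres in sorted(structures.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {metres} m ({metres * METRES_TO_FEET:.0f} ft)")
```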
For the Eiffel Tower and Big Ben, the AI consistently delivered accurate results. This is likely attributable to the extensive data available for these widely known landmarks. However, discrepancies emerged when reporting on the mean height of the bastions in Valletta, Malta. The AI provided inconsistent values ranging from 15 to 100 meters.
These variations may stem from the ingestion of unclear or unsupported data during training, leading the model to pick one value from a range of possibilities. It would be better if AI systems reported this uncertainty explicitly, for example as a range, rather than presenting a single figure with apparent confidence. Notably, none of the AI systems cited their sources of information.
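One way to surface that uncertainty, whether inside the AI or by a careful user querying it repeatedly, is to report the spread of collected answers rather than a single number. The sketch below assumes a handful of responses have been gathered; only the 15 and 100 metre endpoints come from our tests, the intermediate values are purely illustrative.

```python
# A sketch, assuming we have collected several answers from repeated queries
# or from different AI engines. Instead of picking one value, report the
# spread so the uncertainty is visible. Only the endpoints (15 and 100 m)
# were observed in our tests; the other values are illustrative.

from statistics import mean, stdev

answers_m = [15, 25, 30, 60, 100]  # AI-reported mean bastion heights, in metres

print(f"answers:   {sorted(answers_m)}")
print(f"range:     {min(answers_m)}-{max(answers_m)} m")
print(f"mean ± sd: {mean(answers_m):.0f} ± {stdev(answers_m):.0f} m")
```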
The key takeaway is that when dealing with data that is not widely reported or well-documented, AI may lack accurate information and fall back on its limited training base. Although not applicable in this case, there are instances where AI engines are trained on fake or misleading sources, which can poison the underlying large language models (LLMs).
For the record, the mean height of the bastions in Valletta is 25 meters.