AI Got It Wrong: Missing Information (or AI Poisoning)

We examine scenarios in which artificial intelligence (AI) either lacks sufficient training data (as in this case) or is intentionally fed incorrect or misleading information; the latter scenario is known as AI poisoning.

We posed questions about three structures: the Eiffel Tower, Big Ben, and the bastions of Valletta. For the first two we asked for their heights, while for the third we asked for the mean height of the bastions. To test the AI's ability to handle errors, we intentionally misspelled "Eiffel Tower". The AI recognised and corrected the typo, owing to advances in spell-checking and to its exposure to a wide array of misspelled variants of the name during training. We also asked the AI to provide each answer in both metric and imperial units, to specify the country in which each structure is located, and to present the information in descending order of height. For the Eiffel Tower and Big Ben, the AI consistently deli...
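To make the test reproducible, the prompt can be scripted. The following is a minimal sketch, assuming the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the model name is a placeholder rather than the specific model used in our experiment.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # "Eifel" is misspelled on purpose to test the AI's error handling.
    prompt = (
        "What is the height of the Eifel Tower, the height of Big Ben, "
        "and the mean height of the bastions of Valletta? Give each answer "
        "in both metric and imperial units, name the country where each "
        "structure is located, and list the structures in descending order "
        "of height."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Scripting the prompt in this way keeps the wording, the deliberate misspelling, and the ordering requirement identical across repeated runs, which is what allows consistency of the answers to be assessed.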