Reliability Issues in AI Search

Posted by: kaycie
As recently as March of this year, we first noticed in our client analytics that people were using artificial intelligence tools in their searches, and that use is growing quickly. The most popular tool by far has been ChatGPT.
AI tools like ChatGPT gather information from multiple sources: websites, directories, reviews, and business profiles. If a business’s phone number or address varies across those sources, the model may repeat outdated or incorrect data, leading to user confusion or mistrust. But, as we discovered, even when all the information is correct, there can still be a problem. Let me illustrate.
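To make the consistency problem concrete, here is a minimal sketch of what a NAP (name/address/phone) check across listing sources might look like. The listing records, helper functions, and sample values below are all hypothetical, not data from any real client.

```python
import re

# Hypothetical listing records pulled from different sources (illustrative only).
listings = {
    "website":        {"phone": "(225) 555-0147", "address": "123 Main St, Baton Rouge, LA 70801"},
    "google_profile": {"phone": "225-555-0147",   "address": "123 Main Street, Baton Rouge, LA 70801"},
    "old_directory":  {"phone": "(225) 555-0199", "address": "456 Oak Ave, Baton Rouge, LA 70802"},
}

def normalize_phone(phone: str) -> str:
    """Strip formatting so '(225) 555-0147' and '225-555-0147' compare equal."""
    return re.sub(r"\D", "", phone)

def normalize_address(address: str) -> str:
    """Lowercase and expand a few common abbreviations before comparing."""
    addr = address.lower()
    for abbrev, full in {"st": "street", "ave": "avenue", "hwy": "highway"}.items():
        addr = re.sub(rf"\b{abbrev}\b\.?", full, addr)
    return re.sub(r"\s+", " ", addr).strip()

def report_mismatches(listings: dict) -> None:
    """Print any field whose normalized value differs between sources."""
    phones = {src: normalize_phone(rec["phone"]) for src, rec in listings.items()}
    addresses = {src: normalize_address(rec["address"]) for src, rec in listings.items()}
    if len(set(phones.values())) > 1:
        print("Phone mismatch:", phones)
    if len(set(addresses.values())) > 1:
        print("Address mismatch:", addresses)

report_mismatches(listings)
```

Normalizing the fields first matters: “(225) 555-0147” and “225-555-0147” are the same number, and a naive string comparison would flag a false mismatch.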
I asked ChatGPT to give me a list of the top cosmetic dentists in Baton Rouge. A client we manage was listed first in a list of three. ChatGPT asked if I would like help scheduling a consultation with that office, and I said, “yes.” To my surprise, I was given an incorrect address and phone number for that office. I quickly searched various search engines for the client’s name/address/phone and couldn’t find the source of the incorrect data. So I asked ChatGPT to recheck the contact information, and the reply I got was something to the effect of, “upon checking, we see the accurate information for that business is…,” followed by the correct information. I persisted in asking where the bogus information came from, and after several attempts, I received this reply from ChatGPT:
“Where the Incorrect Info Came From: The prior details (8894 Airline Hwy & (225) 346‑2422) were my (ChatGPT) mistake—likely conflated with another local provider during preparation. That information doesn’t match any reputable listing for Dr. XYZ and appears to have been an internal error. I appreciate that you caught it!”
During another test for a different client, I also received an incorrect email address. With persistence, I received the same “oops, we got it wrong” type of reply.
I went on to check other clients, 38 in total, and 4 of them had some sort of bogus info (a sketch of how this kind of audit could be automated follows the list):
- 1 had the wrong address and phone (Brooksher, noted above)
- 2 had incorrect contact email addresses
- 1 returned the wrong dentist: I asked for Auburn Dental and got info for Auburn Dental Center, which is a different office
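Checking each client by hand doesn’t scale, so here is a minimal sketch of how the audit could be automated, assuming the OpenAI Python SDK. The client records, model name, and contact details below are placeholders, and this is not the exact process we used.

```python
from openai import OpenAI

# Hypothetical ground-truth records we maintain for each client (illustrative only).
client_records = [
    {"name": "Example Dental, Baton Rouge", "phone": "2255550147", "email": "info@exampledental.com"},
]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def digits_only(s: str) -> str:
    """Keep only digits so phone formatting differences don't matter."""
    return "".join(ch for ch in s if ch.isdigit())

for record in client_records:
    # Ask the model for the business's contact details in plain text.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever your account offers
        messages=[{
            "role": "user",
            "content": f"What are the phone number and email address for {record['name']}?",
        }],
    )
    answer = response.choices[0].message.content or ""
    # Flag the client for manual review if the known-correct details are absent.
    if digits_only(record["phone"]) not in digits_only(answer):
        print(f"CHECK PHONE for {record['name']}: {answer!r}")
    if record["email"].lower() not in answer.lower():
        print(f"CHECK EMAIL for {record['name']}: {answer!r}")
```

Because model answers vary from run to run, a script like this can only flag clients for manual review; it doesn’t replace verifying the listing yourself.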
There was another interesting instance of inaccurate data reported by Lily Ray, a well-known SEO professional. In a LinkedIn post she shared an experience where Google’s AI Overview referred to her as a 9-year-old born in 2015. What confused Google’s Gemini AI tool was an FAQ on her website mentioning that she has a dog who was born in 2015 and is 9 years old. When that text used the word “she,” Gemini interpreted it as referring to Lily herself. A human would have been able to figure out from the context that “she” referred to the dog.
As we enter this new world of AI search, we may need to double-check the data being delivered and help the AI tools get it right.