I Asked AI Chatbots to Help Me Shop. They All Failed

I test products and write reviews for my job. So I asked ChatGPT, Bard, and Bing Chat to recommend headphones—and I…

Clarity of sourcing is only going to become more important, as will creating real consequences for AI being wrong or being used to mislead consumers. The consequence of me and my colleagues being bad at our jobs is that everyone disagrees with us, advertisers flee, and we lose our credibility. But in a world where AI is parsing our words to create its own recommendations, it seems plausible that bad opinions could more easily leak, or be manipulated, into the system.

Sridhar Ramaswamy of AI-based search startup Neeva notes that using ChatGPT will require independent verification. “The default for ChatGPT is that you can’t really believe the answers that come out. No reasonable person can figure out what is true and what is fake,” Ramaswamy says. “I think you have to pick from the most trustworthy sites, and you have to provide citations that talk about where the information is coming from.”

Some Things Borrowed

And yes, I can see a future in which much press-release journalism, the kind where outlets simply report announcements from politicians or companies, gets farmed out to AI. Some publishers are already using generative AI to write stories and cut labor costs, with the expected hilarious results, though as the technology improves it will surely get better at basic reporting.

But what does this all mean for you, the consumer of future AI-generated best-of lists? Who cares if we're living through our Napster moment! It's easy not to ask too many questions about provenance when you're getting every song you want. Even so, right now I'd say it's not worth trusting any AI-generated recommendation unless, like Bing's, it cites and links to its sources.

Angela Hoover from AI-based search startup Andi says all search results should prominently feature the sources they’re pulling from. “Search is going to be visual, conversational, and factually accurate. Especially in the age of generative search engines, it’s more important than ever to know where the information is coming from.”

Asking AI for recommendations and information about the human realm still requires human inputs; generative AI can only imitate the experience of holding and using a product. If outlets begin to replace their product reviews, buying guides, and best-of rankings with AI-generated lists, that's less original information for the AI to parse and generate from. It's easy to imagine certain product categories online, especially niche ones, becoming even more of an echo chamber for consumers than they're already criticized for being.

As search and AI combine, it's important that we keep relying on existing search rankings and other methods that help sort out bad sources. I simply ignore certain review sites, and Amazon ratings in general, because they're riddled with problems like fake reviews. If AI doesn't apply the same discretion, and if those of us at major review outlets don't chime in, or chime in less because AI is taking our jobs, I don't see a rosy outcome for consumers.
