On top of that, he claims that Pearl is significantly less likely to provide false information than many other AI search engines, which he believes will face a "tidal wave" of lawsuits over the bad answers they provide. "Other players create incredible technology. I call them Ferraris or Lamborghinis," Kurtzig said. "We're building the first Volvo: safety."

Kurtzig seems confident that Pearl will enjoy the protections of Section 230. I asked the AI whether it agreed. Pearl said it likely qualifies as an "interactive computer service" under Section 230, which would shield it from being treated as a publisher, just as Kurtzig suspected. Still, Pearl's situation is unusual because it generates content using AI. When I asked to speak with a lawyer about it, Pearl rerouted me to JustAnswer, where I was asked to supply the answer I wanted verified. I said I would need to go back and copy the answer, since it ran several paragraphs, but when I returned to the Pearl site, the conversation was gone, reset to a fresh session.

When I tried again, I got a more equivocal answer, and the human reality check gave it a measly 3. Pearl recommended I get the opinion of a real expert and ported me over to its subscription page. (I had been given a login so I wouldn't have to pay while testing the tool.) It then connected me with one of its "Legal Eagle" experts.

Unfortunately, the lawyer's answer was no clearer than the AI's. He noted that there is an ongoing legal debate about how Section 230 will apply to AI search engines and other AI tools, but when I asked him to cite a specific argument, he gave a strange answer, stating that "most use shell companies or associations to file." When I asked for an example of such a shell company, wondering what it had to do with the public debate over Section 230, the Legal Eagle asked if I wanted them to put together a package. More confused than ever, I said yes. A pop-up window appeared indicating that my expert wanted to charge me an additional $165 to dig up the information. Frustrated, I declined.

I then asked Pearl about WIRED's history. The AI's response was serviceable, though it mostly repeated what you'd find on Wikipedia. When I asked for its TrustScore™, I was once again handed a 3, suggesting it wasn't a very good answer. I chose the option to connect with a human expert. This time, probably because the question was about media rather than a straightforward legal or medical topic, it took a while for the expert to appear, more than 20 minutes. When he did, the expert (I never established what his media bona fides were, though his profile indicated he had been working with JustAnswer since 2010) gave the same answer as the AI. Since I was testing the service for free it didn't much matter, but I would have been annoyed if I had paid a subscription fee just to get mediocre answers from both the humans and the AI.

For my last stab at using the service, I went with a more straightforward question: how to refinish a kitchen floor. This time, things went much more smoothly. The AI produced a reasonable answer, similar to the transcript of a very basic YouTube tutorial. When I asked human experts for a TrustScore™, they gave it a 5, which seemed about right. But as someone who genuinely wants to DIY-refinish the old pine boards in my kitchen, I think that when I'm actually looking for guidance, I'll turn to the online communities of human voices that don't charge $28 a month: YouTube and Reddit.
If you end up trying Pearl, or any other new AI search product, and have a memorable experience, let me know how it went in the comments below this article. You can also reach me by email at kate_knibbs@wired.com. Thanks for reading, and stay warm!