Months after the chatbot ChatGPT wowed the world with its uncanny ability to write essays and answer questions like a human, artificial intelligence (AI) is coming to Internet search.
Three of the world’s biggest search engines — Google, Bing and Baidu — last week said they will be integrating ChatGPT or similar technology into their search products, allowing people to get direct answers or engage in a conversation, rather than merely receiving a list of links after typing in a word or question. How will this change the way people relate to search engines? Are there risks to this form of human–machine interaction?
Microsoft’s Bing uses the same technology as ChatGPT, which was developed by OpenAI of San Francisco, California. But all three companies are using large language models (LLMs). LLMs create convincing sentences by echoing the statistical patterns of text they encounter in a large database. Google’s AI-powered search engine, Bard, announced on 6 February, is currently in use by a small group of testers. Microsoft’s version is widely available now, although there is a waiting list for unfettered access. Baidu’s ERNIE Bot will be available in March.
Before these announcements, a few smaller companies had already released AI-powered search engines. “Search engines are evolving into this new state, where you can actually start talking to them, and converse with them like you would talk to a friend,” says Aravind Srinivas, a computer scientist in San Francisco who last August co-founded Perplexity — an LLM-based search engine that provides answers in conversational English.
The intensely personal nature of a conversation — compared with a classic Internet search — might help to sway perceptions of search results. People might inherently trust the answers from a chatbot that engages in conversation more than those from a detached search engine, says Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland.
A 2022 study1 by a team based at the University of Florida in Gainesville found that for participants interacting with chatbots used by companies such as Amazon and Best Buy, the more they perceived the conversation to be human-like, the more they trusted the organization.
That could be beneficial, making searching faster and smoother. But an enhanced sense of trust could be problematic given that AI chatbots make mistakes. Google’s Bard flubbed a question about the James Webb Space Telescope in its own tech demo, confidently answering incorrectly. And ChatGPT has a tendency to create fictional answers to questions to which it doesn’t know the answer — known by those in the field as hallucinating.
A Google spokesperson said Bard’s error “highlights the importance of a rigorous testing process, something that we’re kicking off this week with our trusted-tester programme”. But some speculate that, rather than increasing trust, such errors, assuming they are discovered, could cause users to lose confidence in chat-based search. “Early perception can have a very large impact,” says Sridhar Ramaswamy, a computer scientist based in Mountain View, California, and chief executive of Neeva, an LLM-powered search engine launched in January. The mistake wiped $100 billion from Google’s value as investors worried about the future and sold stock.
Compounding the problem of inaccuracy is a comparative lack of transparency. Typically, search engines present users with their sources — a list of links — and leave them to decide what they trust. By contrast, it’s rarely known what data an LLM was trained on — is it Encyclopaedia Britannica or a gossip blog?
“It’s completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation,” says Urman.
If search bots make enough errors, then, rather than increasing trust with their conversational ability, they have the potential to unseat users’ perceptions of search engines as impartial arbiters of truth, Urman says.
She has conducted as-yet unpublished research that suggests current trust is high. She examined how people perceive existing features that Google uses to enhance the search experience, known as ‘featured snippets’, in which an extract from a page that is deemed particularly relevant to the search appears above the link, and ‘knowledge panels’ — summaries that Google automatically generates in response to searches about, for example, a person or organization. Almost 80% of people Urman surveyed deemed these features accurate, and around 70% thought they were objective.
Chatbot-powered search blurs the distinction between machines and humans, says Giada Pistilli, principal ethicist at Hugging Face, a data-science platform in Paris that promotes the responsible use of AI. She worries about how quickly companies are adopting AI advances: “We always have these new technologies thrown at us without any control or an educational framework to know how to use them.”
This article is reproduced with permission and was first published on February 13, 2023.