After being fed dozens of hours of video of a growing child exploring his world, an artificial intelligence model could more often than not associate words—ball, cat and car, among others—with their images, researchers report in the Feb. 2 Science. This AI feat, the team says, offers a new window into the mysterious ways that humans learn words.
The new model keeps things simple, and small — a departure from many of the large language models, or LLMs, that underlie today's chatbots. Those models learned to talk from enormous pools of data. But that's not how humans learn words. "The input to a child isn't the entire internet like some of these LLMs. It's their parents and what's being provided to them," Vong says.
To narrow the inputs down from the entirety of the internet, Vong and his colleagues trained an AI program with the actual experiences of a real child, an Australian baby named Sam. The researchers' AI program — a type called a neural network — used about 60 hours of Sam's recorded experiences, connecting objects in Sam's videos to the words he heard caregivers speak as he saw them. The researchers gave the model a word — crib, for instance. Then the model was asked to find the picture that contained a crib from a group of four pictures. The model landed on the right answer about 62 percent of the time. Random guessing would have yielded correct answers 25 percent of the time.
To see how well the AI program learned words from video and audio input, the researchers used a multiple-choice test: from each set of four images, the model had to identify the one image that contained a specific object. In multiple tests covering a set of 22 words, the model chose the right object more than 60 percent of the time.
Some theories of language learning hold that humans are born with specialized knowledge that allows us to soak up words, says Evan Kidd, a psycholinguist who was not involved in the study. The new work, he says, is "an elegant demonstration of how infants may not necessarily need a lot of in-built specialized cognitive mechanisms to begin the process of word learning."
(Adapted and abridged from the Science News website.)
1. What can we learn about the AI model from the first two paragraphs?
A. It played a role in the baby's growth.
B. It's a machine accessible to children.
C. It is based on current large language models.
D. It holds clues for humans language learning.
2. Why was an Australian baby's experience used?
A. To reduce the data scale.
B. To stress the unique training.
C. To show the authentic learning process.
D. To increase the accuracy and credibility.
3. What was the model supposed to do in the study?
A. Find the image of a crib.
B. Learn new words like a baby.
C. Match pictures with correct words.
D. Analyse a baby's learning pattern.
4. What's Evan Kidd's attitude toward the study?
A. Unclear. B. Approving.
C. Dismissive. D. Tolerant.
1. D. Explanation: detail-comprehension question. The first paragraph states that "This AI feat, the team says, offers a new window into the mysterious ways that humans learn words," which shows that the AI model helps reveal how humans learn vocabulary. Option D, "It holds clues for humans' language learning," matches the passage, so D is correct.
2. A. Explanation: detail-comprehension question. The first sentence of the third paragraph states, "To narrow the inputs down from the entirety of the internet, Vong and his colleagues trained an AI program with the actual experiences of a real child, an Australian baby named Sam." This shows that the Australian baby Sam's actual experiences were used in order to reduce the scale of the data. Option A, "To reduce the data scale," matches the passage, so A is correct.
3. C. Explanation: detail-comprehension question. The third paragraph states, "The researchers gave the model a word — crib, for instance. Then the model was asked to find the picture that contained a crib from a group of four pictures." Option C, "Match pictures with correct words," matches the passage, so C is correct.
4. B. Explanation: attitude question. The last sentence of the fifth paragraph quotes Kidd as calling the new work "an elegant demonstration of how infants may not necessarily need a lot of in-built specialized cognitive mechanisms to begin the process of word learning," which shows that Kidd approves of the study. Option B, "Approving," matches the passage, so B is correct.