UC Merced Magazine | Volume XX, Issue VI


Surprise, Surprise

Language — how we acquire it, how we use it, how we understand it — is a big part of cognitive and information sciences research. Ryskin and Professor David Noelle are using LLMs to estimate, as a person listens to a sentence, how surprising the next word is. The research uses LLMs, trained on tens of billions of words and phrases harvested from the internet, to estimate the probabilities of which words can follow another. That information is compared to how a subject responds to a sentence like "I'll take my coffee with non-fat dog." That last word triggers pulses that are picked up by sensors on the subject's scalp.

"It turns out there are various parts of the brain where activity seems to be sensitive to that level of surprise," Noelle said.

Furthermore, the processing power of LLMs allows researchers to draw parallels between machine and human language processing. "In English, word order matters. These learning systems are powerful enough that they can pull out these irregularities just from being forced to predict, predict, predict," Noelle said.

What's Next? How About a Team-up?

Looking to the future, imagine a team of LLMs working together to solve a big problem. One is trained in chemistry, another in engineering, a third in biology. Link them up with some human researchers. Give them all a task. Aim high.

"So the prompt would be like, 'We're trying to cure cancer. Get to work on it. Talk to us,'" said Kello, imagining a machine-human hybrid team tackling some of our biggest issues.

"LLMs could provide the thinking but we would provide the intelligence," Kello said. "We don't seem far from this happening. And that just blows my mind."
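The "surprise" measure the researchers describe is what language scientists call surprisal: the negative log of a word's probability given its context. A minimal sketch in Python, using made-up next-word probabilities (real studies would take these from an LLM's output, not a hand-written table):

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of the next-word probability.
    Rare (low-probability) words yield high surprisal."""
    return -math.log2(prob)

# Hypothetical probabilities for the word after
# "I'll take my coffee with non-fat ..." (illustrative numbers only)
next_word_probs = {"milk": 0.60, "cream": 0.30, "dog": 0.0001}

for word, p in next_word_probs.items():
    print(f"{word}: {surprisal(p):.1f} bits")
```

An expected word like "milk" scores under one bit, while "dog" scores over thirteen — the kind of spike that, per Noelle, shows up in brain activity recorded at the scalp.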

Studying how AI processes knowledge can provide insights into how the human mind works.


