||| FROM THE TELEGRAPH |||
The launch of a new artificial intelligence model has brought humans closer to understanding dolphins, experts claim.
Google DeepMind’s DolphinGemma was trained on the world’s largest collection of vocalisations from Atlantic spotted dolphins, recorded over several years by the Wild Dolphin Project.
It is hoped the recently launched large language model will be able to pick out hidden patterns, potential meanings and even language from the animals’ clicks and whistles. Dr Denise Herzing, the founder and research director of the Wild Dolphin Project, said: “We do not know if animals have words.
“Dolphins can recognise themselves in the mirror, they use tools, so they’re smart but language is still the last barrier.
“So feeding dolphin sounds into an AI model will give us a really good look at if there are patterns, subtleties that humans can’t pick out.
“You’re going to understand what priorities they have, what they are talking about,” she said.
“The goal would someday be to ‘speak dolphin’, and we’re really trying to crack the code. I’ve been waiting for this for 40 years.”
Mothers often use specific noises to call their calves back, while fighting dolphins emit burst-pulses, and those courting or chasing sharks make buzzing sounds.
For decades, researchers have been trying to decode the chatter, but monitoring pods over vast distances produces far more sound than human listeners can sift for patterns.
‘Human and dolphin communication’
The new AI searches through thousands of sounds that have been linked to behaviour, trying to find sequences that could indicate words or language.
Dr Thad Starner, a Google DeepMind research scientist, said: “By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins’ natural communication – a task previously requiring immense human effort.
“We’re not just listening any more. We’re beginning to understand the patterns within the sounds, paving the way for a future where the gap between human and dolphin communication might just get a little smaller.
“We can keep on fine-tuning the model as we go and hopefully get better and better understanding of what dolphins are producing.”
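Neither the article nor the quotes above describe how the model actually works, but the idea of “recurring sound patterns, clusters and reliable sequences” can be sketched in a few lines. The toy Python example below is purely illustrative: it uses synthetic feature vectors, an arbitrary cluster count of 8, and trigrams, with off-the-shelf k-means standing in for whatever DolphinGemma really does. It discretises continuous sounds into cluster labels and then counts recurring short label sequences.

```python
# Toy illustration of pattern mining in vocalisation data: cluster
# acoustic feature vectors into discrete "sound types", then count
# recurring short sequences of those types. All data and parameters
# here are illustrative assumptions, not DolphinGemma's design.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for per-vocalisation acoustic features (e.g. spectrogram
# summaries); a real pipeline would extract these from recordings.
features = rng.normal(size=(500, 16))

# Step 1: discretise the continuous sounds into cluster labels.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)

# Step 2: count recurring n-grams of labels; frequent, reliable
# sequences would be candidates for meaningful structure.
def ngram_counts(seq, n=3):
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

for gram, count in ngram_counts(labels.tolist()).most_common(5):
    print(gram, count)
```

In a real study, the candidate sequences would then be tested against the behavioural annotations the article mentions (calf recall, fighting, courtship); this sketch only gestures at what “finding patterns” can mean computationally.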
Given that dolphins and whales have been around far longer than humans, it would perhaps not be surprising if we humans inherited our ability to communicate with language from our common ancestors.
The article says “You’re going to understand what priorities they have, what they are talking about.” I would imagine some of what they talk about is how utterly disastrous Homo sapiens sapiens has been to the lives of dolphins.
Maybe all they’ll say is “So long, and thanks for all the fish”.
But more seriously, the main value of that article is to provoke thought: on what dolphins might have to say, on whether they “think” the same way we do, and on what the concept of “language” even really means. Right now, though, the only thing Google’s AI has achieved is the ability to transform a set of sounds optimized for air and human vocal tracts into sounds suited to water and dolphin “vocal tracts”, and back, and to replay them on demand: a fancy tape recorder that works in both air and water. Anything more than that is the popular press jumping the gun.