||| FROM THE TELEGRAPH |||


The launch of a new artificial intelligence model has brought humans closer to understanding dolphins, experts claim.

Google DeepMind’s DolphinGemma has been trained on the world’s largest collection of vocalisations from Atlantic spotted dolphins, recorded over several years by the Wild Dolphin Project.

It is hoped the recently launched large language model will be able to pick out hidden patterns, potential meanings and even language from the animals’ clicks and whistles. Dr Denise Herzing, the founder and research director of the Wild Dolphin Project, said: “We do not know if animals have words.

“Dolphins can recognise themselves in the mirror, they use tools, so they’re smart but language is still the last barrier.

“So feeding dolphin sounds into an AI model will give us a really good look at if there are patterns, subtleties that humans can’t pick out.

“You’re going to understand what priorities they have, what they are talking about,” she said.

“The goal would someday be to ‘speak dolphin’, and we’re really trying to crack the code. I’ve been waiting for this for 40 years.”

Mothers often use specific noises to call their calves back, while fighting dolphins emit burst-pulses, and those courting or chasing sharks make buzzing sounds.

For decades, researchers have been trying to decode the chatter, but the sheer volume of recordings gathered from pods across vast distances has made it too difficult for humans to detect patterns unaided.

‘Human and dolphin communication’

The new AI is designed to search through thousands of sounds that have been linked to behaviour to try to find sequences that could indicate words or language.

Dr Thad Starner, a Google DeepMind research scientist, said: “By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins’ natural communication – a task previously requiring immense human effort.

“We’re not just listening any more. We’re beginning to understand the patterns within the sounds, paving the way for a future where the gap between human and dolphin communication might just get a little smaller.

“We can keep on fine-tuning the model as we go and hopefully get better and better understanding of what dolphins are producing.”

