#23 | Elephants Name Each Other
Apple's AI announcements, the oldest malaria cases, and more
Hello fellow curious minds!
Welcome back to another edition of The Aurorean.
We’re quickly approaching our newsletter’s 6-month anniversary!
Whether you are a recent or long-time reader, we’re thankful for each and every subscriber. Your continued support motivates us to craft the most valuable newsletter possible each week, and we're grateful to be connected with like-minded folks from around the world.
If you value our work, it would mean the world to us if you shared our newsletter with someone you think will value it too: a friend, colleague, or family member.
This simple gesture helps us grow and supports our ability to provide this service to you indefinitely.
With that said, are you wondering what the world of STEM discovered last week?
Let’s find out.
Quote of the Week 💬
Elephants Appear To Use Names When They Call Each Other
“Our data suggest that elephants do not rely on imitation of the receiver’s calls to address one another, which is more similar to the way in which human names work.”
⌛ The Seven Second Summary: Scientists at Colorado State University have found evidence wild African elephants use name-like calls to address each other.
🔬 How It Was Done:
The researchers spent 14 months in the field in Kenya observing wild elephants and recording their vocal calls.
They used a machine learning algorithm to analyze 469 distinct vocal calls from 101 unique callers. They identified structures and patterns in how the elephants communicate with each other, both in groups and with individual members.
One pattern identified by their model suggests the elephants use distinct vocal calls to address individuals who are far from the herd, yet use different types of calls when communicating with groups.
🧮 Key Results:
When the researchers played distinct vocal calls other elephants used to communicate with a specific individual, the individual would respond to them.
However, when the researchers played calls that were originally used to address other elephants, the same individual would not respond.
Thus, the researchers infer elephants associate unique vocal patterns with specific individuals, much like how people use distinct speech patterns as names for one another.
💡 Why This May Matter: While elephants and primates diverged millions of years ago in their evolutionary history, our species may still share similar communication methods. Dolphins, parrots, and other animals that humans have less in common with often direct their communication to specific individuals by imitating the recipient's vocal patterns, or through other means.
🔎 Elements To Consider: Bioacoustics is one of many research fields flourishing because of recent advancements in machine learning, and this trend will likely accelerate as AI technology continues to advance. Less than a month ago, another study demonstrated how crows are capable of counting numbers, and we have previously mentioned a separate research effort to understand how sperm whales communicate. The consortium of researchers who expanded their field’s declaration of animal consciousness earlier this year were prescient to do so, and it will be interesting to see how society’s perception of animals changes as these discoveries compound.
📚 Learn More: Colorado State University. Nature 1. Nature 2.
Stat of the Week 📊
Apple Announces Plans To Bring Intelligence To Its Devices
18
⌛ The Seven Second Summary: Apple announced its forthcoming plans to bring artificial intelligence capabilities to its ecosystem of products during its developer conference last week.
🔬 How It Was Done: A lot of information was shared during the conference, although Apple’s rollout can largely be broken down into two main parts:
On-Device Processing & Apple Services: To keep customer data within the Apple ecosystem, the company will embed its own AI models into its devices' operating systems to perform tasks such as language and image processing. For Apple services like Siri and Photos, data can be sent to Apple's cloud servers for processing and storage.
Third-Party Integrations: For certain requests, customers can opt in to use third-party AI models, such as OpenAI's ChatGPT. When this happens, user data is first sent to Apple's servers, which encrypt and anonymize attributes that could be used for identification, such as IP addresses. The encrypted data is then forwarded to the third party's servers to complete the requested task. Once the task is complete, Apple says the third party will not store the request for future use, which presumably means Apple has some process in place to monitor third parties and ensure they adhere to their contractual agreements.
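To make the data flow above concrete, here is a minimal sketch of the opt-in request pipeline as described. Every function name and the anonymization details are hypothetical stand-ins of our own; Apple has not published this interface.

```python
# Hypothetical sketch of the opt-in third-party request flow described above.
# Function names and anonymization details are illustrative assumptions,
# not Apple's actual implementation.
import hashlib

def anonymize(request: dict) -> dict:
    """Strip identifying attributes (e.g. the IP address) before the
    request leaves Apple's servers, replacing them with an opaque token."""
    cleaned = {k: v for k, v in request.items() if k != "ip_address"}
    cleaned["session_token"] = hashlib.sha256(
        request.get("ip_address", "").encode()
    ).hexdigest()[:16]  # non-reversible stand-in for the raw IP
    return cleaned

def forward_to_third_party(request: dict) -> str:
    """Stand-in for the third-party model call (e.g. ChatGPT).
    Per Apple, the provider should not store the request afterwards."""
    return f"response to: {request['prompt']}"

def handle_opt_in_request(request: dict) -> str:
    # 1. Data first passes through Apple's servers for anonymization.
    safe_request = anonymize(request)
    # 2. Only the sanitized request is forwarded for processing.
    return forward_to_third_party(safe_request)
```

The key design point the announcement emphasizes is the ordering: identification attributes are removed before anything reaches the third party, so the provider only ever sees the sanitized request.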
🧮 Key Results: The beta version of Apple’s upgrades will roll out to iOS 18, iPadOS 18, and macOS Sequoia devices in the fall. This includes:
A range of capabilities, such as text generation and summarization, voice-to-text and text-to-voice transcriptions, image generation, and the ability to create custom emojis.
Siri will be upgraded with advanced functionality to resemble the experience of state-of-the-art multi-modal models like GPT-4o and Gemini 1.5 Pro. This includes enhanced language processing capabilities, on-screen awareness, and other conversational skills to make Siri more intuitive and compelling to use.
💡 Why This May Matter: Microsoft and Apple have already dedicated many resources to crafting small, efficient, and robust AI models that run on local devices, rather than relying on cloud-based services from third-party providers where latency, security, and privacy issues may arise. Now that Apple has publicly shared its AI strategy, even more investment will be directed towards enabling a personalized and competent virtual assistant.
🔎 Elements To Consider: Apple’s ChatGPT integration received most of the attention immediately after the news was announced, although one of the company’s Senior Vice Presidents confirmed Apple intends to build partnerships with other third-party models as well. This should not be surprising: Microsoft Azure, Amazon Web Services, and every other major cloud provider offers a suite of AI models to its customers in order to build the healthiest and most comprehensive developer ecosystem possible.
📚 Learn More: Apple. On-Device Models.
AI x Science 🤖
Credit: Crissy Jarvis on Unsplash
Compounding And Converging Approaches To AI Reasoning
We have previously highlighted work from Tencent AI researchers to develop AlphaLLM, an AI framework built upon a Monte Carlo Tree Search algorithm and reinforcement learning mechanisms to iteratively search for and refine a Large Language Model’s (LLM) response to text-based questions. This framework follows an approach similar to the one DeepMind’s AlphaGo used in 2016 to become the first AI model to defeat the world's best Go players, and a similar approach may be necessary to achieve superhuman reasoning capabilities from AI models in the future.
More recently, a team from Shanghai AI Laboratory shared research on a similar framework they used to improve the mathematical reasoning skills of LLMs. Their system is called MCT Self-Refine, and it functions by breaking down complex mathematical reasoning problems into a step-by-step process for the LLM to follow. Before the LLM progresses from one step to the next, a verification system searches for faulty reasoning and prompts the LLM to try again if a mistake is found. Over time, this recursive feedback loop teaches the model to avoid poor reasoning that leads to incorrect answers.
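The verify-and-retry loop at the heart of this process can be sketched in a few lines. This is a deliberately simplified illustration with toy stand-ins for the model and the verifier; the actual MCT Self-Refine system also uses Monte Carlo Tree Search to choose which refinement to explore, which is omitted here.

```python
# Simplified sketch of a verify-and-refine loop. `llm_step` and
# `verifier` are toy stand-ins, not the paper's actual components.
def llm_step(problem: int, attempt: int) -> int:
    """Toy 'model': proposes a candidate answer for this reasoning step."""
    return problem + attempt

def verifier(candidate: int, target: int) -> bool:
    """Checks a candidate step for faulty reasoning (here: wrong value)."""
    return candidate == target

def self_refine(problem: int, target: int, max_tries: int = 10):
    """Retry the step until the verifier accepts it, mimicking the
    recursive feedback loop that steers the model away from mistakes."""
    for attempt in range(max_tries):
        candidate = llm_step(problem, attempt)
        if verifier(candidate, target):
            return candidate  # verified step; proceed to the next one
    return None  # the verifier never accepted a candidate
```

The design choice worth noticing is that the verifier, not the generator, decides when a step is finished; progress is gated on passing verification rather than on the model's own confidence.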
When the team tested their feedback system on a small model, the AI system improved its ability to solve Olympiad-level math problems from 1% to nearly 8%. The small model they tested their framework on was also able to solve other types of difficult math problems at a comparable level to AI models with 200x the amount of parameters, which demonstrates why Monte Carlo Tree Search algorithms are becoming more popular amongst the largest AI research labs.
For example, Google Deepmind shared a paper earlier this month where they designed a similar framework, except they developed an automated verification system to improve the mathematical reasoning performance of their Gemini Pro model by 36%. This is noteworthy because most AI systems are dependent on humans to provide feedback and alert the system of reasoning mistakes. As research labs discover new ways to automate the feedback mechanisms for AI models to refine their reasoning skills, they will be able to scale the rate of improvement in their foundational models across a variety of deterministic subjects. The rate of improvement will likely be slower when models need to complete subjective tasks, but there will likely be ways to reliably automate these domain areas as well.
Furthermore, a team from Arizona State University shared research last week demonstrating how AI systems become more robust if LLMs are used to generate ideas and a separate system is used afterwards to reason through the possibilities the LLM provides and reach logical conclusions. This is reminiscent of the approach Google DeepMind used to train an AI system on complex geometry problems, which resulted in their model performing at nearly the level of an Olympiad gold medalist. It is also similar to the approach NVIDIA researchers followed earlier this year to simulate multiple ways for their robots to reason through a setting where they need to maintain mobility and balance in real-time.
There are plenty of other noteworthy methods researchers are following to improve the reasoning abilities of AI systems. We will cover them all in due time. Until then, the main takeaway is that there are many promising, yet underdeveloped ways to improve model reasoning. The world’s preeminent research labs are converging on approaches where one subsystem searches for and generates ideas, while other subsystems test, evaluate, and provide feedback on the ideas received. Following this basic framework may eventually lead to superhuman reasoning capabilities, though time will ultimately tell.
Our Full AI Index
New Benchmarks To Measure AI Performance: A variety of notable benchmarks have been released to evaluate the performance of AI models, including ARC from Lab 42, LiveBench from Abacus AI, and a revamped version of ML-Bench. These benchmarks test different aspects of AI capabilities, such as reasoning, coding, and language comprehension, and their questions are designed to accentuate the weaknesses of AI systems. Hopefully more comprehensive benchmarks continue to be developed, so that people can easily assess the capabilities of different models, and so that the people developing LLMs receive better direction on how to craft solutions that complete useful tasks in work and personal settings.
Helping Doctors Save Lives & Make Better Decisions: Researchers at Mount Sinai Hospital developed an AI-based system to send alerts to doctors and nurses when a patient's health is at risk of deteriorating. The algorithm uses machine learning to analyze patient data and predict clinical deterioration, allowing for earlier intervention and better patient outcomes. In a study of 2,740 patients, the system reduced the risk of death by 43% and resulted in far more timely interventions. Mount Sinai. Critical Care Medicine.
Other Observations 📰
Credit: National Institute of Allergy and Infectious Diseases on Unsplash
The Oldest Malaria Cases Known To Science
Researchers from Harvard University recently detected the earliest cases of malaria known to science. The team analyzed over 10,000 ancient human genomes and found 36 cases of malaria from around the world.
To achieve this feat, the team retrieved preserved DNA samples from a freezer and sequenced the genomes to create numerous fragmented copies of ancient DNA. Then, they employed a series of techniques to identify, isolate, and analyze individual strands of malaria DNA hidden among millions of other human and bacterial DNA fragments — a daunting task akin to finding a needle in a haystack.
The results of their efforts found soldiers buried in Belgium in the early 1700s with the disease, a strain of the disease in Nepal from 2,800 years ago, and a case in South America from around 1600 C.E. The earliest case the team found, however, was a man who died 5,600 years ago in Germany with fragments of malaria DNA in his remains.
Since the team found many variants of the disease in different parts of the world, they concluded that malaria can spread in various climates, not just in tropical regions where it's most common today. While there's still much work to be done to understand the history of malaria, every piece of data gathered brings us closer to eradicating the disease within our lifetime. Science. Nature.
Our Full Science Index
The ROI Of Disease Research: Researchers at Policy Cures Research found that nearly $100 billion was invested in research and development for “neglected diseases” between 1994 and 2022. This includes drugs, diagnostics, vaccines, and other resources to combat infectious diseases that primarily impact non-wealthy nations. These investments have saved over 8 million lives since 2000 and reduced the risk of death by more than 30%. If these investments continue, the team projects more than 32 million additional lives can be saved by 2040. Vox. Policy Cures Research.
Glowing Dye For Cancer Surgery: Scientists at the University of Oxford developed a fluorescent dye to help surgeons see cancerous tissue more clearly during prostate cancer surgery. In a study of 23 men, the dye successfully identified cancerous cells that spread from the tumor, which allowed the surgeons to more easily remove all cancerous tissue without affecting healthy tissue. This proof of concept has the potential to greatly improve the success rate of oncological surgeries, and we’ll monitor other examples of its use moving forward. University of Oxford. Journal.
Media of the Week 📸
Teaching Humanoid Robots How To Do Parkour
Researchers from Carnegie Mellon University developed a framework called WoCoCo to teach robots complex parkour movements and interactions without needing to be specifically programmed for each task. Their approach uses reinforcement learning to teach their robots, which is not too dissimilar from the Dr. Eureka project we mentioned in our AI x Science section, and it reminds us of another general-purpose robotics study that was released at the beginning of the month. As we have mentioned before, the field of robotics has recently found effective ways to train machines through general-purpose models rather than relying on models designed for specific tasks, which should lead to rapid improvements in mobility and capability in a short time. arXiv. Github.
Mapping The Neural Circuitry Of Fruit Flies
Scientists at EPFL are discovering how fruit flies' brains turn simple neural signals into complex actions like walking and flying. They managed to do this by activating specific neurons and mapping their connections to other parts of the brain. This technique helped the team identify networks of neurons that work together to control complex behaviors in the animal. Similar techniques may be developed to map the minds of more complex entities, such as rodents, primates, and neural networks in AI systems. However, since many animals have far more complex neural circuitry than fruit flies, simulation studies may serve as a proxy for research teams instead. EPFL. Nature.
This Week In The Cosmos 🪐
June 20: The solstice. It marks the first day of summer in the Northern Hemisphere and the first day of winter in the Southern Hemisphere.
June 22: A full moon.
Credit: Alexis Antonio on Unsplash
🔊 Announcement 🔊
If you are working on an AI project and refining your strategy to implement the technology, or if you are experiencing challenges with developing or integrating an AI system into your work, we encourage you to book time with our team. As we continue to expand our network and expertise in the field, one of our goals is to help people achieve their ambitions with emerging technology adoption and innovation.
That’s all for this week! Thanks for reading.