Scientists have just discovered the oldest documented case of live birth: in a "terrible-headed lizard" that lived 250 million years ago.

Professor Jonathan Aitchison of the University of Queensland, the author of a new study documenting the intriguing fossil found in China, said the specimen unexpectedly provided the first evidence for live birth in an animal group previously thought to exclusively lay eggs.

"Live birth is well known in mammals, where the mother has a placenta to nourish the developing embryo," Aitchison said.

"Live birth is also common among lizards and snakes, where the babies sometimes 'hatch' inside their mother and emerge without a shelled egg."


Until recently it was thought the third major group of living land vertebrates, crocodiles and birds - part of the wider group Archosauromorpha - only laid eggs.

"Indeed, egg-laying is the primitive state, seen at the base of reptiles, and in their ancestors such as amphibians and fishes."

The new fossil is an unusual, long-necked marine animal called Dinocephalosaurus that flourished in shallow seas of South China in the Middle Triassic Period.

The creature was a fish eater, snaking its long neck from side to side to snatch its prey.

Its fossil was one of many astonishingly well-preserved specimens from new Luoping biota locations in southwestern China.

You won't believe your eyes and ears

Visual speech mismatched with auditory speech can result in the perception of an entirely different message. Photo / 123RF

Seeing is not always believing - and visual speech (mouth movements) mismatched with auditory speech (sounds) can result in the perception of an entirely different message.

This mysterious illusion is known as the McGurk effect.

Now, neuroscience researchers have created an algorithm to reveal key insight into why the brain can sometimes muddle up one of the most fundamental aspects of the human experience.

The findings will be useful in understanding patients with speech perception deficits and in building computers able to understand auditory and visual speech.

"All humans grow up listening to tens of thousands of speech examples, with the result that our brains contain a comprehensive mapping of the likelihood that any given pair of mouth movements and speech sounds go together," said Dr Michael Beauchamp, professor of neurosurgery at Baylor College of Medicine in Texas.

"In everyday situations we are frequently confronted with multiple talkers emitting auditory and visual speech cues, and the brain must decide whether or not to integrate a particular combination of voice and face."

Even though our senses are constantly bombarded with information, our brain effortlessly selects the verbal and nonverbal speech of our conversation partners from this cacophony.

The McGurk effect is an example of when this goes wrong - and when mouth movements that are seen can override what is heard, causing a person to perceive a different sound than what is actually being said.

Only with the eyes closed, so that just the sound is heard, can the correct message be perceived.

The team created an algorithmic model of multisensory speech perception based on the principle of "causal inference".

This means that, given a particular pair of auditory and visual syllables, the brain calculates the likelihood they are from either single or multiple talkers, and uses this likelihood to determine the final speech perception.

Their results suggested a fundamental role for a causal inference type calculation going on in the brain during multisensory speech perception.
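The causal-inference idea described above can be illustrated in a few lines of code. This is a toy sketch, not the researchers' actual model: the one-dimensional "syllable feature" axis, the noise levels, and the flat likelihood for separate talkers are all illustrative assumptions.

```python
import math

def gaussian(x, mu, sigma):
    """Probability density of a normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def perceive(auditory_cue, visual_cue, sigma_a=1.0, sigma_v=1.5, prior_common=0.5):
    """Decide whether auditory and visual syllable cues share one cause (one talker).

    Cues are points on an assumed 1-D syllable feature axis - a toy stand-in
    for the brain's high-dimensional mapping of sounds and mouth movements.
    """
    # Likelihood that both cues arose from a single shared source: for
    # Gaussian noise this depends only on how far apart the cues are.
    sigma_combined = math.sqrt(sigma_a ** 2 + sigma_v ** 2)
    like_common = gaussian(auditory_cue - visual_cue, 0.0, sigma_combined)

    # Under independent causes, any separation is about equally likely;
    # a flat likelihood over an assumed 10-unit feature range is used here.
    like_separate = 1.0 / 10.0

    # Posterior probability that the cues come from one talker.
    p_common = (prior_common * like_common) / (
        prior_common * like_common + (1 - prior_common) * like_separate)

    # Model averaging: fuse the cues (reliability-weighted) when a common
    # cause is likely, otherwise fall back on the auditory cue alone.
    fused = (auditory_cue / sigma_a ** 2 + visual_cue / sigma_v ** 2) / (
        1 / sigma_a ** 2 + 1 / sigma_v ** 2)
    percept = p_common * fused + (1 - p_common) * auditory_cue
    return p_common, percept
```

With closely matched cues the model judges "one talker" and fuses them (a McGurk-style blend); with widely mismatched cues it keeps the auditory signal, mirroring the brain's decision about whether to integrate a particular voice and face.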

Green tea: the secret ingredient to wearable tech?

Could green tea be the missing ingredient in wearable tech? Photo / 123RF

Wearable electronics are already here - and the most prominent versions are sold in the form of watches or sports bands.

But soon, more comfortable products could become available in softer materials, made in part with an unexpected ingredient: green tea.

Researchers have just reported a new flexible and compact rechargeable energy storage device for wearable electronics that is infused with green tea polyphenols.

Powering soft wearable electronics with a long-lasting source of energy remains a big challenge.

Supercapacitors could potentially fill this role - they meet the power requirements, and can rapidly charge and discharge many times.

But most supercapacitors are rigid, and the compressible supercapacitors developed so far have run into roadblocks.

They have been made with carbon-coated polymer sponges, but the coating material tends to bunch up and compromise performance.

A team of US researchers tried something different: preparing polymer gels in green tea extract, which infuses the gel with polyphenols.

The polyphenols converted a silver nitrate solution into a uniform coating of silver nanoparticles.

Thin layers of conducting gold and poly(3,4-ethylenedioxythiophene) were then applied.

And the resulting supercapacitor demonstrated power and energy densities of 2715 watts per kilogram and 22 watt-hours per kilogram - enough to operate a heart-rate monitor, LEDs or a Bluetooth module.

The researchers tested the device's durability and found that it performed well even after being compressed more than 100 times.