If we didn't have enough to worry about, scientists have worked out how far we'd have to be from a supernova to survive.
Researchers last year revealed evidence that our planet was buffeted a few million years ago by a supernova - the explosion of a massive star - and it has now been estimated that the event took place about 150 light years from Earth.
Still, University of Kansas physicist Professor Adrian Melott said a supernova exploding at such a range probably wouldn't touch off mass extinctions here.
"People estimated the 'kill zone' for a supernova in a paper in 2003, and they came up with about 25 light years from Earth," he said.
"Now we think maybe it's a bit greater than that."
The team's new estimates push the range of an apocalyptic supernova out to 40 or 50 light years.
"So, an event at 150 light years should have some effects here but not set off a mass extinction."
The closest potential supernova is Betelgeuse, about 600 light years away.
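A rough back-of-the-envelope check (an illustration, not a calculation from the study): the radiation a planet receives from a supernova falls off with the square of its distance, so an event at 150 light years delivers only about a ninth of the dose of the same event at the 50-light-year edge of the "kill zone".

```python
# Inverse-square comparison of supernova radiation doses.
# The distances are the illustrative figures quoted in the article.

def relative_dose(distance_ly: float, reference_ly: float = 50.0) -> float:
    """Dose at `distance_ly` relative to the same event at `reference_ly`."""
    return (reference_ly / distance_ly) ** 2

print(relative_dose(150.0))  # ~0.11: about a ninth of the 'kill zone' dose
```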
Space nukes' surprise spin-offs
Our Cold War history is now offering scientists a chance to better understand the complex space system that surrounds us.
Space weather - which can include changes in Earth's magnetic environment - is usually triggered by the sun's activity, but recently declassified data on high-altitude nuclear explosion tests have provided a new look at the mechanisms that set off perturbations in that magnetic system.
Now, scientists say such information can help support NASA's efforts to protect satellites and astronauts from the natural radiation inherent in space.
From 1958 to 1962, the US and USSR ran high-altitude tests with exotic code names like Starfish, Argus and Teak.
The long-discontinued tests, which detonated explosives at heights from 25km to 400km above the Earth's surface, mimicked space weather effects frequently caused by the sun.
Upon detonation, a first blast wave expelled an expanding fireball of plasma, a hot gas of electrically charged particles.
This created a geomagnetic disturbance, which distorted Earth's magnetic field lines and induced an electric field on the surface.
Some of the tests even created artificial radiation belts, akin to the natural Van Allen radiation belts, a layer of charged particles held in place by Earth's magnetic fields.
The artificially trapped charged particles remained in significant numbers for weeks, and in one case, years.
These particles, natural and artificial, can affect electronics on high-flying satellites - in fact, some satellites failed as a result of the tests.
"If we understand what happened in the somewhat controlled and extreme event that was caused by one of these human-made events," said Dr Phil Erickson, of MIT's Haystack Observatory, "we can more easily understand the natural variation in the near-space environment".
Is what you touch, what you buy?
Here's something to consider the next time you're in the supermarket: what you touch can affect what you buy.
Italian and Austrian scientists ran a series of experiments in which blindfolded people were induced, under the guise of another task, to grasp a familiar product such as a bottle of Coke; those people later recognised that product faster than other brands presented to them.
The researchers say that grasping an object can prime the "visual processing" - and hence the choice - of other products of similar shape and size.
"For instance, when you're holding your mobile phone in your hand, you may be more likely to choose a KitKat than a Snickers, because the KitKat is shaped more like your phone," explained study co-author Associate Professor Zachary Estes, of Milan's Bocconi University.
"What we find is that consumers are significantly more likely to choose the product that is similar to the shape of whatever is in their hand."
When confronted with a choice between a bottle of Coke and a can of Red Bull, participants who held a bottle of Fanta were more likely to choose a bottle of Coke, but those who held a can of Fanta more often chose the can of Red Bull.
"These studies show that our hands can lead us to choose certain products."
Even dumb AI can help us
Artificial intelligence doesn't have to be super-sophisticated to make a difference in your life - it seems even dumb AI can help us.
Yale University researchers carried out experiments using an online problem-solving game that required groups of people to coordinate their actions for a collective goal.
When robotic players with built-in defects were added into the mix, this boosted the performance of both human groups (by as much as 56 per cent) and individual human players.
Further, people whose performance improved when working with the bots subsequently influenced other human players to raise their game.
The study authors say their findings could have big implications for a range of situations in which people interact with AI, from future battlefields to motorways shared with autonomous vehicles.
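The article doesn't detail the game the Yale team used, so here is a purely hypothetical sketch of the underlying idea - that occasionally-random "dumb" bots can help a group escape a coordination deadlock. It uses an invented two-colour graph-colouring game; the function names and parameters are illustrative, not the study's.

```python
import random

def conflicts(colors, edges):
    """Count edges whose endpoints share a colour (a coordination failure)."""
    return sum(colors[a] == colors[b] for a, b in edges)

def play(edges, n, noise, steps=1000, seed=0):
    """Synchronous two-colour coordination game on a graph.

    Each round, every player switches to the colour that clashes with
    fewest neighbours; with probability `noise` a player instead picks
    at random (the 'dumb bot' behaviour). Returns the number of rounds
    taken to reach a conflict-free colouring, or None if it never does.
    """
    rng = random.Random(seed)
    colors = [0] * n                      # everyone starts identical
    neigh = {i: [] for i in range(n)}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    for step in range(steps):
        if conflicts(colors, edges) == 0:
            return step
        new = []
        for i in range(n):
            if rng.random() < noise:
                new.append(rng.randrange(2))       # random "mistake"
            else:
                clash0 = sum(colors[j] == 0 for j in neigh[i])
                clash1 = sum(colors[j] == 1 for j in neigh[i])
                new.append(0 if clash0 < clash1 else 1)
        colors = new
    return None if conflicts(colors, edges) == 0 else None

ring = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4 players in a ring
print(play(ring, 4, noise=0.0))           # None: pure best response oscillates forever
print(play(ring, 4, noise=0.1))           # noise breaks the symmetry and the group coordinates
```

With zero noise, the identical players mirror each other's moves indefinitely; a small error rate breaks the symmetry so the group can settle into a proper colouring - loosely the kind of benefit the Yale team reported from adding imperfect bots.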
Believing is seeing
We treat "inferred" visual objects generated by the brain as more reliable than external images from the real world, scientists say.
To make sense of the world, humans and animals need to combine information from multiple sources, and this is usually done according to how reliable each piece of information is.
For example, to know when to cross the street, we usually rely more on what we see than what we hear - though this can change on a foggy day.
A similar problem arises with the eye's "blind spot", the patch of retina where the optic nerve exits and no light can be detected. There, the brain fills in the missing information from its surroundings, so that we notice no gap in what we see.
But while this fill-in is normally accurate enough, it is in principle unreliable, because no actual information from the real world ever reaches the brain from that spot.
Hence, scientists at Germany's University of Osnabruck wanted to find out if we typically handle this filled-in information differently to real, direct sensory information, or whether we treat it as equal.
After a series of experiments using shutter glasses, the researchers found that when choosing between an object generated from blind-spot fill-in and one seen directly in the real world, we show a surprising bias toward the former.
They concluded that understanding how we combine information from different sources gets us closer to discovering the exact mechanisms used by the brain to make decisions based on our perceptions.