Antarctica’s Denman Glacier is sinking into the world’s deepest canyon


The melting glacier could raise sea level by almost 5 feet (1.5 meters).

Denman trough (dark blue strip) sinks some 11,000 feet (3,500 meters) below sea level, and could soon become the burial plot of a massive, dying glacier.
(Image: © NASA’s Scientific Visualization Studio)

The glaciers of Antarctica are melting at unprecedented rates, and a giant canyon in the continent’s rocky underbelly could make matters much worse.

In a study published March 23 in the journal Geophysical Research Letters, researchers used more than 20 years of satellite data to monitor the ice in Denman Glacier — a 12-mile-wide (20 kilometers) stream of ice in East Antarctica — along with the bedrock beneath it. The researchers found not only that Denman’s western flank retreated nearly 3 miles (5 km) between 1996 and 2018, but also that a deep canyon below the glacier may be causing it to melt faster than it can possibly recover.

Denman Glacier’s western flank flows over the deepest known land canyon on Earth, plunging at least 11,000 feet (3,500 meters) below sea level. Right now, that canyon (known as the Denman trough) is mostly cut off from the sea thanks to all the glacial ice piled inside and atop the ravine. However, as the glacier’s edge continues to retreat farther and farther down the slope, warm ocean water will pour into the canyon, battering bigger and bigger sections of the glacier and gradually turning the Denman trough into a giant bowl of meltwater with nowhere else to go.

This scenario, the researchers wrote, could kick off a runaway feedback loop of melt that ultimately returns all of Denman Glacier’s ice to the sea — risking nearly 5 feet (1.5 m) of global sea level rise.

“Because of the shape of the ground beneath Denman’s western side, there is potential for rapid and irreversible retreat, and that means substantial increases in global sea levels in the future,” lead study author Virginia Brancato, a postdoctoral fellow with NASA’s Jet Propulsion Laboratory, said in a statement.

This map shows Denman Glacier’s grounding line retreating between 1996 (the black line) and 2018 (yellow line). The large dip in the bedrock represents Denman trough, a canyon reaching a maximum depth of 11,000 feet (3,500 meters) below sea level. The glacier’s grounding line has already begun creeping down the canyon’s wall. (Image credit: AGU/Brancato et al.)

Glaciers are giant slabs of ice sitting atop continental bedrock. Most glaciers in Antarctica, including Denman, end in large ice shelves or “tongues” that jut away from the land and into the open ocean, where their edges slowly snap into pieces and form new icebergs. The point where a glacier first leaves the bedrock and begins to float in the water is called the grounding line. The location of this line is key to a glacier’s stability; when warm ocean water melts away exposed glacial ice, the grounding line retreats farther and farther back, making nearby ice sheets less stable and more prone to melting and cracking.

In the new study, researchers used satellite data from the German Aerospace Center and the Italian Space Agency to measure how far Denman Glacier’s grounding line retreated in the 22 years between 1996 and 2018, and how much mass the glacier lost in melted ice. They saw extensive melting — Denman lost more than 268 billion tons (243 billion metric tons) of ice in those two decades — and an alarming rate of retreat on one side of the glacier only.
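For scale, glaciologists often convert ice mass to sea-level equivalent using roughly 362 gigatonnes of ice per millimeter of global mean sea level. The sketch below applies that conversion to the loss figure above; the numbers are illustrative back-of-the-envelope values, not results from the study.

```python
# Convert Denman's measured ice loss into millimeters of global sea level,
# using the commonly cited ~362 gigatonnes of ice per millimeter of global
# mean sea level rise. Illustrative only; not a figure from the paper.
GT_PER_MM_SEA_LEVEL = 362.0

ice_lost_gt = 243.0  # metric gigatonnes lost between 1996 and 2018
rise_so_far_mm = ice_lost_gt / GT_PER_MM_SEA_LEVEL
print(round(rise_so_far_mm, 2))  # ~0.67 mm of sea level rise over two decades

# For comparison, the ~1.5 m of potential rise locked up in the whole glacier:
full_collapse_gt = 1.5 * 1000 * GT_PER_MM_SEA_LEVEL
print(round(full_collapse_gt))  # ~543000 Gt of ice in a full collapse
```

On this rough conversion, the two decades of measured loss amount to well under a millimeter of sea level, which underlines how much larger a full collapse of the glacier would be.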

While there was little retreat on Denman’s eastern flank (where a rocky ridge stabilizes the grounding line), the glacier’s western flank shot back by nearly 3 miles (5 km), plunging partway down the slope of the massive Denman trough.

If current global warming trends continue, that trough could spell doom for Denman glacier, the researchers wrote. As the glacier’s grounding line continues to sink farther down the canyon (which already sits below sea level), warm ocean water will batter larger and larger chunks of the glacier’s edge, causing it to melt even faster and make the precarious ice shelf above even more vulnerable to collapse.

If that happens, it’s likely that Denman Glacier will undergo a “rapid and irreversible retreat” with “major consequences” for sea level rise, the researchers wrote in the study. This possibility should be a wake-up call to scientists who previously considered melt in East Antarctica a relatively benign threat compared to the rapidly melting Pine Island and Thwaites glaciers in West Antarctica, the authors concluded.

“The ice in West Antarctica has been melting faster in recent years, but the sheer size of Denman Glacier means that its potential impact on long-term sea level rise is just as significant,” study co-author Eric Rignot, a professor of Earth system science at the University of California, Irvine, said in the statement.

Originally published on Live Science.
By Brandon Specktor – Senior Writer




Old gas blob from Uranus found in vintage Voyager 2 data


An animation shows the strange magnetic field of Uranus. The yellow arrow points toward the sun and the dark blue arrow represents the planet’s axis.
(Image: © NASA/Scientific Visualization Studio/Tom Bridgman)

Buried inside data that NASA’s iconic Voyager 2 spacecraft gathered at Uranus more than 30 years ago is the signature of a massive bubble that may have stolen a blob of the planet’s gassy atmosphere.

That’s according to scientists who analyzed archived Voyager 2 observations of the magnetic field around Uranus. These measurements had been studied before, but only using a relatively coarse view. In the new research, scientists instead looked at those measurements every two seconds. That detail showed what had previously been missed: an abrupt zigzag in the magnetic field readings that lasted just one minute of the spacecraft’s 45-hour journey past Uranus.

The tiny wobble in the Voyager 2 data represents something much larger since the spacecraft was flying so fast. Specifically, the scientists behind the new research believe the zigzag marks a plasmoid, a type of structure that wasn’t understood particularly well at the time of the flyby in January 1986.

But by now, plasmoids have earned scientists’ respect. A plasmoid is a massive bubble of plasma, which is a soup of charged particles. A planet’s magnetic field trails behind it like a teardrop-shaped sleeve, and plasmoids can pinch off from the tip of that tail.

Scientists have studied these structures at Earth and nearby planets, but never at Uranus or its neighbor Neptune, since Voyager 2 is the only spacecraft to date ever to visit those planets.

Scientists want to know about plasmoids because these structures can pull charged particles out of a planet’s atmosphere and fling them into space. And if you change a planet’s atmosphere, you change the planet itself. And Uranus’ situation is particularly complicated because the planet rotates on its side and its magnetic field is skewed from both that axis and the plane all the planets lie in.

A Voyager 2 photo of Uranus taken on Jan. 14, 1986. (Image credit: NASA/JPL-Caltech)

Because Voyager 2 flew straight through this plasmoid, scientists could use the archived data to measure the structure, which they believe was about 250,000 miles (400,000 kilometers) across and could have stretched 127,000 miles (204,000 km) long, according to a NASA statement.

Ideally, scientists would piece together more observations of Uranus’ magnetic field, enough to better understand how this phenomenon has shaped the planet over time. But that will require another spacecraft to visit the strange sideways world.

The research is described in a paper published in August in the journal Geophysical Research Letters. NASA announced the finding on Wednesday (March 25).

By Meghan Bartels – Senior Writer



Scientists use the Milky Way to hunt for dark matter


Scientists think that dark matter produces a bright and spherical halo of X-ray emission around the center of the Milky Way.
(Image: © Artistic rendering by Christopher Dessert, Nicholas L. Rodd, Benjamin R. Safdi, Zosia Rostomian (Berkeley Lab), based on data from the Fermi Large Area Telescope.)

Scientists studying a mysterious signal from far-off galaxies didn’t find dark matter as they’d hoped. But the inventive new technique they developed, which uses our own galaxy to hunt for dark matter, could elevate the search for the elusive material.

For decades, scientists have been searching for dark matter, an invisible material that doesn’t interact with light but which permeates our entire universe. And a signal coming from a nearby galaxy spotted in a 2014 study gave scientists hope that this was the long-sought evidence for dark matter.

Some current models predict that dark matter particles slowly decay into ordinary matter, a process that would produce faint photon emissions that X-ray telescopes could detect. And in 2014, scientists hunting for dark matter spotted just such an X-ray emission coming from nearby galaxies, where dark matter is known to collect.

Researchers think that the emission, known as the “3.5 keV line” (keV stands for kilo-electronvolts), is likely produced by the decay of sterile neutrinos, which have long been thought of as a candidate for dark matter, study co-author Chris Dessert, of the University of Michigan, said.

Sterile neutrinos are hypothetical particles that are a close relative of the neutrino, a neutral subatomic particle with a mass very close to zero. Neutrinos are released in nuclear reactions like those in nuclear plants on Earth and in the sun. Because the Standard Model of particle physics can’t explain the tiny amount of mass neutrinos carry, some physicists think that sterile neutrinos could account for this mystery mass and also make up dark matter.
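The link between the line’s energy and the candidate particle’s mass is simple two-body kinematics: a sterile neutrino at rest decaying into a photon plus a nearly massless active neutrino gives each daughter half the rest energy. The sketch below is textbook reasoning, not a calculation from the study.

```python
# A sterile neutrino at rest decaying into a photon and a (nearly massless)
# active neutrino splits its rest energy evenly, so the photon carries
# half the sterile neutrino's mass-energy.
line_energy_kev = 3.5
implied_mass_kev = 2 * line_energy_kev
print(implied_mass_kev)  # 7.0 -> a ~7 keV sterile neutrino fits a 3.5 keV line
```

This is why the 3.5 keV line is so often discussed alongside a roughly 7 keV sterile neutrino; other masses would place the decay line at other energies.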

But in this new study of objects in the Milky Way, which analyzed a mountain of raw data collected over the past 20 years by the XMM-Newton space X-ray telescope, researchers found evidence that the signal seen in the 2014 study wasn’t coming from dark matter. In fact, in searching for dark matter with their new technique, they didn’t see the signal at all. However, this doesn’t rule out sterile neutrinos as a strong candidate for dark matter, the researchers said.

To come to this conclusion, the researchers looked for the 3.5 keV line across the sky. Since we live inside the Milky Way’s dark matter halo, any observation made looking out through the halo should have dark matter along the line of sight.

So when the team found no trace of a 3.5 keV line in the data, they determined that “the 3.5 keV line isn’t due to dark matter,” Dessert said.

While the 3.5 keV signature would most likely have been produced by sterile neutrinos, its absence might seem to rule out the hypothetical particle as a candidate for dark matter. But it’s still possible that sterile neutrinos of a different mass, which wouldn’t put out the same signal, could explain the elusive material.

“Even if you find this evidence compelling, that that 3.5 keV line is not necessarily there or is not necessarily dark matter, that does not rule out sterile neutrinos as a dark matter candidate,” Kerstin Perez, an assistant professor of physics at the Massachusetts Institute of Technology who was not involved in this study, said. There are “still a lot of different masses that sterile neutrinos could have and it could still constitute all or some of the dark matter in the universe.”

New dark matter hunting techniques

While Dessert admitted it was fairly disappointing that the researchers didn’t observe a 3.5 keV line, the technique they developed could further the search for the elusive material.

“While this work does, unfortunately, throw cold water on what looked like what might have been the first evidence for the microscopic nature of dark matter, it does open up a whole new approach to looking for dark matter, which could lead to a discovery in the near future,” co-author Ben Safdi, an assistant professor of physics at the University of Michigan, said in a statement.

“In the past, people have said, ‘Well, let’s look at a part of the sky that has a huge amount of dark matter in it and let’s see if we see [dark matter] there,'” Perez said.

But, with this team’s technique, which is similar to a technique that Perez uses in her own work, they use our place in the universe to their advantage because, “if this signal really is dark matter it should be all over the sky with some varying intensity because we live within the halo of dark matter.”

“I think that that is a really exciting way to think about these searches because it allows you to use essentially the full sky,” Perez added. “Previously we were kind of taking snapshots of the sky and looking at them kind of separately.”

Looking through the Milky Way’s dark matter halo for this signature did more than help the team determine that the signal didn’t come from dark matter; the approach had additional benefits. “Looking through the dark matter halo in the Milky Way, you’re not actually losing any sensitivity,” Dessert said.

“The previous techniques are basically you point your X-ray telescope at a cluster of galaxies or just a galaxy that has a dark matter halo, and you look for the dark matter decay signal which is going to show up as a line,” Dessert continued. He added that, with their technique in which they look through our galaxy’s dark matter halo, they are able to get better results in their search.

“The dark matter halo around our galaxy is much closer to us, and that means that you’re more likely to get the photons resulting from dark matter decay in our galaxy than you are if you’re looking at some cluster far away.”

Dessert added, “This technique we’ve developed can be used in other searches so, for example, this 3.5 keV line.”

This work was published March 26 in the journal Science.




We might be living in a gigantic, intergalactic bubble


It would explain a lot.

A Hubble Space Telescope image shows RS Puppis, one of the cepheids used to measure the expansion of the universe.
(Image: © NASA/ESA/Hubble Heritage (STScI/AURA)-Hubble/Europe Collab)

We might be living in a bubble.

That’s the conclusion of a new paper published in the journal Physics Letters B, due for print publication April 10. The paper is an attempt to resolve one of the deepest mysteries of modern physics: Why don’t our measurements of the speed of the universe’s expansion make sense? As Live Science has previously reported, we have multiple ways of measuring the Hubble constant, or H0, a number that governs how fast the universe is expanding. In recent years, as those methods have gotten more precise, they’ve started to produce H0s that dramatically disagree with one another. Lucas Lombriser, a physicist at the University of Geneva in Switzerland and co-author of the new paper, thinks the simplest explanation is that our galaxy sits in a low-density region of the universe — that most of the space we see clearly through our telescopes is part of a giant bubble. And that anomaly, he wrote, is likely messing with our measurements of H0.

It’s hard to imagine what a bubble on the scale of the universe would look like. Most of space is just that anyway: space, with a handful of galaxies and their stars scattered through the nothingness. But just as our local universe has areas where matter packs closely together or spreads extra-far apart, stars and galaxies cluster together at different densities in different parts of the cosmos.

“When we look at the cosmic microwave background [a remnant of the very early universe], we see an almost perfectly homogenous temperature of 2.7 K [kelvins, a temperature scale where 0 degrees is absolute zero] of the universe all around us. At a closer look, however, there are tiny fluctuations in this temperature,” Lombriser told Live Science.

Models of how the universe evolved over time suggest that those tiny inconsistencies would have eventually produced regions of space that are more and less dense, he said. And the sort of low-density regions those models predict would be more than sufficient to distort our H0 measurements in the way that’s happening right now.

Here’s the problem: We have two main ways to measure H0. One is based on extremely precise measurements of the cosmic microwave background (CMB), which appears mostly uniform across our universe since it was formed during an event that spanned the entire universe. The other is based on supernovas and pulsating stars in nearby galaxies, known as cepheids.

Cepheids and supernovas have properties that make it easy to precisely determine how far away they are from Earth and how fast they’re moving away from us. Astronomers have used them to make a “distance ladder” to various landmarks in our observable universe, and they have used that ladder to derive H0.

But as both cepheid and CMB measurements have gotten more precise in the last decade, it’s become clear that they don’t agree.
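To see why the disagreement matters, it helps to turn each value of H0 into the naive expansion age 1/H0. The two inputs below are the widely cited figures circa 2020 (roughly 67.4 km/s/Mpc from CMB fits and roughly 74 km/s/Mpc from the cepheid ladder); they are background context, not numbers from Lombriser’s paper.

```python
# Convert a Hubble constant in km/s/Mpc into the naive expansion age 1/H0.
# The two inputs are widely cited (circa 2020) CMB-based and
# cepheid-ladder-based values, quoted as context, not from the paper.
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    """Naive age estimate 1/H0, in billions of years."""
    return KM_PER_MPC / h0_km_s_mpc / SECONDS_PER_YEAR / 1e9

print(round(hubble_time_gyr(67.4), 1))  # CMB fits -> ~14.5 Gyr
print(round(hubble_time_gyr(74.0), 1))  # distance ladder -> ~13.2 Gyr
```

A roughly 9% spread in H0 translates into more than a billion years of difference in this naive age, which is one way to see why the tension is taken so seriously.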

“If we’re getting different answers, that means that there’s something that we don’t know,” Katie Mack, an astrophysicist at North Carolina State University, previously told Live Science. “So this is really about not just understanding the current expansion rate of the universe — which is something we’re interested in — but understanding how the universe has evolved, how the expansion has evolved, and what space-time has been doing all this time.”

Some physicists believe that there must be some “new physics” driving the disparity — something we don’t understand about the universe that’s causing unexpected behaviors.

“New physics would of course be a very exciting solution to the Hubble tension. But new physics typically implies a more complex model that requires clear evidence and should be backed by independent measurements,” Lombriser said.

Others think there’s a problem with our calculations of the cepheid ladder or our observations of the CMB. Lombriser said his explanation, which others have proposed before but his paper fleshes out in detail, falls more into this category.

“If the less complex standard physics can explain the tension, this provides both a simpler explanation and is a success for the known physics, but it is unfortunately also more boring,” he added.

Originally published on Live Science.



‘Infinite subrings’ may be next frontier for photographing black holes


Peering so deeply would require adding a space component to the Event Horizon Telescope.

The Event Horizon Telescope captured this image of the supermassive black hole and its shadow that’s in the center of the galaxy M87.
(Image: © Event Horizon Telescope Collaboration)

Black-hole photography could be even more powerful and revelatory than scientists had thought.

Last April, the Event Horizon Telescope (EHT) project unveiled the first-ever imagery of a black hole, laying bare the supermassive monster at the heart of the galaxy M87. The landmark photos have opened new doors, allowing scientists to probe exotic space-time realms like never before.

And that probing may go much deeper still in the not-too-distant future. The most prominent feature in the EHT imagery, a bright but unresolved ring around M87’s supermassive black hole, likely contains a thin “photon ring” that is composed of an infinite sequence of subrings, a new study reports.

The intricate structure of this photon ring holds a treasure trove of information about the black hole — information that scientists can access by extending the EHT’s reach a bit, study team members said.

“Black holes are giving us this gift, this signal unlike anything that’s been studied in astronomy,” said lead author Michael Johnson, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts.

“It’s not just some cheap picture of, ‘We understand black holes better,’” Johnson said. “It’s actually enabling a whole new way to measure them.”

Put a ring on it

The EHT is a network of eight radio telescopes around the world, which are linked to form a virtual instrument the size of Earth — a technique known as very-long-baseline interferometry (VLBI).

This megascope has been observing two supermassive black holes. One is the M87 beast, which lies 53.5 million light-years from Earth and is about 6.5 billion times more massive than Earth’s sun. The other is the Milky Way’s central black hole, known as Sagittarius A*, which is 26,000 light-years away and harbors “only” 4.3 million solar masses.
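Those two numbers, mass and distance, set the apparent size of the target and show why a planet-sized instrument is needed. The sketch below uses the standard general-relativistic prediction that the shadow diameter is about √27 Schwarzschild radii; it is a back-of-the-envelope check with textbook constants, not a calculation from the EHT papers.

```python
# Apparent size of the M87 black hole's shadow, from the mass and distance
# quoted above. GR predicts a shadow diameter of about sqrt(27) Schwarzschild
# radii. Constants are standard values; this is an order-of-magnitude sketch.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg
M_PER_LY = 9.461e15      # meters per light-year
UAS_PER_RAD = 180 / 3.141592653589793 * 3600 * 1e6  # microarcsec per radian

mass = 6.5e9 * M_SUN                 # M87's central black hole
distance = 53.5e6 * M_PER_LY         # distance to M87
r_s = 2 * G * mass / C**2            # Schwarzschild radius, ~1.9e13 m
shadow_m = 27**0.5 * r_s             # predicted shadow diameter
print(round(shadow_m / distance * UAS_PER_RAD, 1))  # ~40 microarcseconds
```

Roughly 40 microarcseconds is comparable to the ring size the EHT actually reported, and it is an angle small enough that only an Earth-sized virtual dish can resolve it.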

The EHT team looked first at M87’s black hole, which is a bit easier to resolve because it’s less variable over short timescales. The project hopes to get imagery of Sagittarius A* soon as well, EHT team members have said.

Such imagery doesn’t depict the interior of a black hole, of course; that’s impossible to pull off without being inside a black hole, because these objects gobble up light. Rather, the EHT provides a silhouette of the black hole, mapping out its event horizon, the point of no return beyond which nothing can escape.

The EHT imagery shows that the silhouette of the M87 black hole is surrounded by a bright ring of emission — photons shot out by the hot, fast-moving plasma swirling around the supermassive object. In the new study, Johnson and his colleagues suggest that this ring is a rich resource for astronomers to mine.

Einstein’s theory of general relativity predicts that embedded within the emission halo is a “photon ring,” which itself consists of a complex nest of infinite subrings, the researchers determined.

“Together, the set of subrings are akin to the frames of a movie, capturing the history of the visible universe as seen from the black hole,” Johnson and his colleagues wrote in the new paper, which was published online today (March 18) in the journal Science Advances.

Watching that “movie” could reveal key but elusive insights about black holes and the nature of gravity, the researchers said. For example, characterizing the subrings in detail could help scientists nail down a black hole’s mass and spin, the two properties that define these exotic objects.

“Once you know these two parameters about the system, we think you know everything there is to know about the black hole,” Johnson said.

EHT observations currently allow calculation of black hole masses within 10% or so of the actual value, he added, and they don’t reveal much about spin. But taking the project off Earth could change things significantly.

A telescope bigger than Earth

The EHT consortium, an international team of about 200 researchers, has long planned to eventually push the array into the final frontier, provided its funding allows. After all, bigger telescopes, including virtual ones linked via VLBI, are more powerful.

But this prospect has long seemed daunting, as calculations have indicated it would take at least half a dozen space-based components to appreciably improve the EHT’s resolving power, Johnson said.

The new study, however, suggests that reading the subrings won’t require such a significant outlay of resources. The researchers determined that even a single satellite — or just one properly designed instrument aboard a parent spacecraft — would likely do the trick, provided it extended the EHT’s footprint far enough out into space.

“Even, say, at geosynchronous orbit — that’s a big resolution improvement for the EHT,” Johnson said, referring to the swath of space about 22,200 miles (35,730 kilometers) above Earth’s surface. “And then, certainly, once you get out to the moon — that’s where I think we would really be looking at entirely new science.”

The subring signatures should be quite easy for a properly extended EHT to measure, he added.

“They seem almost magical,” Johnson said. “We went from this situation where it was sort of unimaginable to even increase the resolution of EHT images by a factor of two. And now we’re thinking, by adding a single space-based line that’s very long, we might be able to increase EHT resolution by a factor of 100.”
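The scaling behind those numbers is just the diffraction limit: angular resolution goes as the observing wavelength divided by the longest baseline. Here is a sketch at the EHT’s 1.3 mm observing wavelength, with illustrative round-number baselines rather than mission figures.

```python
# Diffraction-limited resolution theta ~ wavelength / baseline, at the
# EHT's 1.3 mm observing wavelength. Baselines are illustrative round
# numbers: Earth's diameter vs. the Earth-moon distance.
UAS_PER_RAD = 180 / 3.141592653589793 * 3600 * 1e6
WAVELENGTH_M = 1.3e-3

def resolution_uas(baseline_m):
    """Approximate angular resolution in microarcseconds."""
    return WAVELENGTH_M / baseline_m * UAS_PER_RAD

earth_only = resolution_uas(12.742e6)   # Earth-diameter baseline
to_the_moon = resolution_uas(384.4e6)   # Earth-moon baseline
print(round(earth_only, 1), round(to_the_moon, 2))  # ~21.0 vs ~0.7 microarcsec
print(round(earth_only / to_the_moon))  # a lunar baseline is ~30x sharper
```

On this simple scaling, a hundredfold improvement implies a baseline of roughly 100 Earth diameters, a few times the lunar distance, which is why a single very long space-based line is framed as the transformative step.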

This potential milestone isn’t just around the corner, but it may not be too far off, either; Johnson said that the EHT could get a space component within 10 years or so, if everything breaks the project’s way.

Mike Wall is the author of “Out There” (Grand Central Publishing, 2018; illustrated by Karl Tate), a book about the search for alien life. Follow him on Twitter @michaeldwall.

By Mike Wall – Senior Writer




Melting ice in Antarctica reveals new uncharted island


Researchers are calling it Sif Island, after a Norse goddess of the Earth.

The rocky coast of Sif Island peeks out under a mound of Antarctic ice.
(Image: © Gui Bortolotto)

Pointing toward South America like an icy finger, the Antarctic Peninsula is one of the fastest-warming regions on Earth. Nearby, West Antarctica’s two major glaciers — the Thwaites Glacier and the Pine Island Glacier — are retreating toward the mainland faster than new ice can form, chipping away at the continent’s coasts a little more each year.

This week, all that melting ice left behind a surprise that could change maps of the region permanently: an uncharted island, long buried in ice but finally visible above sea level for the first time.

Researchers with the international Thwaites Glacier Offshore Research project discovered the island earlier this week while sailing off the coast of the Pine Island Glacier ice shelf. The small island is only about 1,150 feet (350 meters) long and mostly covered in ice, but rises from the sea with a layer of brown rock distinct from the surrounding glaciers and icebergs.

After making a brief landfall, the researchers confirmed that the island is made of granite, and even hosts a few resident seals. According to expedition member James Marschalek, a doctoral student at Imperial College London, there is no other rocky outcropping like this visible for more than 40 miles (65 kilometers) in any direction.

The researchers tentatively named the uncharted outcropping Sif Island, after a Norse goddess associated with Earth.

Exciting as the discovery is, the island’s sudden appearance is almost certainly a direct effect of the widespread glacial melt that has become typical in Antarctica in the past decade, Sarah Slack, a member of the expedition and middle school science teacher in Brooklyn, New York, wrote in a blog post.

“At first, we thought maybe an iceberg had become lodged on the outcropping years ago and then melted enough to expose the underlying rock,” Slack wrote on Feb. 26. “But now we think that the ice on the island was once part of the Pine Island Glacier ice shelf, a massive field of floating ice that extends outward into the ocean from the edge of the glacier.”

Peter Neff @peter_neff

Looks like ice retreated from the new “Sif Island” near #ThwaitesGlacier, #Antarctica since the early 2010s, based on a quick look at @googleearth timelapse.@ThwaitesGlacier @GlacierThwaites @rdlarter 

Julia Smith Wellner @houston_wellner

After being the first visitors, we can now confirm that Sif Island is made of granite and that it is covered by remnant ice shelf, and a few seals. Photos by CD Hillenbrand (BAS) and Laura Taylor (UH). @glacierthwaites @glacieroffshore @GAViglione #nbp2002 @BAS_News @UHEAS


Using satellite images from Google Earth, expedition member Peter Neff made a time-lapse model showing how the ice shelf’s steady retreat since 2011 left Sif Island detached and alone in Pine Island Bay. From above, the dollop of ice looks like just another lonely iceberg. Now that its island status has been confirmed, further study of Sif could reveal how the region’s rocky underbelly will continue responding to climate change.

It’s likely that the island emerged due to a process called glacial rebound, said Lindsay Prothro, a glacial geologist at Texas A&M University-Corpus Christi who was not involved with the expedition. When glacial ice melts, it relieves pressure on the underlying continent; in response, the continent may “rebound,” or rise up higher than it previously was. It’s unclear whether rebound hastens or slows the rate at which ice shelves break apart — hopefully, further study of Sif Island could provide some clues.

The team’s expedition is due to end on March 25. After that, a full analysis of Sif Island rock samples can commence.

Originally published on Live Science.

By Brandon Specktor – Senior Writer



‘Starter’ Earth grew in a flash. Here’s how the planet did it.


If the solar system formed in 24 hours, then proto-Earth formed in just 1.5 minutes.

An illustration of the protoplanetary disk around our sun.
(Image: © Shutterstock)

Dust from meteorites that crash-landed on Earth has revealed that Earth’s precursor, known as proto-Earth, formed much faster than previously thought, a new study finds.

An analysis of this meteorite dust showed that proto-Earth formed within about 5 million years, which is extremely fast, astronomically speaking.

Put another way, if the entire 4.6 billion years of the solar system’s existence were compressed into a 24-hour period, proto-Earth formed in just 1 minute and 30 seconds, the researchers said.

The new finding breaks with the previously held idea that proto-Earth formed when larger and larger planetary bodies randomly slammed into one another, a process that would have taken several tens of millions of years, or about 5 to 15 minutes in the fictional 24-hour timescale.
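The 24-hour analogy is straightforward to reproduce by rescaling years to seconds. In the sketch below, the 30-million-year input for the collision scenario is a representative value, since the article only says “several tens of millions of years.”

```python
# Rescale solar-system history onto a 24-hour clock.
SOLAR_SYSTEM_AGE_YR = 4.6e9
DAY_S = 24 * 3600

def on_24h_clock(years):
    """Seconds an interval occupies if 4.6 billion years = 24 hours."""
    return years / SOLAR_SYSTEM_AGE_YR * DAY_S

print(round(on_24h_clock(5e6)))           # dust accretion: ~94 s, about 1.5 min
print(round(on_24h_clock(30e6) / 60, 1))  # collision scenario: ~9.4 minutes
```

The same rescaling reproduces the article’s 1-minute-30-second figure for the 5-million-year accretion window.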

In contrast, the new idea holds that planets formed through the accretion of cosmic dust, a process in which dust attracts more and more particles through gravity. “We start from dust, essentially,” study lead researcher Martin Schiller said in a statement. Schiller is an associate professor of geochemistry at the Centre for Star and Planet Formation (StarPlan) at the University of Copenhagen’s Globe Institute, in Denmark.

With accretion, millimeter-size particles would have come together, “raining down on the growing body and making the planet in one go,” Schiller said.

Schiller and his colleagues made the finding by studying iron isotopes, or different versions of the element iron, in meteorite dust. After looking at iron isotopes in different types of meteorites, they realized that only one type had an iron profile that was similar to Earth’s: the CI chondrites, which are stony meteorites. (The “C” stands for carbonaceous and the “I” stands for Ivuna, a place in Tanzania where some CI meteorites are found.)

The dust in these CI chondrites is the best approximation out there for the solar system’s overall composition, the researchers said. In the solar system’s early days, dust like this joined with gas, and both were funneled into an accretion disk orbiting the growing sun.

Over the course of 5 million years, the solar system’s planets formed. According to the new study, the proto-Earth’s iron core also formed during this time, snatching up accreted iron from the proto-planet’s mantle. Eventually, this proto-planet became the Earth we know today.

Message from Mars

Meteorites from Mars tell scientists that, in the beginning, the composition of iron isotopes in the material making up Earth was different than it was later on. This likely happened because heat from the young, growing sun altered them, the researchers said.

After a few hundred thousand years passed, the area where Earth was forming became cold enough for unheated CI dust from farther away to become part of proto-Earth’s accretion disk.

Given that iron from this far away dust is found in Earth’s mantle today, it makes sense that “most of the previous iron was already removed into the core,” Schiller said. “That is why the core formation must have happened early.”

The other idea — that Earth formed when planetary bodies randomly collided with one another — doesn’t hold, he said. “If the Earth’s formation was a random process where you just smashed bodies together, you would never be able to compare the iron composition of the Earth to only one type of meteorite,” Schiller said. “You would get a mixture of everything.”

The new finding may also apply to other planets in the universe, the researchers noted. In essence, this means that other planets may grow much faster than previously realized. In fact, there is already evidence that this is likely the case, according to data on thousands of exoplanets orbiting other stars, said study co-researcher Martin Bizzarro, a professor at StarPlan.

“Now we know that planet formation happens everywhere,” Bizzarro said in the statement. “When we understand these mechanisms in our own solar system, we might make similar inferences about other planetary systems in the galaxy.”

This process may even explain when and how often water is accreted during planet formation.

“If the theory of early planetary accretion really is correct, water is likely just a by-product of the formation of a planet like the Earth,” Bizzarro said. “Making the ingredients of life, as we know it, [is] more likely to be found elsewhere in the universe.”

The study was published online Feb. 12 in the journal Science Advances.

Originally published on Live Science.
By Laura Geggel – Associate Editor




See record-high temperatures strip Antarctica of huge amounts of ice


Watch a barren, brown desert emerge from the icy continent.

Antarctica’s Eagle Island on Feb. 4 and Feb. 13, 2020. (Image: © NASA Earth Observatory)

It’s easy to forget that Antarctica is technically a desert, until you see it without snow.

A new pair of satellite images shared by NASA’s Earth Observatory makes that stark reality clear as ice. NASA’s Landsat-8 satellite snapped the two images of Eagle Island (a small island off Antarctica’s northwest tip) on Feb. 4 and Feb. 13, 2020, bookending a period of record high temperatures in the southernmost continent. Between the two images, a significant amount of the island’s glacial ice disappeared, revealing huge swaths of the barren brown rock underneath.

According to glaciologist Mauri Pelto, a professor of environmental science at Nichols College in Massachusetts, the island lost about 20% of its seasonal snow accumulation in just a few days.

“You see these kinds of melt events in Alaska and Greenland, but not usually in Antarctica,” Pelto told NASA.

The melt coincided with not one, but two record-high temperatures recorded in Antarctica this month. On Feb. 6, a research station on the northern edge of the Antarctic Peninsula (the finger of land on the continent’s northwest tip, closest to South America) recorded a new record-high temperature of 64.9 degrees Fahrenheit (18.3 degrees Celsius) — surpassing the previous record of 63.5 F (17.5 C), set in March 2015.

Days later, on Feb. 9, researchers on the nearby Seymour Island saw their thermometers hit 69.35 F (20.75 C), setting another all-time high for the continent. (For comparison, that’s about the same temperature reported in Los Angeles, on the same day. Balmy!)
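Those records can be sanity-checked with the standard Celsius-to-Fahrenheit formula, F = C × 9/5 + 32. A quick sketch, using only the station values quoted above:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

# The two Antarctic records quoted above, in Celsius:
print(round(c_to_f(18.3), 1))   # Antarctic Peninsula record -> 64.9
print(round(c_to_f(20.75), 2))  # Seymour Island record -> 69.35
```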

As the new images show, those high temperatures caused significant melting on nearby glaciers. According to Pelto, Eagle Island lost nearly 1 square mile (1.5 square kilometers) of snowpack to the heat, creating several large ponds of bright blue meltwater at the island’s center.

While every season has its highs, this summer has been especially warm for Antarctica, Pelto said. The continent has already seen two heatwaves this season — one in November 2019 and one in January 2020 — reminding us that significant melt events like these are becoming more common as global warming continues unchecked.


By Brandon Specktor – Senior Writer




World’s richest person Jeff Bezos gives $10 billion to fight climate change


The Bezos Earth Fund will start issuing grants this summer.

(Image: © Shutterstock)

The fight against climate change is getting a big infusion of cash.

The world’s richest person, Amazon founder Jeff Bezos, announced on Monday (Feb. 17) that he’s starting an organization devoted to that pressing cause — and he’s putting in $10 billion of his own money to get it off the ground.

The new Bezos Earth Fund “will fund scientists, activists, NGOs [nongovernmental organizations] — any effort that offers a real possibility to help preserve and protect the natural world,” the billionaire wrote in an Instagram post Monday, which described climate change as “the biggest threat to our planet.”

“I want to work alongside others both to amplify known ways and to explore new ways of fighting the devastating impact of climate change on this planet,” Bezos added. “I’m committing $10 billion to start and will begin issuing grants this summer. Earth is the one thing we all have in common — let’s protect it, together.”

Bezos has cited environmental concerns as a big motivator for the ambitions of his spaceflight company, Blue Origin, which aims to get millions of people living and working in space. Achieving this goal will take considerable pressure off our beleaguered Earth, Bezos has stressed.

“Blue Origin believes that in order to preserve Earth, our home, for our grandchildren’s grandchildren, we must go to space to tap its unlimited resources and energy,” the company’s website reads. “Like the Industrial Revolution gave way to trade, economic abundance, new communities and high-speed transportation — our road to space opens the door to the infinite and yet unimaginable future generations might enjoy.”

But some people are calling for Bezos — who is worth about $130 billion — to do even more for our planet, as NPR noted.

“We applaud Jeff Bezos’ philanthropy, but one hand cannot give what the other is taking away,” Amazon Employees For Climate Justice said in a statement Monday, which was released via Twitter.

“The people of Earth need to know: When is Amazon going to stop helping oil & gas companies ravage Earth with still more oil and gas wells?” the statement added. “When is Amazon going to stop funding climate-denying think tanks like the Competitive Enterprise Institute and climate-delaying policy? When will Amazon take responsibility for the lungs of children near its warehouses by moving from diesel to all-electric trucking?”


By Mike Wall – Senior Writer




What should we do if a ‘planet-killer’ asteroid takes aim at Earth?


Researchers at MIT calculated which option is best depending on the asteroid and its path through space.

An illustration shows a rocket approaching an asteroid that’s drifted too close to Earth. A scout probe orbits nearby.
(Image: © Photo collage: Christine Daniloff, MIT)

If a giant object looks like it’s going to slam into Earth, humanity has a few options: Hammer it with a spacecraft hard enough to knock it off course, blast it with nuclear weapons, tug on it with a gravity tractor, or even slow it down using concentrated sunlight.

We’ll have to decide whether to visit it with a scout mission first, or launch a full-scale attack immediately.

Those are a lot of decisions to make under existential duress, which is why a team of MIT researchers has come up with a guide, published in February in the journal Acta Astronautica, to help future asteroid deflectors.

In movies, an incoming asteroid is usually a very last-minute shock: a big, deadly rock hurtling right toward Earth like a bullet out of the darkness, with only weeks or days between its discovery and its projected impact. That is a real threat, according to an April 2019 presentation by NASA’s Office of Planetary Defense that Live Science attended. But NASA believes that it’s spotted most of the largest, deadliest objects that have even a small chance of striking Earth — the so-called planet killers. (Of course, there are probably plenty of smaller rocks — still large enough to kill whole cities — that remain undiscovered.)

Because most of the large objects in Earth’s neighborhood are already being closely watched, we’ll likely have plenty of warning before one strikes Earth. Astronomers watch these space rocks as they get near Earth to see whether they’re likely to cross through one of their “keyholes.” Every Earth-threatening asteroid swings closer to and farther from Earth at different points in its orbit around the sun. Along that path, near Earth, it has keyholes: regions of space that it must pass through in order to end up on a collision course during its next approach to our planet.

“A keyhole is like a door — once it’s open, the asteroid will impact Earth soon after, with high probability,” Sung Wook Paek, lead author of the study and a Samsung engineer who was an MIT graduate student when the paper was written, said in a statement.

The easiest time to stop an object from hitting Earth is before it hits one of those keyholes, according to the paper. That will keep the object from getting on the route toward an impact in the first place — at which point saving Earth would require far more resources and energy, and involve much more risk.

Paek and his co-authors dismissed most of the more exotic asteroid-deflection schemes out of hand, leaving only nuclear detonation and impactors as serious options. Nuclear detonation is problematic as well, they wrote, because it’s uncertain exactly how an asteroid will behave after a nuclear explosion and because political concerns about nuclear weapons could cause problems for the mission.

In the end, they landed on three options for missions that could reasonably be prepared on short notice if a planet-killer asteroid were spotted heading toward a keyhole:

  • A “type 0” mission, in which a single, heavy spacecraft is fired at the incoming object, aimed using the best available information about the object’s makeup and trajectory to knock it off course.
  • A “type 1” mission where a scout is launched first and collects close-up data about the asteroid before the main impactor is launched, in order to better aim the shot for maximum effect.
  • A “type 2” mission where one small impactor is launched at the same time as the scout to knock the object a bit off course. Then all the information from the scout and the first impact are used to fine-tune a second small impact that finishes the job.

The problem with “type 0” missions, the researchers wrote, is that telescopes on Earth can only gather rough information about planet killers, which are still faraway, dim, relatively small objects. Without precise information on the object’s mass, velocity, or physical makeup, the impactor mission will have to rely on some imprecise estimates, and has a higher risk of failing to properly knock the incoming object out of its keyhole.

Type 1 missions are more likely to succeed, the researchers wrote, because they can determine the incoming rock’s mass and velocity far more precisely. But they also take more time and resources. Type 2 missions are even better, but take yet more time and resources to get underway.

The researchers developed a method for calculating which mission is best based on two factors: the time between the mission start and the date the planet killer will reach its keyhole, and the difficulty involved in properly diverting the specific planet killer.

Applying those calculations to two well-known planet-killer asteroids in Earth’s general neighborhood, Apophis and Bennu, the researchers came up with a complex set of instructions for future asteroid deflectors in the event one of those objects started heading for a keyhole.

Given enough time, they found, type 2 missions were almost always the right way to deflect Bennu. If time was short, though, a quick-and-dirty type 0 mission was the way to go. There were just a handful of instances where type 1 missions made sense.

Apophis was a different, more complicated story. If time was short, a type 1 mission was usually the best option: collect data quickly in order to properly aim the impact. Given more time, type 2 missions were sometimes better, depending on how difficult Apophis appeared to be to deflect from its course. There were no situations where a type 0 mission made sense for Apophis.

In both cases, if time got too short, the researchers found that no mission would succeed in diverting the rock.
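The paper’s actual decision procedure rests on detailed orbital simulations, but its qualitative logic can be caricatured in a few lines. The function, thresholds and inputs below are hypothetical placeholders, not values from the study:

```python
def choose_mission(years_to_keyhole, poorly_characterized):
    """Toy decision rule paraphrasing the study's qualitative findings.

    years_to_keyhole: warning time before the asteroid reaches its keyhole.
    poorly_characterized: True if the object's mass, velocity and makeup are
    too uncertain to aim a direct impactor from ground observations alone.
    """
    if years_to_keyhole < 1:       # too late: no mission can divert the rock
        return None
    if years_to_keyhole >= 5:      # ample time: scout plus two staged impacts
        return "type 2"
    # Short warning: send a scout first only if the target is poorly known.
    return "type 1" if poorly_characterized else "type 0"

print(choose_mission(8, False))    # long lead time -> "type 2"
print(choose_mission(2, True))     # Apophis-like, short lead -> "type 1"
print(choose_mission(2, False))    # well-characterized, short lead -> "type 0"
print(choose_mission(0.5, False))  # too late -> None
```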

The differences between the rocks came down to the level of uncertainty about their masses and velocities, as well as how their internal materials would react to an impact.

These same basic principles could be used to study other potential planet killers, and future studies could incorporate other options for deflecting the asteroids, including nuclear weapons, the researchers wrote. The more complex the list of options, the more difficult the calculation gets. Eventually, they wrote, it would be useful to train machine learning algorithms to make decisions based on the exact available data in any planet-killer scenario.


By Rafi Letzter – Staff Writer




70,000-year-old Neanderthal remains may be evidence that ‘closest human relative’ buried its dead


The Neanderthal’s skull is squashed, and its worn teeth suggest the individual was middle aged.

The steep entrance to Shanidar Cave, where the newly discovered Neanderthal remains were unearthed.
(Image: © Graeme Barker)

Some Neanderthals may have buried their dead. That’s according to the discovery of a partial Neanderthal skeleton found deep in a cave in Iraqi Kurdistan alongside a possible grave marker.

Neanderthals, our closest extinct human relative, lived in Eurasia from about 250,000 to 40,000 years ago. The roughly 70,000-year-old bones of this newfound individual included a squashed skull and upper body, making it the most complete articulated Neanderthal skeleton to be found in more than 25 years, the researchers said.

If Neanderthals did indeed bury this individual, then perhaps some Neanderthals had mortuary practices, an idea that is still debated among anthropologists, said study co-lead researcher Emma Pomeroy, a human-bone specialist and a lecturer in the evolution of health, diet and disease in the Department of Archaeology at the University of Cambridge in England.

The so-called Neanderthal “burial debate” continues because the practice of mortuary activities suggests the capacity for symbolic thought, an ability that seems to be almost exclusively human, Pomeroy told Live Science.

“It’s evidence for perhaps compassion and care towards other members of your group, and mourning and feelings of loss,” she said. “It tells us something about the way Neanderthals were thinking; whether they experienced the kind of emotion that we do and had the kind of cognitive ability to think abstractly about the world.”

The excavation

Researchers discovered the Neanderthal’s remains in Shanidar Cave, an archaeological hotspot in the foothills of Iraqi Kurdistan. The site became famous in the 1950s, when American archaeologist Ralph Solecki unearthed the remains of 10 Neanderthal men, women and children there.

“Solecki argued that while some of the individuals had been killed by rocks falling from the cave roof, others had been buried with formal burial rites,” the researchers wrote in the new study. The latter group included the famous “flower burial,” named for the clumps of pollen grains found in the sediment, which Solecki saw as evidence for the intentional placement of flowers with the body.

While the interpretation of the flower burial remains controversial, it sparked the decades-long controversy about whether Neanderthals had the cultural sophistication to bury their dead.

In the years following Solecki’s excavations, goat herders intermittently used the cave for shelter, Pomeroy said. Then, in 2014, archaeologists returned at the invitation of the Kurdish Regional Government in Iraq. An ISIS threat, however, delayed the project until 2015.

Unfortunately, Solecki never made it back, despite many attempts. He died in March 2019 at age 101, the researchers reported.

The new team didn’t expect to find any more Neanderthal remains, but that’s exactly what they discovered. “It was really unexpected,” said Pomeroy, who joined the project at that point. “It was kind of mindblowing.”

The Neanderthal’s head rested, pillowlike, on its curled left arm. The right arm was bent at the elbow. But everything below the Neanderthal’s waist was missing. It’s likely that the lower body was part of a large block removed by Solecki and colleagues in the early 1960s, Pomeroy said. That block is currently at Baghdad Museum, and the researchers hope to study it soon, she said.

The Neanderthal

The newfound Neanderthal, dubbed Shanidar Z, was likely an adult of middle age or older, based on its worn teeth, the researchers said.

The skeleton is currently on loan in Cambridge, where it is being conserved and digitally scanned with CT (computed tomography). Analyses of Shanidar Z’s bones and teeth will also be a gold mine for researchers; they plan to look for ancient DNA, study the Neanderthal’s dental plaque to see what it ate, and examine the chemical signatures in its teeth to see where it lived as a youth. Moreover, traces of pollen and charcoal in the sediment around the bones could provide clues about Neanderthal cooking and burial practices, Pomeroy said.

During the dig, the researchers found the tooth of another Neanderthal, as well as bones of other Neanderthal individuals beneath Shanidar Z. This raises the question of whether Neanderthals used this cave as a burial ground over the years, the researchers said, especially because Shanidar Z had a prominent rock at its head that may have served as a grave marker.

Other clues also hint that Shanidar Z was intentionally buried. For instance, if the body had been abandoned in the cave, scavengers would have likely chomped down and left bite marks on the bones, Pomeroy said.

Moreover, “the new excavation suggests that some of these bodies were laid in a channel in the cave floor created by water, which had then been intentionally dug to make it deeper,” study senior author Graeme Barker, director of the Shanidar Cave project and professor in the Department of Archaeology at the University of Cambridge, said in a statement. “There is strong early evidence that Shanidar Z was deliberately buried.”

So far, the evidence for burial looks convincing, said João Zilhão, a professor at the Catalan Institution for Research and Advanced Studies (ICREA) at the University of Barcelona, who was not involved in the study.

“Of course it was [buried],” Zilhão told Live Science in an email. “There can be no question about that.” He noted that while some scientists question whether Neanderthals buried their dead, this line of thought is “based on captious arguments that essentially boiled down to ‘all those instances of burial are from old excavations that were not up to standards and so do not represent valid evidence.'”

But new analyses of previously studied Neanderthal sites support the idea that these beings buried their dead, including at La Chapelle-aux-Saints in southwestern France, Zilhão said.

The new study was published online Tuesday (Feb. 18) in the journal Antiquity.


By Laura Geggel – Associate Editor




Ripples in space-time could explain the mystery of why the universe exists


A new study may help answer one of the universe’s biggest mysteries.

Inflation stretched the tiny universe to macroscopic size and turned cosmic energy into matter, but it likely created equal amounts of matter and antimatter. It’s not clear why matter won out; the authors probe one theory in which a phase transition after inflation produced a tiny excess of matter over antimatter and also created cosmic strings, which would generate slight ripples in space-time known as gravitational waves.
(Image: © Kavli IPMU, modified from an original by R. Hurt/Caltech-JPL, NASA and ESA)

A new study may help answer one of the universe’s biggest mysteries: Why is there more matter than antimatter? That answer, in turn, could explain why everything from atoms to black holes exists.

Billions of years ago, soon after the Big Bang, cosmic inflation stretched the tiny seed of our universe and transformed energy into matter. Physicists think inflation initially created the same amount of matter and antimatter, which annihilate each other on contact. But then something happened that tipped the scales in favor of matter, allowing everything we can see and touch to come into existence — and a new study suggests that the explanation is hidden in very slight ripples in space-time.

“If you just start off with an equal component of matter and antimatter, you would just end up with having nothing,” because antimatter and matter have equal but opposite charge, said lead study author Jeff Dror, a postdoctoral researcher at the University of California, Berkeley, and physics researcher at Lawrence Berkeley National Laboratory. “Everything would just annihilate.”

Obviously, everything did not annihilate, but researchers are unsure why. The answer might involve very strange elementary particles known as neutrinos, which don’t have electrical charge and can thus act as either matter or antimatter.

One idea is that about a million years after the Big Bang, the universe cooled and underwent a phase transition, an event similar to the way boiling turns liquid water into gas. This phase change prompted decaying neutrinos to create more matter than antimatter by some “small, small amount,” Dror said. But “there are no very simple ways — or almost any ways — to probe [this theory] and understand if it actually occurred in the early universe.”

But Dror and his team, through theoretical models and calculations, figured out a way we might be able to see this phase transition. They proposed that the change would have created extremely long and extremely thin threads of energy called “cosmic strings” that still pervade the universe.

Dror and his team realized that these cosmic strings would most likely create very slight ripples in space-time called gravitational waves. Detect these gravitational waves, and we can discover whether this theory is true.

The strongest gravitational waves in our universe occur when a supernova, or star explosion, happens; when two large stars orbit each other; or when two black holes merge, according to NASA. But the proposed gravitational waves caused by cosmic strings would be much tinier than the ones our instruments have detected before.

However, when the team modeled this hypothetical phase transition under the range of temperature conditions that could have occurred, they made an encouraging discovery: In all cases, cosmic strings would create gravitational waves that would be detectable by future observatories, such as NASA’s Laser Interferometer Space Antenna (LISA), the European Space Agency’s proposed Big Bang Observer and the Japan Aerospace Exploration Agency’s Deci-hertz Interferometer Gravitational wave Observatory (DECIGO).

“If these strings are produced at sufficiently high energy scales, they will indeed produce gravitational waves that can be detected by planned observatories,” Tanmay Vachaspati, a theoretical physicist at Arizona State University who wasn’t part of the study, told Live Science.

The findings were published Jan. 28 in the journal Physical Review Letters.


By Yasemin Saplakoglu – Staff Writer




Devastating solar storms could be far more common than we thought


These powerful storms can knock out satellites and power grids, and we may be due for one every 25 years.

On June 20, 2013, at 11:15 p.m. EDT, the sun shot out a solar flare (left side), which was followed by an eruption of solar material shooting through the sun’s atmosphere.
(Image: © NASA Goddard)

The sun constantly bombards Earth with wispy belches of plasma called solar wind. Normally, the planet’s magnetic shield soaks up the brunt of these electric particles, producing stunning auroras as they surge toward Earth’s magnetic poles. But every so often, there comes a solar sneeze powerful enough to body-slam our atmosphere.

These severe space weather events — known as solar storms — compress Earth’s magnetic shield, releasing enough power to blind satellites, disrupt radio signals and plunge entire cities into electrical blackouts. According to a study published Jan. 22 in the journal Geophysical Research Letters, they may be much more common than previously thought.

In the new study, researchers analyzed a catalog of Earth’s magnetic field changes going back to 1868; years that showed the strongest spikes in geomagnetic activity coincided with the most severe solar storms. They found that severe storms (those capable of disrupting some satellites and communications systems) occurred in 42 of the last 150 years, while the most extreme storms — “great” superstorms, which cause significant damage and disruption — occurred in six of those years, or once every 25 years.

“Our research shows that a super-storm can happen more often than we thought,” study co-author Richard Horne, a space weather researcher at the British Antarctic Survey, said in a statement. “Don’t be misled by the stats. It can happen any time. We simply don’t know when.”

Attack of the sun

For the new study, the researchers consulted the world’s oldest continuous geomagnetic index, known as the aa index.

Since 1868, the index has recorded changes in Earth’s magnetic field as observed by two research stations on opposite sides of the planet, one in Australia and the other in the U.K. Every 3 hours, ground-based sensors at each station record local changes in magnetic field activity; after combining the daily averages from each station, scientists get a general picture of magnetic field activity across the entire planet.

Because the study authors were concerned only with the most extreme solar events of the last 150 years, they focused on the top 5% of geomagnetic spikes recorded each year. With this data, the authors ranked the top 10 years with the most severe geomagnetic activity from 1868 to the present day. Those years, from most to least active, were 1921, 1938, 2003, 1946, 1989, 1882, 1941, 1909, 1960 and 1958.
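That “top 5%” cut is a simple percentile selection. A minimal sketch, using synthetic readings in place of the real aa-index data:

```python
# Synthetic stand-in for a year of 3-hourly aa-index readings (the real data
# comes from the two-station index described above).
readings = list(range(100))   # 0..99, pretend magnetic-activity values

readings.sort()
cutoff_rank = int(len(readings) * 0.95)   # rank of the 95th percentile
extremes = readings[cutoff_rank:]         # keep only the top 5% of values

print(len(extremes))   # 5 readings survive the cut
print(min(extremes))   # 95 -> the 95th-percentile threshold
```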

Unsurprisingly, most of those years were associated with powerful geomagnetic storms.

“The earliest ones would have been reported in terms of auroras (‘northern lights’) at low latitudes, and disruptions to telegraph communications,” lead study author Sandra Chapman, an astrophysics professor at the University of Warwick in England, told Live Science in an email. “As aviation and radio came into widespread use, reports centered on disruptions to those.”

A geomagnetic storm in May of 1921, for example, caused widespread radio and telegraph outages across the world, resulting in at least one telegraph operator’s instrument bursting into flames and setting his office on fire, according to a report published in 2001 in the Journal of Atmospheric and Solar-Terrestrial Physics. The northern and southern auroras (which intensify during solar storms) were also visible at far lower latitudes than usual, with one observatory claiming to detect the southern lights from the island of Samoa, just 13 degrees south of the geomagnetic equator.

More recent solar storms, such as a massive flare that swept over Earth on Halloween 2003, disrupted communications satellites and caused other spacecraft to tumble out of control. In March 1989, a gargantuan solar storm plunged the entire province of Quebec, Canada, into darkness, leaving millions of people without power for about nine hours.

Earth hasn’t been hit with a solar super-storm in nearly two decades (though a large, potentially damaging solar ejection passed by us in 2012). Since then, our world has become more networked and satellite-dependent; the precise impacts the next superstorm will have on our society aren’t well understood, Chapman said. Studies like this can help scientists predict the likelihood that a powerful space storm might hit Earth in a given year, which could lead to better preparedness, she added.

Powerful solar ejections occur more frequently when there are a lot of sunspots on the sun’s surface. Sunspot activity tends to peak approximately every 11 years, during a period called the solar maximum. The last solar maximum occurred in 2014.

By Brandon Specktor – Senior Writer

What is quantum cognition? Physics theory could predict human behavior.


Some scientists think quantum mechanics can help explain human decision-making.

(Image: © Shutterstock)

The same fundamental framework that allows Schrödinger’s cat to be both alive and dead, and lets two particles “speak to each other” even across a galaxy’s distance, could help explain perhaps the most mysterious phenomenon of all: human behavior.

Quantum physics and human psychology may seem completely unrelated, but some scientists think the two fields overlap in interesting ways. Both disciplines attempt to predict how unruly systems might behave in the future. The difference is that one field aims to understand the fundamental nature of physical particles, while the other attempts to explain human nature — along with its inherent fallacies.

“Cognitive scientists found that there are many ‘irrational’ human behaviors,” Xiaochu Zhang, a biophysicist and neuroscientist at the University of Science and Technology of China in Hefei, told Live Science in an email. Classical theories of decision-making attempt to predict what choice a person will make given certain parameters, but fallible humans don’t always behave as expected. Recent research suggests that these lapses in logic “can be well explained by quantum probability theory,” Zhang said.

Zhang stands among the proponents of so-called quantum cognition. In a new study published Jan. 20 in the journal Nature Human Behavior, he and his colleagues investigated how concepts borrowed from quantum mechanics can help psychologists better predict human decision-making. While recording what decisions people made on a well-known psychology task, the team also monitored the participants’ brain activity. The scans highlighted specific brain regions that may be involved in quantum-like thought processes.

The study is “the first to support the idea of quantum cognition at the neural level,” Zhang said.

Cool — now what does that really mean?


Quantum mechanics describes the behavior of the tiny particles that make up all matter in the universe, namely atoms and their subatomic components. One central tenet of the theory suggests a great deal of uncertainty in this world of the very small, something not seen at larger scales. For instance, in the big world, one can know where a train is on its route and how fast it’s traveling, and given this data, one could predict when that train should arrive at the next station.

Now, swap out the train for an electron, and your predictive power disappears — you can’t know the exact location and momentum of a given electron, but you could calculate the probability that the particle may appear in a certain spot, traveling at a particular rate. In this way, you could gain a hazy idea of what the electron might be up to.

Just as uncertainty pervades the subatomic world, it also seeps into our decision-making process, whether we’re debating which new series to binge-watch or casting our vote in a presidential election. Here’s where quantum mechanics comes in. Unlike classical theories of decision-making, the quantum world makes room for a certain degree of … uncertainty.

Classical psychology theories rest on the idea that people make decisions in order to maximize “rewards” and minimize “punishments” — in other words, to ensure their actions result in more positive outcomes than negative consequences. This logic, known as “reinforcement learning,” falls in line with Pavlovian conditioning, wherein people learn to predict the consequences of their actions based on past experiences, according to a 2009 report in the Journal of Mathematical Psychology.
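
The reward-and-punishment logic above is often written as a simple running value update. Here is a minimal, purely illustrative sketch — the learning rate and the reward sequence are invented, not taken from the cited report:

```python
def update_value(old_value, reward, learning_rate=0.1):
    """Classical reinforcement learning in one line: nudge an action's
    estimated value toward each observed reward or punishment."""
    return old_value + learning_rate * (reward - old_value)

value = 0.0
for reward in [1, 1, -1, 1]:   # +1 = reward, -1 = punishment
    value = update_value(value, reward)
# After mixed outcomes, the estimate settles somewhere between the extremes,
# reflecting the average consequences of past choices.
```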

If truly constrained by this framework, humans would consistently weigh the objective values of two options before choosing between them. But in reality, people don’t always work that way; their subjective feelings about a situation undermine their ability to make objective decisions.

Heads and tails (at the same time)

Consider an example:

Imagine you’re placing bets on whether a tossed coin will land on heads or tails. Heads gets you $200, tails costs you $100, and you can choose to toss the coin twice. When placed in this scenario, most people choose to take the bet twice regardless of whether the initial throw results in a win or a loss, according to a study published in 1992 in the journal Cognitive Psychology. Presumably, winners bet a second time because they stand to gain money no matter what, while losers bet in an attempt to recover their losses, and then some. However, if players aren’t allowed to know the result of the first coin flip, they rarely make the second gamble.
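
The arithmetic behind the “rational” choice is straightforward. The dollar amounts below come from the scenario above; everything else is just expected-value bookkeeping:

```python
# A fair coin: heads pays $200, tails costs $100.
p = 0.5
ev_toss = p * 200 + (1 - p) * (-100)   # expected value of one toss: +$50

# Classical "sure thing" reasoning: the second toss adds the same
# positive expected value whether the first toss won or lost...
ev_after_win = 200 + ev_toss     # +$250 on average
ev_after_loss = -100 + ev_toss   # -$50 on average, better than staying at -$100
# ...so a reward-maximizing player should bet again in both cases,
# and therefore also when the first result is unknown.
```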

When known, the first flip does not sway the choice that follows, but when unknown, it makes all the difference. This paradox does not fit within the framework of classical reinforcement learning, which predicts that the objective choice should always be the same. In contrast, quantum mechanics takes uncertainty into account and actually predicts this odd outcome.

“One could say that the ‘quantum-based’ model of decision-making refers essentially to the use of quantum probability in the area of cognition,” Emmanuel Haven and Andrei Khrennikov, co-authors of the textbook “Quantum Social Science” (Cambridge University Press, 2013), told Live Science in an email.

Just as a particular electron might be here or there at a given moment, quantum mechanics assumes that the first coin toss resulted in both a win and a loss, simultaneously. (In other words, in the famous thought experiment, Schrödinger’s cat is both alive and dead.) While caught in this ambiguous state, known as “superposition,” an individual’s final choice is unknown and unpredictable. Quantum mechanics also acknowledges that people’s beliefs about the outcome of a given decision — whether it will be good or bad — often reflect what their final choice ends up being. In this way, people’s beliefs interact, or become “entangled,” with their eventual action.

Subatomic particles can likewise become entangled and influence each other’s behavior even when separated by great distances. For instance, measuring the behavior of a particle located in Japan would alter the behavior of its entangled partner in the United States. In psychology, a similar analogy can be drawn between beliefs and behaviors. “It is precisely this interaction,” or state of entanglement, “which influences the measurement outcome,” Haven and Khrennikov said. The measurement outcome, in this case, refers to the final choice an individual makes. “This can be precisely formulated with the aid of quantum probability.”

Scientists can mathematically model this entangled state — in which beliefs and actions, like two particles separated by a large distance, influence each other — as demonstrated in a 2007 report published by the Association for the Advancement of Artificial Intelligence. And remarkably, the final formula accurately predicts the paradoxical outcome of the coin toss paradigm. “The lapse in logic can be better explained by using the quantum-based approach,” Haven and Khrennikov noted.
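
One way to see how quantum probability can produce the coin-toss paradox is through interference. In this toy sketch — the amplitudes and the phase are invented for illustration and are not the model from the 2007 report — the two branches of the first toss are combined as complex amplitudes before squaring, and the cross term can suppress the second bet when the outcome is unknown:

```python
import cmath

# Equal amplitudes for the first toss landing on heads (win) or tails (loss).
amp_win = amp_loss = cmath.sqrt(0.5)

# Hypothetical amplitudes for choosing to bet again in each branch;
# the relative phase (2.5 radians) is the distinctly quantum ingredient.
bet_given_win = cmath.rect(0.9, 0.0)
bet_given_loss = cmath.rect(0.9, 2.5)

# Classical law of total probability: square each branch, then add.
p_classical = abs(amp_win * bet_given_win) ** 2 + abs(amp_loss * bet_given_loss) ** 2

# Quantum rule when the first outcome goes unobserved: add the amplitudes
# first, then square, which introduces an interference term.
p_quantum = abs(amp_win * bet_given_win + amp_loss * bet_given_loss) ** 2

interference = p_quantum - p_classical   # negative here: fewer second bets
```

With the outcome known, only one branch contributes and the classical value applies; with it unknown, the negative interference term lowers the probability of betting again, mirroring the behavior seen in the 1992 study.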

Betting on quantum

In their new study, Zhang and his colleagues pitted two quantum-based models of decision-making against 12 classical psychology models to see which best predicted human behavior during a psychological task. The experiment, known as the Iowa Gambling Task, is designed to evaluate people’s ability to learn from mistakes and adjust their decision-making strategy over time.

In the task, participants draw from four decks of cards. Each card either earns the player money or costs them money, and the object of the game is to earn as much money as possible. The catch lies in how each deck of cards is stacked. Drawing from one deck may earn a player large sums of money in the short term, but it will cost them far more cash by the end of the game. Other decks deliver smaller sums of money in the short term, but fewer penalties overall. Through game play, winners learn to mostly draw from the “slow and steady” decks, while losers draw from the decks that earn them quick cash and steep penalties.
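
The deck structure is easy to simulate. In this toy version — the payoff numbers and penalty odds are invented to match the description above, not the actual task’s card schedule — the “fast cash” decks pay more per card but carry occasional penalties large enough to lose money overall:

```python
import random

def draw(deck, rng):
    """One card from a toy Iowa-Gambling-Task-style deck."""
    if deck in ("A", "B"):            # "fast cash" decks
        gain, penalty = 100, 1250     # big wins, occasionally bigger losses
    else:                             # "slow and steady" decks
        gain, penalty = 50, 250
    return gain - (penalty if rng.random() < 0.1 else 0)

rng = random.Random(0)
fast = sum(draw("A", rng) for _ in range(1000))
steady = sum(draw("C", rng) for _ in range(1000))
# Over many draws, the steady decks come out well ahead.
```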

Historically, those with drug addictions or brain damage perform worse on the Iowa Gambling Task than healthy participants, which suggests that their condition somehow impairs decision-making abilities, as highlighted in a study published in 2014 in the journal Applied Neuropsychology: Child. This pattern held true in Zhang’s experiment, which included about 60 healthy participants and 40 who were addicted to nicotine.

The two quantum models made similar predictions to the most accurate among the classical models, the authors noted. “Although the [quantum] models did not overwhelmingly outperform the [classical] … one should be aware that the [quantum reinforcement learning] framework is still in its infancy and undoubtedly deserves additional studies,” they added.

To bolster the value of their study, the team took brain scans of each participant as they completed the Iowa Gambling Task. In doing so, the authors attempted to peek at what was happening inside the brain as participants learned and adjusted their game-play strategy over time. Outputs generated by the quantum model predicted how this learning process would unfold, and thus, the authors theorized that hotspots of brain activity might somehow correlate with the models’ predictions.

The scans did reveal a number of active brain areas in the healthy participants during game play, including activation of several large folds within the frontal lobe known to be involved in decision-making. In the smoking group, however, no hotspots of brain activity seemed tied to predictions made by the quantum model. As the model reflects participants’ ability to learn from mistakes, the results may illustrate decision-making impairments in the smoking group, the authors noted.

However, “further research is warranted” to determine what these brain activity differences truly reflect in smokers and non-smokers, they added. “The coupling of the quantum-like models with neurophysiological processes in the brain … is a very complex problem,” Haven and Khrennikov said. “This study is of great importance as the first step towards its solution.”

Models of classical reinforcement learning have shown “great success” in studies of emotion, psychiatric disorders, social behavior, free will and many other cognitive functions, Zhang said. “We hope that quantum reinforcement learning will also shed light on [these fields], providing unique insights.”

In time, perhaps quantum mechanics will help explain pervasive flaws in human logic, as well as how that fallibility manifests at the level of individual neurons.

Originally published on Live Science.
By Nicoletta Lanese – Staff Writer



2 satellites will — hopefully — narrowly avoid colliding at 32,800 mph over Pittsburgh on Wednesday


A collision would create a debris belt that would endanger spacecraft worldwide.

The Infrared Astronomical Satellite (IRAS) orbits the Earth in this illustration.
(Image: © NASA)

Editor’s note: This story was updated at 12:20 p.m. EST on Jan. 29 to reflect new information from LeoLabs about the satellites and their collision risk.

Two defunct satellites will — hopefully — zip past each other at 32,800 mph (14.7 kilometers per second) in the sky over Pittsburgh on Wednesday evening (Jan. 29).
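
As a quick sanity check on the quoted figures (the conversion factor is the standard mile-to-meter definition; nothing here comes from LeoLabs):

```python
# 1 mph = 1609.344 m / 3600 s = 0.44704 m/s (exact by definition).
MPH_TO_MS = 0.44704

closing_speed_ms = 32_800 * MPH_TO_MS    # ~14,663 m/s
closing_speed_kms = closing_speed_ms / 1000
# Rounds to 14.7 km/s, matching the figure in the headline above.
```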

When this article was first written Tuesday morning (Jan. 28), the odds of a collision were 1 in 100. A crash has since become five times more likely, with 1 in 20 odds. If the two satellites were to collide, the debris could endanger spacecraft around the planet.

If the satellites miss as expected, it will be a near miss: LeoLabs, the satellite-tracking company that made the prediction, said they should pass about 40 feet (12 meters) apart at 6:39:35 p.m. local time. The odds of a collision went up in large part based on the information that one of the two satellites, the Gravity Gradient Stabilization Experiment (GGSE-4), had a 60-foot (18 m) boom trailing from it, according to LeoLabs. No one knows which way the boom is facing, which complicates the calculation.

One of the satellites is called the Infrared Astronomical Satellite (IRAS). Launched in 1983, it was the first infrared space telescope and operated for less than a year, according to the Jet Propulsion Laboratory. GGSE-4 was a U.S. Air Force experiment launched in 1967 to test spacecraft design principles, according to NASA. The two satellites are unlikely to actually slam into each other, said LeoLabs CEO Dan Ceperley. But predicting the precise movements of fairly small, fast objects over vast distances is a challenge, Ceperley told Live Science. (LeoLabs’ business model is selling improvements on those predictions.)

If they did collide, “there would be thousands of pieces of new debris that would stay in orbit for decades. Those new clouds of debris would threaten any satellites operating near the collision altitude and any spacecraft transiting through on its way to other destinations. The new debris [would] spread out and form a debris belt around the Earth,” Ceperley said.

LeoLabs uses its own network of ground-based radar to track orbiting objects. Still, Jonathan McDowell, a Harvard-Smithsonian Center for Astrophysics astronomer who tracks satellites using public data, said the near-miss prediction was plausible.

“I confirm there is a close approach of these two satellites around 2339 UTC Jan 29. How close isn’t clear from the data I have, but it’s reasonable that LEOLabs data is better,” McDowell told Live Science.

(When it’s 23:39 UTC it’s 6:39 p.m. Eastern time, which is the time zone in Pittsburgh.)

“What’s different here is that this isn’t debris-on-payload but payload-on-payload,” McDowell said. In other words, in this case two satellites, rather than debris and a satellite, are coming close to one another.

It’s pretty common for bits of orbital debris to have near misses in orbit, and those close calls usually go untracked, Ceperley said. It’s more unusual, though, for two full-size satellites to come this close in space. IRAS in particular is the size of a truck, at 11.8 feet by 10.6 feet by 6.7 feet (3.6 by 3.2 by 2.1 m).

“Events like this highlight the need for responsible, timely deorbiting of satellites for space sustainability moving forward. We will continue to monitor this event through the coming days and provide updates as available,” LeoLabs said on Twitter.

It’s still unlikely the two satellites will collide, and the odds are subject to change based on new information. When this article was first written, LeoLabs calculated 1 in 100 odds of a collision. They’ve since been revised down to 1 in 1,000, and then up to 1 in 20.

Editor’s note: This story was corrected on January 28. The date Jan. 29 is a Wednesday, not a Thursday.

Originally published on Live Science.
By Rafi Letzter – Staff Writer



Physicists: Ancient life might have escaped Earth and journeyed to alien stars


A pair of researchers present a wild theory in a new paper.

An illustration shows a comet passing in front of a star.
(Image: © NASA/JPL-Caltech)

A pair of Harvard astrophysicists have proposed a wild theory of how life might have spread through the universe.

Imagine this:

Millions or billions of years ago, back when the solar system was more crowded, a giant comet grazed the outer reaches of our atmosphere. It was moving fast, several tens of miles above the Earth’s surface — too high to burn up as a fireball, but low enough that the atmosphere slowed it down a little bit. Extremely hardy microbes were floating up there in its path, and some of those bugs survived the collision with the ball of ice. These microbes ended up embedded deep within the comet’s porous surface, protected from the radiation of deep space as the comet rocketed away from Earth and eventually out of the solar system entirely. Tens of thousands, maybe millions, of years passed before the comet ended up in another solar system with habitable planets. Eventually, the object crashed into one of those planets, deposited the microbes — a few of them still living — and set up a new outpost for earthly life in the universe.

You could call it “interstellar panspermia,” the seeding of distant star systems with exported life.

We have no idea whether this ever actually happened, and there’s a mountain of reasons to be skeptical. But in a new paper, Amir Siraj and Avi Loeb, both astrophysicists at Harvard University, argue that at least the first part of this story — the depositing of the microbes into a comet that gets ejected from the solar system — should have happened between one and a few dozen times in Earth’s history. Siraj told Live Science that although a lot more work needs to be done to back up the finding, it should be taken seriously — and that the paper may have been, if anything, too conservative in its estimate of the number of life-exporting events.

While the study’s concept may seem far-fetched, humanity is constantly confronted with seeming impossibilities that turn out to be true, like Earth going around the sun, or quantum physics, or bacteria hitching a ride into the galaxy aboard a comet, Siraj said.

And there’s been reason to suspect that it might be possible. A series of experiments using small rockets in the 1970s found colonies of bacteria in the upper atmosphere. Comets really do enter and leave our solar system from time to time, and Siraj and Loeb’s calculations show that it’s plausible, maybe even likely, that this has happened to large comets that graze Earth. Comets are porous, so they might actually shield microbes from deadly radiation, and some microbes can survive a remarkably long time in space.

That alone is reason for scientists to take the idea seriously, Siraj said, and for researchers from fields like biology to jump in and figure out some of the details.

“It’s a brand new field of science,” he told Live Science.

However, Stephen Kane, an astrophysicist at the University of California, Riverside, told Live Science that he was deeply skeptical of the suggestion that microbes from Earth might have actually turned up alive on alien planets through some version of this process.

The first problem would occur when the comet slammed into the atmosphere, he said. Siraj and Loeb point out that some bacteria can survive extraordinary accelerations. But the precise mechanism by which the microbes would adhere to the comet is unclear, Kane said, since the aerodynamic forces around the comet might make it impossible for any microbes to reach the surface and work their way deep enough below the surface to be protected from radiation.

It’s also not clear, he said, whether any microbes would really have been up high in our atmosphere in the first place. Those rocket experiments from the 1970s are old and questionable, he said, and we still don’t have a good picture of what the biology of the upper atmosphere really looks like today — let alone hundreds of millions of years ago, when comet encounters were much more common.

The biggest question, though, Kane said, is what would happen to the microbes after they landed aboard the comet. It’s plausible, he said, that some bacteria might survive decades in space — long enough to reach, say, Mars. But there’s little direct evidence that any bacteria might survive the thousands or millions of years necessary to travel to another habitable star system. And that’s really the key idea of this paper: Researchers have long suggested that debris from major collisions might blast life around between our solar system’s planets and moons. But exporting life to an alien star system likely requires a more specialized scenario.

Still, Kane said, the calculations in this study of how precisely a comet might skim through the atmosphere were new to him, and “very interesting.”

Siraj didn’t strongly challenge any of Kane’s concerns, but reframed them one by one as opportunities for further study. He wants to know, he said, precisely what the biology of the upper atmosphere looks like, and how comets might interact with it. There’s reason to think that at least some bacteria might survive super-long trips through deep space, he said, based on how robust they are under extreme conditions on Earth and in orbit. But for now, it’s time for scientists across fields to jump in and start filling in the gaps, Siraj said.

Originally published on Live Science.
By Rafi Letzter – Staff Writer



Black holes shouldn’t echo, but this one might. Score 1 for Stephen Hawking?


This isn’t how black holes are supposed to behave.

Black holes are infinitely dense objects surrounded by smooth event horizons.
(Image: © Shutterstock)

When two neutron stars slammed together far off in space, they created a powerful shaking in the universe — gravitational waves that scientists detected on Earth in 2017. Now, sifting through those gravitational wave recordings, a pair of physicists think they’ve found evidence of a black hole that would violate the neat model drawn from Albert Einstein’s theory of general relativity.

In general relativity, black holes are simple objects: infinitely compressed singularities, or points of matter, surrounded by smooth event horizons through which no light, energy or matter can escape. Until now, every bit of data we’ve gleaned from black holes has supported this model.

But in the 1970s, Stephen Hawking wrote a series of papers suggesting that the borders of black holes aren’t quite so smooth. Instead, they blur thanks to a series of effects linked to quantum mechanics that allow “Hawking radiation” to escape. In the years since, a number of alternative black hole models have emerged, where those smooth, perfect event horizons would be replaced with flimsier, fuzzier membranes. More recently, physicists have predicted that this fuzz would be particularly intense around newly formed black holes — substantial enough to reflect gravitational waves, producing an echo in the signal of a black hole’s formation. Now, in the aftermath of the neutron star collision, two physicists think they’ve found that type of echo. They argue that a black hole that formed when the neutron stars merged is ringing like an echoing bell and shattering simple black hole physics.

If the echo is real, then it must be from the fuzz of a quantum black hole, said study co-author Niayesh Afshordi, a physicist at the University of Waterloo in Canada.

“In Einstein’s theory of relativity, matter can orbit around black holes at large distances but should fall into the black hole close to the event horizon,” Afshordi told Live Science.

So, close to the black hole, there shouldn’t be any loose material to echo gravitational waves. Even black holes that surround themselves with disks of material should have an empty zone right around their event horizons, he said.

“The time delay we expect (and observe) for our echoes … can only be explained if some quantum structure sits just outside their event horizons,” Afshordi said.

That’s a break from usually unshakable predictions of general relativity.

That said, data from existing gravitational wave detectors is noisy, difficult to properly interpret and prone to false positives. A gravitational wave echoing off some quantum fuzz around a black hole would be an entirely new sort of detection. But Afshordi said that in the immediate aftermath of the merger, that fuzz should have been intense enough to reflect gravitational waves so sharply that existing detectors could see it.

Joey Neilsen, an astrophysicist at Villanova University in Pennsylvania who wasn’t involved in this paper, said that the result is compelling — particularly because the echoes turned up in more than one gravitational wave detector.

“That’s more convincing than combing through data looking for a specific kind of signal and saying, ‘aha!’ when you find it,” Neilsen told Live Science.

Still, he said, he’d need to see more information before he was absolutely convinced that the echoes were real. The paper doesn’t account for other gravitational wave detections gathered within about 30 seconds of the reported echoes, Neilsen said.

“Because significance calculations are so sensitive to how you pick and choose your data, I would want to understand all those features more fully before I drew any firm conclusions,” he said.

Maximiliano Isi, an astrophysicist at MIT, was skeptical.

“It is not the first claim of this nature coming from this group,” he told Live Science.
“Unfortunately, other groups have been unable to reproduce their results, and not for lack of trying.”

Isi pointed to a series of papers that failed to find echoes in the same data, one of which, published in June, he described as “a more sophisticated, statistically robust analysis.”

Afshordi said that this new paper of his has the advantage of being far more sensitive than previous work, with more robust models to detect fainter echoes, adding, “the finding that we reported… is the most statistically significant out of the dozen searches [I discussed], as it had the false alarm chance of roughly 2 out of 100,000.”

Even if the echo is real, scientists still don’t know precisely what sort of exotic astrophysical object produced the phenomenon, Neilsen added.

“What’s so interesting about this case is that we don’t have any idea what was left after the original merger: Did a black hole form right away, or was there some exotic, short-lived intermediate object?” Neilsen said. “The results here are easiest to make sense of if the remnant is a hypermassive [neutron star] that collapses within a second or so, but the echo presented here isn’t convincing to me that that scenario is what actually happened.”

It is possible there are echoes in the data, Isi said, which would be enormously significant. He’s just not convinced yet.

Regardless of how all the data shakes out, Neilsen said, it’s clear the result here is pointing at something worth exploring further.

“Astrophysically, we’re in uncharted territory, and that’s really exciting,” he said. The paper was published Nov. 13, 2019, in the Journal of Cosmology and Astroparticle Physics.

Originally published on Live Science.
By Rafi Letzter – Staff Writer



Towering dinosaur with radioactive skull identified in Utah


The 155-million-year-old specimen was headless until a radiation detector located the skeleton’s skull.

This illustration shows a pack of the newly discovered Allosaurus jimmadseni attacking a young sauropod.
(Image: © Todd Marshall)

Paleontologists have discovered the skeleton and radioactive skull of a previously unknown species of Allosaurus. The fearsome two-legged dinosaur sported 80 sharp teeth and horns over its eyes when it lived about 155 million years ago in what is now Utah.

But researchers didn’t know any of these details at first; originally, they found only the dinosaur’s skeleton but not the head. Even so, the block of rock that encased the skeleton was so massive — it weighed 6,000 lbs. (2,700 kilograms) — that paleontologists had to use explosives to free the fossils and a helicopter to transport the block.

It wasn’t until six years later, in 1996, that the headless body and its skull were reunited.

That happy reunion was made possible by Ramal Jones, a retired University of Utah radiologist. Armed with a radiation detector, he located the radioactive skull not far from its body. It’s not uncommon for dinosaur bones to be radioactive, as radioactive elements can leach into the bones over time from the surrounding sediment. Later, teams from Dinosaur National Monument excavated the dinosaur’s head, which helped researchers identify the remains as a newfound dinosaur species.

Scientists named the beast Allosaurus jimmadseni, after paleontologist James Madsen Jr. (1932-2009), recognizing him for his “herculean efforts of protecting, excavating, preparing and curating of many thousands of Allosaurus bones,” the researchers wrote in the study.

During the late Jurassic period, A. jimmadseni lived on the semiarid flood plains of western North America. This dinosaur is the oldest species of Allosaurus, predating Utah’s better-known Allosaurus fragilis, which helped make the Allosaurus the state’s official fossil.

This illustration shows all of the bumps and dips on the fearsome face of Allosaurus jimmadseni. (Image credit: Andrey Atuchin)


The number of bones (white) discovered from the previously unknown Allosaurus species. (Image credit: Scott Hartman)

Researchers named the newfound Allosaurus after paleontologist James Madsen Jr., shown here assembling a composite skeleton of an Allosaurus from the Cleveland-Lloyd Dinosaur Quarry in Utah. (Image credit: J. Willard Marriott Library/University of Utah)

“Previously, paleontologists thought there was only one species of Allosaurus in Jurassic North America, but this study shows there were two species — the newly described Allosaurus jimmadseni evolved at least 5 million years earlier than its younger cousin, Allosaurus fragilis,” study co-lead researcher Mark Loewen said in a statement. Loewen is a research associate at the Natural History Museum of Utah and an associate professor in the Department of Geology and Geophysics at the University of Utah.

This dinosaur was a big carnivore, measuring up to 29 feet (9 meters) long and weighing about 4,000 lbs. (1.8 metric tons). It had a narrow skull, horns in front of its eyes and a crest that ran from those horns to its nose. Each of the dinosaur’s long arms ended with three sharp claws.

“The skull of Allosaurus jimmadseni is more lightly built than its later relative Allosaurus fragilis, suggesting a different feeding behavior between the two,” Loewen noted.

Loewen and co-researcher Daniel Chure, a retired paleontologist at Dinosaur National Monument, detailed the study online Friday (Jan. 24) in the journal PeerJ.

Originally published on Live Science.
By Laura Geggel – Associate Editor



Why physicists are determined to prove Galileo and Einstein wrong


Scientists tested Galileo and Einstein’s theories by dropping two objects inside this satellite named MICROSCOPE (artist’s impression).
(Image: © CNES)

In the 17th century, famed astronomer and physicist Galileo Galilei is said to have climbed to the top of the Tower of Pisa and dropped two different-sized cannonballs. He was trying to demonstrate his theory — which Albert Einstein later updated and added to his theory of relativity — that objects fall at the same rate regardless of their size.

Now, after spending two years dropping two objects of different mass into a free fall in a satellite, a group of scientists has concluded that Galileo and Einstein were right: The objects fell at a rate that was within two-trillionths of a percent of each other, according to a new study.

This effect has been confirmed time and time again, as has Einstein’s theory of relativity — yet scientists still aren’t convinced that there isn’t some kind of exception somewhere. “Scientists have always had a difficult time actually accepting that nature should behave that way,” said senior author Peter Wolf, research director at the French National Center for Scientific Research’s Paris Observatory.

That’s because there are still inconsistencies in scientists’ understanding of the universe.

“Quantum mechanics and general relativity, which are the two basic theories all of physics is built on today … are still not unified,” Wolf told Live Science. What’s more, although scientific theory says the universe is made up mostly of dark matter and dark energy, experiments have failed to detect these mysterious substances.

“So, if we live in a world where there’s dark matter around that we can’t see, that might have an influence on the motion of [objects],” Wolf said. That influence would be “a very tiny one,” but it would be there nonetheless. So, if scientists see test objects fall at different rates, that “might be an indication that we’re actually looking at the effect of dark matter,” he added.

Wolf and an international group of researchers — including scientists from France’s National Center for Space Studies and the European Space Agency — set out to test Einstein and Galileo’s foundational idea that no matter where you do an experiment, no matter how you orient it and what velocity you’re moving at through space, the objects will fall at the same rate.

The researchers put two cylindrical objects — one made of titanium and the other platinum — inside each other and loaded them onto a satellite. The orbiting satellite was naturally “falling” because there were no forces acting on it, Wolf said. They suspended the cylinders within an electromagnetic field and dropped the objects for 100 to 200 hours at a time.

From the forces the researchers needed to apply to keep the cylinders in place inside the satellite, the team deduced how the cylinders fell and the rate at which they fell, Wolf said.

And, sure enough, the team found that the two objects fell at almost exactly the same rate, within two-trillionths of a percent of each other. That suggested Galileo was correct. What’s more, they dropped the objects at different times during the two-year experiment and got the same result, suggesting Einstein’s theory of relativity was also correct.
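
To put “two-trillionths of a percent” in the dimensionless form physicists use for such comparisons (often called the Eötvös ratio), a quick illustrative calculation — the acceleration values below are made up, and only the relative difference matters:

```python
# Two-trillionths of a percent as a dimensionless ratio:
# 2e-12 percent = 2e-12 / 100 = 2e-14.
ratio = 2e-12 / 100

# A toy comparison of two free-fall accelerations differing by that
# relative amount (9.81 m/s^2 is just a placeholder value).
a_platinum = 9.81
a_titanium = a_platinum * (1 + ratio)

# The Eotvos-style ratio: twice the difference over the sum.
eta = 2 * abs(a_titanium - a_platinum) / (a_titanium + a_platinum)
# eta comes out at about 2e-14 -- two parts in a hundred trillion.
```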

Their test was an order of magnitude more sensitive than previous tests. Even so, the researchers have published only 10% of the data from the experiment, and they hope to do further analysis of the rest.

Not satisfied with this mind-boggling level of precision, scientists have put together several new proposals to do similar experiments with two orders of magnitude greater sensitivity, Wolf said. Also, some physicists want to conduct similar experiments at the tiniest scale, with individual atoms of different types, such as rubidium and potassium, he added.

The findings were published Dec. 2 in the journal Physical Review Letters.

Originally published on Live Science.
By Yasemin Saplakoglu – Staff Writer



Mysterious particles spewing from Antarctica defy physics


What’s making these things fly out of the frozen continent?

Researchers prepare to launch the Antarctic Impulsive Transient Antenna (ANITA) experiment, which picked up signals of impossible-seeming particles as it dangled from its balloon over Antarctica.
(Image: © NASA)

Our best model of particle physics is bursting at the seams as it struggles to contain all the weirdness in the universe. Now, it seems more likely than ever that it might pop, thanks to a series of strange events in Antarctica.

The death of this reigning physics paradigm, the Standard Model, has been predicted for decades. There are hints of its problems in the physics we already have. Strange results from laboratory experiments suggest flickers of ghostly new species of neutrinos beyond the three described in the Standard Model. And the universe seems full of dark matter that no particle in the Standard Model can explain.

But recent tantalizing evidence might one day tie those vague strands of data together: Three times since 2016, ultra-high-energy particles have blasted up through the ice of Antarctica, setting off detectors in the Antarctic Impulsive Transient Antenna (ANITA) experiment, a machine dangling from a NASA balloon far above the frozen surface.

As Live Science reported in 2018, those events — along with several additional particles detected later at the buried Antarctic neutrino observatory IceCube — don’t match the expected behavior of any Standard Model particles. The particles look like ultra high-energy neutrinos. But ultra high-energy neutrinos shouldn’t be able to pass through the Earth. That suggests that some other kind of particle — one that’s never been seen before — is flinging itself into the cold southern sky.

Now, in a new paper, a team of physicists working on IceCube has cast heavy doubt on one of the last remaining Standard Model explanations for these particles: cosmic accelerators, giant neutrino guns hiding in space that would periodically fire intense neutrino bullets at Earth. A collection of hyperactive neutrino guns somewhere in our northern sky could have blasted enough neutrinos into Earth that we’d detect particles shooting out of the southern tip of our planet. But the IceCube researchers didn’t find any evidence of that collection out there, which suggests that new physics is needed to explain the mysterious particles.

To understand why, it’s important to know why these mystery particles are so unsettling for the Standard Model.

Neutrinos are the faintest particles we know about; they’re difficult to detect and nearly massless. They pass through our planet all the time — mostly coming from the sun and rarely, if ever, colliding with the protons, neutrons and electrons that make up our bodies and the dirt beneath our feet.

But ultra-high-energy neutrinos from deep space are different from their low-energy cousins. Much rarer than low-energy neutrinos, they have wider “cross sections,” meaning they’re more likely to collide with other particles as they pass through them. The odds of an ultra-high-energy neutrino making it all the way through Earth intact are so low that you’d never expect to detect it happening. That’s why the ANITA detections were so surprising: It was as if the instrument had won the lottery twice, and then IceCube had won it a couple more times as soon as it started buying tickets.

And physicists know how many lottery tickets they had to work with. Many ultra-high-energy cosmic neutrinos come from the interactions of cosmic rays with the cosmic microwave background (CMB), the faint afterglow of the Big Bang. Every once in a while, those cosmic rays interact with the CMB in just the right way to fire high-energy particles at Earth. This is called the “flux,” and it’s the same all over the sky. Both ANITA and IceCube have already measured what the cosmic neutrino flux looks like to each of their sensors, and it just doesn’t produce enough high-energy neutrinos that you’d expect to detect a neutrino flying out of Earth at either detector even once.

“If the events detected by ANITA belong to this diffuse neutrino component, ANITA should have measured many other events at other elevation angles,” said Anastasia Barbano, a University of Geneva physicist who works on IceCube.

But in theory, there could have been ultra-high-energy neutrino sources beyond the sky-wide flux, Barbano told Live Science: those neutrino guns, or cosmic accelerators.

“If it is not a matter of neutrinos produced by the interaction of ultra-high-energy cosmic rays with the CMB, then the observed events can be either neutrinos produced by individual cosmic accelerators in a given time interval” or some unknown Earthly source, Barbano said.

Blazars, active galactic nuclei, gamma-ray bursts, starburst galaxies, galaxy mergers, and magnetized and fast-spinning neutron stars are all good candidates for those sorts of accelerators, she said. And we know that cosmic neutrino accelerators do exist in space; in 2018, IceCube tracked a high-energy neutrino back to a blazar, an intense jet of particles coming from an active black hole at the center of a distant galaxy.

ANITA picks up only the most extreme high-energy neutrinos, Barbano said, and if the upward-flying particles were cosmic-accelerator-boosted neutrinos from the Standard Model — most likely tau neutrinos — then the beam should have come with a shower of lower-energy particles that would have tripped IceCube’s lower-energy detectors.

“We looked for events in seven years of IceCube data,” Barbano said — events that matched the angle and length of the ANITA detections, which you’d expect to find if there were a significant battery of cosmic neutrino guns out there firing at Earth to produce these up-going particles. But none turned up.

Their results don’t completely eliminate the possibility of an accelerator source out there. But they do “severely constrain” the range of possibilities, eliminating all of the most plausible scenarios involving cosmic accelerators and many less-plausible ones.

“The message we want to convey to the public is that a Standard Model astrophysical explanation does not work no matter how you slice it,” Barbano said.

Researchers don’t know what’s next. Neither ANITA nor IceCube is an ideal detector for the needed follow-up searches, Barbano said, leaving the researchers with very little data on which to base their assumptions about these mysterious particles. It’s a bit like trying to figure out the picture on a giant jigsaw puzzle from just a handful of pieces.

Right now, many possibilities seem to fit the limited data, including a fourth species of “sterile” neutrino outside the Standard Model and a range of theorized types of dark matter. Any of these explanations would be revolutionary. But none is strongly favored yet.

“We have to wait for the next generation of neutrino detectors,” Barbano said.

The paper has not yet been peer reviewed; it was posted Jan. 8 to the arXiv database.

Originally published on Live Science.
By Rafi Letzter – Staff Writer



Scientists uncover new mode of evolution


Scientists have discovered a form of natural selection that doesn’t rely on DNA.

(Image: © Shutterstock)

Evolution and natural selection take place at the level of DNA, as genes mutate and genetic traits either stick around or are lost over time. But now, scientists think evolution may take place on a whole other scale — passed down not through genes, but through molecules stuck to their surfaces.

These molecules, known as methyl groups, alter the structure of DNA and can turn genes on and off. The alterations are known as “epigenetic modifications,” meaning they appear “above” or “on top of” the genome. Many organisms, including humans, have DNA dotted with methyl groups, but creatures like fruit flies and roundworms lost the genes required for methylation over evolutionary time.

Another organism, the yeast Cryptococcus neoformans, also lost key genes for methylation sometime during the Cretaceous period, about 50 to 150 million years ago. But remarkably, in its current form, the fungus still has methyl groups on its genome. Now, scientists theorize that C. neoformans was able to hang on to epigenetic edits for tens of millions of years, thanks to a newfound mode of evolution, according to a study published Jan. 16 in the journal Cell.

The researchers behind the study didn’t expect to uncover a well-kept secret of evolution, senior author Dr. Hiten Madhani, a professor of biochemistry and biophysics at the University of California, San Francisco, and principal investigator at the Chan Zuckerberg Biohub, told Live Science.

The group typically studies C. neoformans to better understand how the yeast causes fungal meningitis in humans. The fungus tends to infect people with weak immune systems and causes about 20% of all HIV/AIDS-related deaths, according to a statement from UCSF. Madhani and his colleagues spend their days digging through the genetic code of C. neoformans, searching for critical genes that help the yeast invade human cells. But the team was surprised when reports emerged suggesting that the yeast’s genetic material comes adorned with methyl groups.

“When we learned [C. neoformans] had DNA methylation … I thought, we have to look at this, not knowing at all what we’d find,” Madhani said.

In vertebrates and plants, cells add methyl groups to DNA with the help of two enzymes. The first, called “de novo methyltransferase,” sticks methyl groups onto unadorned genes. The enzyme peppers each half of the helix-shaped DNA strand with the same pattern of methyl groups, creating a symmetric design. During cell division, the double helix unfurls and builds two new DNA strands from the matching halves. At this point, an enzyme called “maintenance methyltransferase” swoops in to copy all the methyl groups from the original strand onto the newly built half.
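
The division-and-repair cycle described above can be caricatured in a few lines. This is a toy sketch — the site positions and methylation pattern are invented for illustration, not taken from the study:

```python
# Toy model of DNA methylation maintenance during cell division.
# A double helix as paired strands; True marks a methylated site.
parent_top = [True, False, True, True, False]
parent_bottom = list(parent_top)  # de novo methylation makes the pattern symmetric

def replicate(template):
    """Semiconservative replication: the new strand starts with no methyl marks."""
    new_strand = [False] * len(template)
    return template, new_strand

def maintenance_methyltransferase(template, new_strand):
    """Copy every methyl mark from the template strand onto the new strand."""
    return list(template)

# Cell division: each half of the helix templates a new strand, and the
# maintenance enzyme restores the symmetric pattern from the old half.
template, new = replicate(parent_top)
repaired = maintenance_methyltransferase(template, new)

assert repaired == parent_top  # the daughter helix carries the same marks
```

The key point the article makes next is that without the de novo enzyme, this copying step is the only way the pattern survives.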

Madhani and his colleagues looked at existing evolutionary trees to trace the history of C. neoformans through time, and found that, during the Cretaceous period, the yeast’s ancestor had both enzymes required for DNA methylation. But somewhere along the line, C. neoformans lost the gene needed to make de novo methyltransferase. Without the enzyme, the organism could no longer add new methyl groups to its DNA — it could only copy down existing methyl groups using its maintenance enzyme.

In theory, even working alone, the maintenance enzyme could keep DNA covered in methyl groups indefinitely — if it could produce a perfect copy every single time.

In reality, the enzyme makes mistakes and loses track of methyl groups each time the cell divides, the team found. When raised in a petri dish, C. neoformans cells occasionally gained new methyl groups by random chance, similar to how random mutations arise in DNA. However, the cells lost methyl groups about 20 times faster than they could gain new ones.

Within about 7,500 generations, every last methyl group would disappear, leaving the maintenance enzyme nothing to copy, the team estimated. Given the speed at which C. neoformans multiplies, the yeast should have lost all its methyl groups within about 130 years. Instead, it retained the epigenetic edits for tens of millions of years.
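
The decay logic can be sketched as a toy simulation. The absolute rates below are assumptions for illustration — only the roughly 20-to-1 ratio of loss to gain comes from the study — so the generation count here will not match the team’s ~7,500-generation estimate:

```python
# Toy model: methylated fraction of sites under per-generation loss and gain.
loss_rate = 0.001            # assumed per-site, per-generation loss probability
gain_rate = loss_rate / 20   # gains arise ~20x more slowly (ratio per the study)

sites = 1.0   # start fully methylated (normalized fraction)
generations = 0
while sites > 0.10:          # run until 90% of methylation is gone
    sites += gain_rate * (1 - sites) - loss_rate * sites
    generations += 1

print(f"methylation fell below 10% after {generations} generations")
```

Without anything favoring methylated cells, the fraction just decays toward a low equilibrium; the study’s argument is that natural selection is the missing force that kept real methylation levels high for tens of millions of years.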

“Because the rate of loss is higher than the rate of gain, the system would slowly lose methylation over time if there wasn’t a mechanism to keep it there,” Madhani said. That mechanism is natural selection, he said. In other words, even though C. neoformans was gaining new methyl groups much more slowly than it was losing them, methylation dramatically increased the organism’s “fitness,” which meant it could outcompete individuals with less methylation. “Fit” individuals prevailed over those with fewer methyl groups, and thus, methylation levels remained higher over millions of years.

But what evolutionary advantage could these methyl groups offer C. neoformans? Well, they might protect the yeast’s genome from potentially lethal damage, Madhani said.

Transposons, also known as “jumping genes,” hop around the genome at whim and often insert themselves in very inconvenient places. For instance, a transposon could leap into the center of a gene required for cell survival; that cell might malfunction or die. Luckily, methyl groups can grab onto transposons and lock them in place. It may be that C. neoformans maintains a certain level of DNA methylation to keep transposons in check, Madhani said.

“No individual [methylation] site is particularly important, but overall density of methylation on transposons is selected for” over evolutionary timescales, he added. “The same thing is probably true in our genomes.”

Many mysteries still surround DNA methylation in C. neoformans. Besides copying methyl groups between DNA strands, maintenance methyltransferase seems to be important when it comes to how the yeast causes infections in humans, according to a 2008 study by Madhani. Without the enzyme intact, the organism cannot hack into cells as effectively. “We have no idea why it’s required for efficient infection,” Madhani said.

The enzyme also requires large amounts of chemical energy to function and only copies methyl groups onto the blank half of replicated DNA strands. In comparison, the equivalent enzyme in other organisms does not require extra energy to function and sometimes interacts with naked DNA, devoid of any methyl groups, according to a report posted on the preprint server bioRxiv. Further research will reveal exactly how methylation works in C. neoformans, and whether this newfound form of evolution appears in other organisms.

Originally published on Live Science.

By Nicoletta Lanese – Staff Writer



A burst of gravitational waves hit our planet. Astronomers have no clue where it’s from.


(Image: © Shutterstock)

A mysterious cosmic event might have ever-so-slightly stretched and squeezed our planet last week. On Jan. 14, astronomers detected a split-second burst of gravitational waves, distortions in space-time … but researchers don’t know where this burst came from.

The gravitational wave signal, picked up by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Virgo interferometer, lasted only 14 milliseconds, and astronomers haven’t yet been able to pinpoint the burst’s cause or determine whether it was just a blip in the detectors.

Gravitational waves can be caused by the collision of massive objects, such as two black holes or two neutron stars. Astronomers detected such gravitational waves from a neutron star collision in 2017 and from one in April of 2019, according to new findings that were presented at the meeting of the American Astronomical Society on Jan. 6.

But gravitational waves from collisions of such massive objects typically last longer and manifest in the data as a series of waves that change in frequency over time as the two orbiting objects move closer to each other, said Andy Howell, a staff scientist at Los Cumbres Observatory Global Telescope Network and an adjunct faculty member in physics at the University of California, Santa Barbara. He was not part of the LIGO research.

This new signal was not a series of waves but a burst, Howell said. A more likely possibility is that this short-lived burst of gravitational waves came from a more transient event, such as a supernova explosion, the catastrophic ending to a star’s life.

Indeed, some astronomers have hypothesized that this could have been a signal from the star Betelgeuse, which mysteriously dimmed recently and is expected to undergo a supernova explosion. But Betelgeuse is still there, so that scenario is ruled out, Howell said. It’s also unlikely to be another supernova, because supernovas happen in our galaxy only about once every 100 years, he added.

What’s more, the burst still “seems a little too short for what we expect from the collapse of a massive star,” he said. “On the other hand, we’ve never seen a star blowing up in gravitational waves before, so we don’t really know what it would look like.” In addition, the astronomers didn’t detect any neutrinos, tiny subatomic particles that carry no charge, which supernovas are known to release.

Another possibility is that the merging of two intermediate-mass black holes caused the signal, Howell said. Merging neutron stars produce waves that last longer (around 30 seconds) than this new signal, while merging black holes might more closely resemble bursts lasting around a couple of seconds. However, intermediate-mass black hole mergers might also release a series of waves that change in frequency.

LIGO came across this signal while specifically looking for such bursts. But “that doesn’t mean that what it found is an intermediate-mass black hole merger,” Howell told Live Science. “We don’t know what they found,” especially since LIGO hasn’t yet released the exact structure of the signal, he added.

It’s also possible that this signal was just noise in the data from the detectors, Howell said. But this burst of gravitational waves was found by all three detectors: LIGO’s two instruments in Washington state and Louisiana, and the Virgo detector in Italy. The false-alarm rate — how often the detectors would register such a signal purely by chance — is once every 25.84 years, which “gives us some indication that this is a pretty good signal,” Howell said.
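
That once-per-25.84-years figure is a rate, and it can be converted into a chance probability for a given stretch of observing time. A minimal sketch, assuming a hypothetical one-year observing run (the run length is an illustrative assumption, not from the article):

```python
import math

FALSE_ALARM_YEARS = 25.84   # one chance coincidence per 25.84 years (article's figure)
observing_years = 1.0       # assumed observing run length (illustrative)

# Treat chance coincidences as a Poisson process: the probability of at least
# one false alarm during the run is 1 - exp(-T / mean_time_between_alarms).
p_false = 1 - math.exp(-observing_years / FALSE_ALARM_YEARS)
print(f"chance of a false alarm in {observing_years:g} yr: {p_false:.1%}")
```

Under that assumption, the odds of such a coincidence in a year of data are only a few percent, which is why the team considered the signal promising.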

There could be other explanations for this mysterious burst, too. For example, a supernova could have directly collapsed into a black hole without producing neutrinos, though such an occurrence is very speculative, Howell said. Astronomers are now pointing their telescopes to that region to try to pinpoint the source of the waves.

“The universe always surprises us,” he added. “There could be totally new astronomical events out there that produce gravitational waves that we haven’t really thought about.”

Editor’s Note: This story was updated to clarify that the signal was not a series of waves, but a burst.

Originally published on Live Science.
By Yasemin Saplakoglu – Staff Writer