Science

Elon Musk has started digging a giant underground tunnel in LA

Elon Musk, CEO of Tesla and SpaceX, has been moving quickly on the development of what he calls the Boring Company. In January of last year, he presented a proposal at the Hyperloop Pod Competition for a company that would lighten traffic through the use of underground tunnels.

One year later, Musk mentioned that construction of a tunnel could begin in Los Angeles by the end of February. Musk hasn’t fallen short of his promise – his new boring machine began to dig a tunnel as part of a demonstration at SpaceX’s Hyperloop Pod Competition last weekend.

Although the machine has so far created only a modest hole, his team plans to eventually construct a full-fledged tunnel.

Musk hasn’t been too clear about how the tunnel will function, but the Hyperloop could be involved – after all, it is supposed to be able to work above or below ground. Musk has, at the very least, not denied the connection:

One Twitter user asked @elonmusk: “So Mr. Musk, are you thinking to combine Hyperloop and Tunnels to make transportation revolution.”

For those who are curious to see what Musk’s boring machine looks like, he released a photo of it on Twitter recently.

This might be only one part of a massive machine without the cutting head attached:

Although tunnelling machines already exist, Musk’s company would be unique because its machines would speed up the digging process almost tenfold compared with conventional methods.

If Musk’s boring machine can make tunnelling quicker and easier, underground transit could also become more popular.

Tunnels would be a solution to heavy traffic, especially in urban areas, as well as a potential way to bring transportation into buildings.

Musk isn’t exactly serious about creating a separate entity called the Boring Company.

Thankfully, the project fits into Tesla’s mission to alleviate traffic, which, in turn, will reduce fuel consumption and emissions.

His boring machine is also in line with SpaceX’s mission to eventually settle a colony on Mars.

It could be used to test the viability of tunnel construction on the Red Planet, creating underground habitats that would protect people from extreme cold, low pressure, and high radiation.

This article was originally published by Futurism. Read the original article.

The U.S. Navy Just Announced The End Of Big Oil And No One Noticed

Surf’s up! The Navy appears to have achieved the Holy Grail of energy independence – turning seawater into fuel:

After decades of experiments, U.S. Navy scientists believe they may have solved one of the world’s great challenges: how to turn seawater into fuel.

The new fuel is initially expected to cost around $3 to $6 per gallon, according to the U.S. Naval Research Laboratory, which has already flown a model aircraft on it.

Curiously, this doesn’t seem to be making much of a splash (no pun intended) on the evening news. Let’s repeat this: The United States Navy has figured out how to turn seawater into fuel and it will cost about the same as gasoline.

This technology is in its infancy and it’s already this cheap? What happens when it’s refined and perfected? Oil is only getting more expensive as the easy-to-reach deposits are tapped so this truly is, as it’s being called, a “game changer.”

I expect the GOP to go ballistic over this and try to legislate it out of existence. It’s a threat to their fossil fuel masters because it will cost them trillions in profits. It’s also “green” technology and Republicans will despise it on those grounds alone. They already have a track record of trying to do this. Unfortunately, once this kind of genie is out of the bottle, it’s very hard to put back in.

There are two other aspects to this story that have not been brought up yet:

1. The process pulls carbon dioxide (the greenhouse gas driving Climate Change) out of the ocean. One of the less well-publicized aspects of Climate Change is that the ocean acts like a sponge for CO2 and it’s just about reached its safe limit. The ocean is steadily becoming more acidic from all of the increased carbon dioxide. This in turn poisons delicate ecosystems like coral reefs that keep the ocean healthy.

If we pull out massive amounts of CO2, even if we burn it again, not all of it will make it back into the water. Hell, we could even pull some of it and not use it in order to return the ocean to a sustainable level. That, in turn, will help pull more of the excess CO2 out of the air even as we put it back. It would be the ultimate in recycling.

2. This will devastate oil rich countries but it will get us the hell out of the Middle East (another reason Republicans will oppose this). Let’s be honest, we’re not in the Middle East for humanitarian reasons. We’re there for oil. Period. We spend trillions to secure our access to it and fight a “war” on terrorism. Take away our need to be there and, suddenly, justifying our overseas adventures gets a lot harder to sell.

And if we “leak” the technology? Every dictator propped up by oil will tumble almost overnight. Yes, it will be a bloody mess but we won’t be pissing away the lives of our military to keep scumbags in power. Let those countries figure out who they want to be without billionaire thugs and their mercenary armies running the show.

Why this is not a major story mystifies me. I’m curious to see how it all plays out, so stay tuned.

UPDATE:

People have been asking for more details about the process. This is from the Naval Research Laboratory’s official press release:

Using an innovative and proprietary NRL electrolytic cation exchange module (E-CEM), both dissolved and bound CO2 are removed from seawater at 92 percent efficiency by re-equilibrating carbonate and bicarbonate to CO2 and simultaneously producing H2. The gases are then converted to liquid hydrocarbons by a metal catalyst in a reactor system.

In plain English, fuel is made from hydrocarbons (hydrogen and carbon). This process pulls both hydrogen and carbon from seawater and recombines them to make fuel. The process can be used on air as well, but seawater holds about 140 times more carbon dioxide than air, so it’s better suited for carbon collection.
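
To make that a little more concrete, here is a simplified sketch of the two steps. These are generic, textbook-style reactions for this kind of fuel synthesis, not the NRL’s proprietary chemistry, so treat them as illustrative only. First, splitting water supplies the hydrogen:

\[ 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2} \]

Then the recovered carbon dioxide is hydrogenated over a metal catalyst into hydrocarbon chain units:

\[ \mathrm{CO_2} + 3\,\mathrm{H_2} \;\longrightarrow\; (\mathrm{-CH_2-}) + 2\,\mathrm{H_2O} \]

Here (–CH2–) stands for one link in a hydrocarbon chain; string enough links together and you have the long-chain molecules that make up liquid fuel. In the NRL module, the CO2 capture and the hydrogen production happen in a single electrochemical step, as the press release above describes.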

Another detail people seem to be confused about: This is essentially a carbon neutral process. The ocean is like a sponge for carbon dioxide in the air and currently has an excess amount dissolved in it. The process pulls carbon dioxide out of the ocean. It’s converted and burned as fuel. This releases the carbon dioxide back into the air which is then reabsorbed by the ocean. Rinse. Repeat.

Written by the author of www.proudtobeafilthyliberalscum.com

Soft tissue has been discovered inside a 195-million-year-old dinosaur bone

When scientists uncover bones and other hard objects from the ground, they can sometimes find scraps of organic material attached. But nothing we’ve found comes close to the age of proteins recently discovered on a dinosaur fossil thought to date back 195 million years.

That’s around 100 million years older than fragments of collagen found in the thigh bone of a hadrosaur in 2009, and it could give us a unique look back at the biology of other dinosaurs that wandered the Earth at the time.

The discovery was made by researchers at the University of Toronto within a rib of a Lufengosaurus dinosaur, a long-necked herbivore that roamed across what’s now south-western China during the Early Jurassic period.

Image: Farley Katz/Wikimedia Commons

“These proteins are the building blocks of animal soft tissues, and it’s exciting to understand how they have been preserved,” says one of the research team, palaeontologist Robert Reisz.

With the help of colleagues in China and Taiwan, researchers used a synchrotron machine to analyse fossil samples.

The device uses infrared spectroscopy, or targeted beams of light, to identify materials – in this case collagen and iron-rich proteins – without having to risk contaminating the samples.

Image: Robert Reisz

Previous collagen discoveries of this kind required dissolving the rest of the fossilised bone away, and the team behind the latest research says its non-invasive approach could open the way to finding even more organic remains in the future.

To find any kind of soft tissue material is very rare though, as it normally decays naturally in the ground, leaving only the bones behind.

Scientists still aren’t sure why some proteins and collagen are able to survive for so long, but in this case the researchers think the blood vessels helped to form a “closed micro-sized chamber” that isolated the material.

The researchers suggest that small, iron-rich particles left over from blood flowing through the rib bones might have been the source of the haematite that bound to the proteins and helped protect them against the ravages of time.

One of the insights the new find might give us is how dinosaurs evolved into the species of birds we still see on Earth today, which is thought to have happened over the course of just 10 million years – a very short timespan in evolutionary terms.

In the meantime, Reisz says the synchrotron technique has “great future potential”, and should be able to pick up organic material even when there’s only a minuscule amount of it left behind.

That said, don’t expect a Jurassic Park-style theme park anytime soon – these scraps of matter aren’t enough to provide dinosaur DNA, which is thought to degrade far too quickly to survive on anything like these timescales.

Some experts, including Mary Schweitzer from North Carolina State University, say the current tests are too limited in scope for us to know conclusively what we’re dealing with on these bones, and that further analysis is required.

Others, including Stephen Brusatte from the University of Edinburgh in the UK, think the evidence is already strong enough. Neither Schweitzer nor Brusatte were directly involved in the research.

“To find proteins in a 195-million-year-old dinosaur fossil is a startling discovery,” Brusatte told the BBC.

“It almost sounds too good to be true, but this team has used every method at their disposal to verify their discovery, and it seems to hold up.”

The findings have been published in Nature Communications.

Astronomers spot a strange, supersonic space cloud screeching through our galaxy

While focussing on the remains of an exploded star roughly 10,000 light-years away, a team of Japanese astronomers have stumbled across a mysterious cloud of molecules tearing through the Milky Way. It’s moving so quickly, in fact, that they’ve nicknamed the unknown phenomenon the ‘Bullet’.

The cause of this cloud’s ridiculous speed isn’t clear, but so far all signs suggest it’s been sent hurtling through space thanks to a rogue black hole.

On account of their light-sucking talent, black holes aren’t known for being all that easy to spot. They sometimes reveal themselves by stealing material from a nearby star, heating it up and forcing it to emit X-rays.

If they’re wandering alone in interstellar space, however, they tend to remain hidden.

Yet in this case, the shadowy influence of a black hole could explain why a cloud of molecules 2 light-years in size was moving forward at 120 kilometres per second (75 miles per second), and expanding at 50 kilometres per second (31 miles per second).

Weirder still, it was moving against the direction of the Milky Way’s spin.

The astronomers from Keio University in Japan used the 45 metre (148 feet) Radio Telescope at Nobeyama Radio Observatory and the ASTE Telescope in Chile to study the cloud’s surrounding supernova remnant W44. They were interested in how energy from the supernova transferred to the surrounding gas.

What they saw was a cloud which moved faster than could be accounted for by the supernova alone. “Its kinetic energy is a few tens of times larger than that injected by the W44 supernova,” says lead researcher Masaya Yamada.

In spite of what the Alien movies might tell you, sound can technically move through space if there’s plenty of energy and a high enough density of particles. Yet even within clouds of gas and dust, this particle density is far, far lower than on Earth, so waves spread pretty slowly.

The ‘Bullet’ exceeds this speed by at least two orders of magnitude, making it a truly supersonic space cloud.
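
For a rough sense of scale (a back-of-the-envelope estimate, not a figure from the study): in a cold molecular cloud at about 10 Kelvin, taking a mean molecular weight of roughly 2.3, the sound speed is only

\[ c_s \approx \sqrt{\frac{k_B T}{\mu m_H}} = \sqrt{\frac{(1.38\times10^{-23}\ \mathrm{J/K})(10\ \mathrm{K})}{2.3\times(1.67\times10^{-27}\ \mathrm{kg})}} \approx 0.2\ \mathrm{km/s}, \]

so a cloud barrelling along at 120 km/s is moving several hundred times faster than sound waves can travel through its own gas.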

The researchers have proposed two scenarios to explain their observations, both of which require a black hole.

In the first scenario, a rapidly expanding cloud of gas pushed by a stellar explosion would pass over a black hole sitting quietly, minding its own business. This black hole would only be about 3.5 solar masses in size, just big enough to pull the cloud over it, accelerating it to the observed speeds.

Scenario number two would require a black hole 10 times bigger. This monster would punch through the cloud as it zoomed through, dragging the cloud in its wake.

Diagram of two scenarios

Given the current sets of data, it’s hard for the team to decide which scenario is more likely. Higher-resolution observations of the cloud using better telescopes, such as the Atacama Large Millimeter/submillimeter Array in northern Chile, could provide more clues.

Either way, the astronomers are excited by the possibility of having a new method for spotting elusive black holes lurking in the quieter corners of our galaxy.

This research has been published in Astrophysical Journal Letters.

11 Totally Normal Things That Science Can’t Explain

Science is amazing, is it not? It can tell us the size of planets light years away. It can explain the eating habits of giant dinosaurs that have been extinct for millions of years. Science can even tell us all about particles that are far too small to see with the human eye.

But there are a lot of things — many everyday things, in fact — that science cannot explain.

How do magnets work? Why does watching someone yawn make you have to yawn? Why do dogs poop the way they do? These are the questions that scientists can’t quite answer…yet.

Why Does Lightning Happen?

Some 44,000 thunderstorms rage worldwide each day, delivering as many as 100 lightning bolts to the ground every second. That’s a lot of lightning. So much, in fact, that one would be forgiven for assuming that scientists understand why lightning happens — but they don’t.

For all we know, lightning might as well come from Zeus. Counting Ben Franklin’s kite-and-key experiment as the starting point, 250 years of scientific investigation have yet to get to grips with how lightning works.

Atmospheric scientists have a basic sketch of the process. Positive electric charges build up at the tops of thunderclouds and negative charges build up at the bottoms (except for perplexing patches of positive charges often detected in the center-bottom). Electrical attraction between these opposite charges, and between the negative charges at the bottom of the cloud and positive charges that accumulate on the ground below, eventually grows strong enough to overcome the air’s resistance to electrical flow.

Like a herd of elephants wading across a river, negative charges venture down from the bottom of the cloud into the sky below and move haltingly toward the ground, forming an invisible, conductive path called a “step leader.” The charges’ path eventually connects to similar “streamers” of positive charges surging up from the ground, completing an electrical circuit and enabling negative charges to pour from the cloud to the ground along the circuit they have formed. This sudden, enormous electric discharge is the flash of lightning.

But as for how all that happens — well, it just doesn’t make much physical sense. There are three big questions needing answers, said Joe Dwyer, a leading lightning physicist based at the Florida Institute of Technology. “First, how do you actually charge up a thundercloud?” Dwyer said. A mix of water and ice is needed to provide atoms that can acquire charge, and updrafts are required to move the charged particles around. The rest of the details are hazy.

The second point of confusion is called the “lightning initiation problem.” So the question is, “How do you get a spark going inside a thunderstorm? The electric fields never seem to be big enough inside the storm to generate a spark. So how does that spark get going? This is a very active area of research,” Dwyer said.

And once the spark gets going, the final question is how it keeps going. “After you get it started, how does lightning propagate for tens of miles through clouds?” Dwyer said. “That’s an amazing thing — how do you turn air from being an insulator into a conductor?”

How Do Magnets Work?

Sure, they’re run-of-the-mill household items, but that doesn’t mean magnets are easy to understand. While physicists have some understanding of how magnets function, the phenomena that underlie magnetism continue to elude scientific explanation.

Large-scale magnetism, like the kind observed in bar magnets, results from magnetic fields that naturally radiate from the electrically charged particles that make up atoms, said Jearl Walker, a physics professor at Cleveland State University and coauthor of “Fundamentals of Physics” (Wiley, 2007). The most common magnetic fields come from negatively charged particles called electrons.

Normally, in any sample of matter, the magnetic fields of electrons point in different directions, canceling each other out. But when the fields all align in the same direction, like in magnetic metals, an object generates a net magnetic field, Walker told Live Science in 2010.

Every electron generates a magnetic field, but they only generate a net magnetic field when they all line up. Otherwise, the electrons in the human body would cause everyone to stick to the refrigerator whenever they walked by, Walker said.

Currently, physics has two explanations for why magnetic fields align in the same direction: a large-scale theory from classical physics, and a small-scale theory called quantum mechanics.

According to the classical theory, magnetic fields are clouds of energy around magnetic particles that pull in or push away other magnetic objects. But in the quantum mechanics view, electrons emit undetectable, virtual particles that tell other objects to move away or come closer, Walker said.

Although these two theories help scientists understand how magnets behave in almost every circumstance, two important aspects of magnetism remain unexplained: why magnets always have a north and south pole, and why particles emit magnetic fields in the first place.

“We just observe that when you make a charged particle move, it creates a magnetic field and two poles. We don’t really know why. It’s just a feature of the universe, and the mathematical explanations are just attempts of getting through the ‘homework assignment’ of nature and getting the answers,” Walker said.

Why Do Dogs Face North or South to Poop?

Did you know that dogs prefer to poop while aligned with the north-south axis of the Earth’s magnetic field? Because they totally do, but scientists can’t really explain why.

Research conducted in 2014 found that dogs preferred to poop when their bodies were aligned in a north-south direction, as determined by the geomagnetic field. (True north, which is determined by the position of the poles, is slightly different from magnetic north.)

And while dogs of both sexes faced north or south while defecating, only females preferred to urinate in a north or south direction — males didn’t show much preference while urinating.

This odd finding joins a long and growing list of research showing that animals — both wild and domesticated — can sense the Earth’s geomagnetic field and coordinate their behavior with it.

A 2008 analysis of Google Earth satellite images revealed that herds of cattle worldwide tend to stand in the north-south direction of Earth’s magnetic lines when grazing, regardless of wind direction or time of day. The same behavior was seen in two different species of deer.

Birds also use magnetic fields to migrate thousands of miles, some research suggests. A 2013 report found that pigeons are equipped with microscopic balls of iron in their inner ears, which may account for the animals’ sensitivity to the geomagnetic field.

Humans, too, might possess a similar ability — a protein in the human retina may help people sense magnetic fields, though the research into this and many other related geomagnetic phenomena is preliminary and therefore remains inconclusive.

But why do animals of all shapes and sizes seem to be ruled by Earth’s geomagnetic field? The answer remains elusive, the scientists admitted.

“It is still enigmatic why the dogs do align at all, whether they do it ‘consciously’ (i.e., whether the magnetic field is sensorial[ly] perceived) … or whether its reception is controlled on the vegetative level (they ‘feel better/more comfortable or worse/less comfortable’ in a certain direction),” the study authors wrote.

The researchers also found that when the Earth’s magnetic field was in a state of flux — it changes during solar flares, geomagnetic storms and other events — the dogs’ north-south orientation was less predictable. Only when the magnetic field was calm did researchers reliably observe the north-south orientation.

Further research is needed to determine how and why dogs and other animals sense and use the planet’s magnetic field every single day.

What Causes Gravity? 

You know gravity? That invisible force holding you (and every person and object around you) to the Earth? Well, you might learn all about gravity in a science classroom, but scientists still aren’t sure what causes it.

In the deepest depths of space, gravity tugs on matter to form galaxies, stars, black holes and the like. In spite of its infinite reach, however, gravity is the wimpiest of all forces in the universe.

This weakness also makes it the most mysterious, as scientists can’t measure it in the laboratory as easily as they can detect its effects on planets and stars. The repulsion between two positively charged protons, for example, is 10^36 times stronger than gravity’s pull between them—that’s 1 followed by 36 zeros less macho.
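
That number isn’t hand-waving; it falls straight out of the two textbook force laws. As a quick worked check (using rounded SI-unit constants, not a figure from any particular study), for two protons a distance r apart the ratio of electric repulsion to gravitational attraction is, since the r² cancels,

\[ \frac{F_{\mathrm{electric}}}{F_{\mathrm{gravity}}} = \frac{k_e e^2}{G m_p^2} = \frac{(8.99\times10^{9})\,(1.60\times10^{-19})^2}{(6.67\times10^{-11})\,(1.67\times10^{-27})^2} \approx 1.2\times10^{36}. \]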

Physicists want to squeeze little old gravity into the standard model—the crown-jewel theory of modern physics that explains the three other fundamental forces—but no one has succeeded. Like a runt at a pool party, gravity just doesn’t fit in when using Einstein’s theory of relativity, which explains gravity only on large scales.

“Gravity is completely different from the other forces described by the standard model,” said Mark Jackson, a theoretical physicist at Fermilab in Illinois. “When you do some calculations about small gravitational interactions, you get stupid answers. The math simply doesn’t work.”

The numbers may not jibe, but physicists have a hunch about gravity’s unseen gremlins: Tiny, massless particles called gravitons that emanate gravitational fields.

Each hypothetical bit tugs on every piece of matter in the universe, as fast as the speed of light permits. Yet if they are so common in the universe, why haven’t physicists found them?

“We can detect massless particles such as photons just fine, but gravitons elude us because they interact so weakly with matter,” said Michael Turner, a cosmologist at the University of Chicago. “We simply don’t know how to detect one.”

Turner, however, isn’t despondent about humanity’s quest for gravitons. He thinks we’ll eventually ensnare a few of the pesky particles hiding in the shadows of more easily detected particles.

“What it really comes down to is technology,” Turner said.

Why Do Cats Purr?

From house cats to cheetahs, most felid species produce a “purr-like” vocalization, according to University of California, Davis, veterinary professor Leslie Lyons. Domestic cats purr in a range of situations — while they nurse their kittens, when they are pet by humans, and even when they’re stressed out. Yes, you read right: Cats purr both when they’re happy and when they’re miserable. That has made figuring out the function of purring an uphill struggle for scientists.

One possibility is that it promotes bone growth, Lyons explained in Scientific American. Purring contains sound frequencies within the 25- to 150-Hertz range, and sounds in this range have been shown to improve bone density and promote healing. Because cats conserve energy by sleeping for long periods of time, purring may be a low-energy mechanism to keep muscles and bones healthy without actually using them.

Of course, cats purr even when they aren’t injured. Many domestic cats purr to indicate hunger, for example. A recent study out of the U.K. shows that some cats have even developed a special purr to ask their owners for food. This “solicitous purr” incorporates cries with similar frequencies as those of human babies. These conniving kitties have tapped into their owners’ psyches — all for more kibble.

However, this study doesn’t explain why cats purr in all of the situations they do. And scientists aren’t likely to find out more answers until cats learn to speak human…

How Does the Brain Work?

With billions of neurons, each with thousands of connections, the human brain is a complex, and yes congested, mental freeway. Neurologists and cognitive scientists nowadays are probing how the mind gives rise to thoughts, actions, emotions and ultimately consciousness, but they still don’t have all the answers.

The complex machine is difficult for even the brainiest of scientists to wrap their heads around. What makes the brain such a tough nut to crack?

According to Scott Huettel of the Center for Cognitive Neuroscience at Duke University, the standard answer to this question goes something like: “The human brain is the most complex object in the known universe … complexity makes simple models impractical and accurate models impossible to comprehend.”

While that stock answer is correct, Huettel said, it’s incomplete. The real snag in brain science is one of navel gazing. Huettel and other neuroscientists can’t step outside of their own brains (and experiences) when studying the brain itself.

“A more pernicious factor is that we all think we understand the brain—at least our own—through our experiences. But our own subjective experience is a very poor guide to how the brain works,” Huettel told Live Science in 2007.

Scientists have made some progress in taking an objective, direct “look” at the human brain.

In recent years, brain-imaging techniques, such as functional magnetic resonance imaging (fMRI) have allowed scientists to observe the brain in action and determine how groups of neurons function.

They have pinpointed hubs in the brain that are responsible for certain tasks, such as fleeing a dangerous situation, processing visual information, making those sweet dreams and storing long-term memories. But understanding the mechanics of how neuronal networks collaborate to allow such tasks has remained more elusive.

The prized puzzle in brain research is arguably the idea of consciousness. When you look at a painting, for instance, you are aware of it and your mind processes its colors and shapes. At the same time, the visual impression could stir up emotions and thoughts. This subjective awareness and perception is consciousness.

Many scientists consider consciousness the delineation between humans and other animals.

So rather than cognitive processes directly leading to behaviors (unbeknownst to us), we are aware of the thinking. We even know that we know!

If this mind bender is ever solved, an equally perplexing question would arise, according to neuroscientists: Why? Why does awareness exist at all?

How Do Bicycles Work?

The brain is a super complicated organ, so it kind of makes sense that scientists haven’t yet learned all its secrets. But surely those same scientists have figured out something as simple as a bicycle, right? Wrong: The brainiacs of the world still aren’t sure how bicycles work.

Bikes can stay upright all by themselves, as long as they’re moving forward; it’s because any time a moving bike starts to lean, its steering axis (the pole attached to the handlebars) turns the other way, tilting the bike upright again. This restorative effect was long believed to result from a law of physics called the conservation of angular momentum: When the bike wobbles, the axis perpendicular to its wheels’ spinning direction threatens to change, and the bike self-corrects in order to “conserve” the direction of that axis. In other words, the bike is a gyroscope. Additionally, the “trail effect” was thought to help keep bikes stable: Because the steering axis hits the ground slightly in front of the ground contact point of the front wheel, the wheel is forced to trail the steering of the handlebars.

But recently, a group of engineers led by Andy Ruina of Cornell University upturned this theory of bicycle locomotion. Their investigation, detailed in a 2011 article in the journal Science, showed that neither gyroscopic nor trail effects were necessary for a bike to work. To prove it, the engineers built a custom bicycle, which could take advantage of neither effect. The bike was designed so that each of its wheels rotated a second wheel above it in the opposite direction. That way, the spinning of the wheels canceled out and the bike’s total angular momentum was zero, erasing the influence of gyroscopic effects on the bike’s stability. The custom bike’s ground contact point was also positioned in front of its steering axis, destroying the trail effect. And yet, the bike worked.

The engineers know why: they added masses to the bike in choice places to enable gravity to cause the bike to self-steer. But the work showed there are many effects that go into the stability of bicycles — including gyroscopic and trail effects in the case of bikes that have them — that interact in extremely complex ways.

“The complex interactions have not been worked out. My suspicion is that we will never come to grips with them, but I don’t know that for sure,” Ruina told Live Science.

Why Are Moths Drawn to Light?

“Look! That moth just flew straight into that light bulb and died!” said no one ever. We see it happen so often that it’s more likely to invoke yawns than discussion. But, surprisingly, the reason for these insects’ suicidal nosedives remains a total mystery. Science’s best guesses about why they do it aren’t even very good.

Some entomologists believe moths zoom toward artificial light sources because the lights throw off their internal navigation systems. In a behavior called transverse orientation, some insects navigate by flying at a constant angle relative to a distant light source, such as the moon. But around man-made lights, such as a campfire or your porch light, the angle to the light source changes as a moth flies by. Jerry Powell, an entomologist at the University of California, Berkeley, said the thinking is that moths “become dazzled by the light and are somehow attracted.”

But this theory runs into two major stumbling blocks, Powell explained: First, campfires have been around for about 400,000 years. Wouldn’t natural selection have killed off moths whose instinct tells them to go kamikaze every time they feel blinded by the light? Secondly, moths may not even use transverse navigation; more than half of the species don’t even migrate.

Alternate theories are riddled with holes, too. For example, one holds that male moths are attracted to infrared light because it contains a few of the same light frequencies given off by female moths’ pheromones, or sex hormones, which glow very faintly. In short, male moths could be drawn to candles under the false belief that the lights are females sending out sex signals.  However, Powell points out that moths are more attracted to ultraviolet light than infrared light, and UV doesn’t look a bit like glowing pheromones.

Moth deaths: not as yawn-inducing as you might think.

Why Are There Lefties (& Righties)?

One-tenth of people have better motor dexterity using their left limbs than their right. No one knows why these lefties exist. And no one knows why righties exist either, for that matter. Why do people have just one hand with top-notch motor skills, instead of a double dose of dexterity?

One theory holds that handedness results from having more intricate wiring on the side of the brain involved in speech (which also requires fine motor skills). Because the speech center usually sits in the brain’s left hemisphere — the side wired to the right side of the body — the right hand ends up dominant in most people. As for why the speech center usually (but not always) ends up in the left side of the brain, that’s still an open question.

The theory about the speech center controlling handedness gets a big blow from the fact that not all right-handed people control speech in the left hemisphere, while only half of lefties do. So, what explains those lefties whose speech centers reside in the left sides of their brains? It’s all very perplexing.

Research published in 2013 suggests that genes that play a role in the orientation of internal organs may also affect whether someone is right- or left-handed.

The study, published Sept. 12, 2013, in the journal PLOS Genetics, suggests those genes may also play a role in the brain, thereby affecting people’s handedness.

Still, the findings can’t yet explain the mystery of why a minority of people are left-handed because each gene only plays a tiny role in people’s handedness.

Are Yawns Contagious?

In 2012, Austrian researchers won an Ig Nobel Prize for their discovery that yawns are not contagious among red-footed tortoises.

We know so much about tortoises, but human yawning? Still an enigma. The sight of a person’s gaping jaws, squinting eyes and deep inhalation “hijacks your body and induces you to replicate the observed behavior,” writes the University of Maryland, Baltimore County, psychologist Robert Provine in his new book, “Curious Behavior” (Belknap Press, 2012). But why?

Preliminary brain-scan data indicate that regions of the brain associated with theory of mind (the ability to attribute mental states and feelings to oneself and others) and self-processing become active when people observe other people yawning. Many autistic and schizophrenic people do not exhibit this brain activity, and they do not “catch” yawns. These clues suggest contagious yawning reflects an ability to empathize and form normal emotional ties with others, Provine explained.

But why should our social connections with one another circulate through yawning, as opposed to hiccupping or passing gas? No one knows for sure, and that’s because no one knows quite why we yawn. Embryos do it to sculpt the hinge of their jaws. Fully formed people do it when we’re sleepy and bored. But how does yawning ameliorate these complaints?

What Causes Static Electricity?

Static shocks are as mysterious as they are unpleasant. What we know is this: They occur when an excess of either positive or negative charge builds up on the surface of your body, discharging when you touch something and leaving you neutralized. Alternatively, they can occur when static electricity builds up on something else — a doorknob, say — which you then touch. In that case, you are the excess charge’s exit route.

But why all the buildup? It’s unclear. The traditional explanation says that when two objects rub together, friction knocks the electrons off the atoms in one of the objects, and these then move onto the second, leaving the first object with an excess of positively charged atoms and giving the second an excess of negative electrons. Both objects (your hair and a wool hat, say) will then be statically charged. But why do electrons flow from one object to the other, instead of moving in both directions?

This has never been satisfactorily explained, and a study by Northwestern University researcher Bartosz Grzybowski found reason to doubt the whole story. As detailed last year in the journal Science, Grzybowski found that patches of both excess positive and excess negative charge exist on statically charged objects. He also found that entire molecules seemed to migrate between objects as they are rubbed together, not just electrons. What generates this mosaic of charges and migration of material has yet to be determined, but clearly, the explanation of static is changing.

Mathematicians shocked to find pattern in ‘random’ prime numbers

Mathematicians are stunned by the discovery that prime numbers are pickier than previously thought. The find suggests number theorists need to be a little more careful when exploring the vast infinity of primes.

Primes, the numbers divisible only by themselves and 1, are the building blocks from which the rest of the number line is constructed, as all other numbers are created by multiplying primes together. That makes deciphering their mysteries key to understanding the fundamentals of arithmetic.

Although whether a number is prime or not is pre-determined, mathematicians don’t have a way to predict which numbers are prime, and so tend to treat them as if they occur randomly. Now Kannan Soundararajan and Robert Lemke Oliver of Stanford University in California have discovered that isn’t quite right.

“It was very weird,” says Soundararajan. “It’s like some painting you are very familiar with, and then suddenly you realise there is a figure in the painting you’ve never seen before.”

Surprising order

So just what has got mathematicians spooked? Apart from 2 and 5, all prime numbers end in 1, 3, 7 or 9 – they have to, else they would be divisible by 2 or 5 – and each of the four endings is equally likely. But while searching through the primes, the pair noticed that primes ending in 1 were less likely to be followed by another prime ending in 1. That shouldn’t happen if the primes were truly random –  consecutive primes shouldn’t care about their neighbour’s digits.

“In ignorance, we thought things would be roughly equal,” says Andrew Granville of the University of Montreal, Canada. “One certainly believed that in a question like this we had a very strong understanding of what was going on.”

The pair found that in the first hundred million primes, a prime ending in 1 is followed by another ending in 1 just 18.5 per cent of the time. If the primes were distributed randomly, you’d expect to see two 1s next to each other 25 per cent of the time. Primes ending in 3 and 7 take up the slack, each following a 1 in 30 per cent of primes, while a 9 follows a 1 in around 22 per cent of occurrences.

Similar patterns showed up for the other combinations of endings, all deviating from the expected random values. The pair also found them in other bases, where numbers are counted in units other than 10s. That means the patterns aren’t a result of our base-10 numbering system, but something inherent to the primes themselves. The patterns become more in line with randomness as you count higher – the pair have checked up to a few trillion – but still persist.
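
The bias is easy to check for yourself. Below is a minimal Python sketch (our own illustration, not the researchers’ code) that sieves the primes below ten million and tallies how often a prime ending in 1 is followed by a prime ending in 1, 3, 7 or 9. The exact percentages depend on how far you count, but the shortfall for the 1-then-1 case shows up immediately.

from collections import Counter

def primes_up_to(n):
    """Simple sieve of Eratosthenes: return a list of all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"                  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def last_digit_bias(limit):
    """Tally last-digit transitions between consecutive primes greater than 5."""
    primes = [p for p in primes_up_to(limit) if p > 5]      # skip 2, 3 and 5
    pairs = Counter((a % 10, b % 10) for a, b in zip(primes, primes[1:]))
    from_one = sum(count for (first, _), count in pairs.items() if first == 1)
    for nxt in (1, 3, 7, 9):
        print(f"prime ending in 1 followed by {nxt}: {pairs[(1, nxt)] / from_one:.1%}")

last_digit_bias(10_000_000)   # with no bias, each line would read about 25%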

“I was very surprised,” says James Maynard of the University of Oxford, UK, who on hearing of the work immediately performed his own calculations to check the pattern was there. “I somehow needed to see it for myself to really believe it.”

Stretching to infinity

Thankfully, Soundararajan and Lemke Oliver think they have an explanation. Much of the modern research into primes is underpinned by the work of G H Hardy and John Littlewood, two mathematicians who worked together at the University of Cambridge in the early 20th century. They came up with a way to estimate how often pairs, triples and larger groupings of primes will appear, known as the k-tuple conjecture.

Just as Einstein’s theory of relativity is an advance on Newton’s theory of gravity, the Hardy-Littlewood conjecture is essentially a more complicated version of the assumption that primes are random – and this latest find demonstrates how the two assumptions differ. “Mathematicians go around assuming primes are random, and 99 per cent of the time this is correct, but you need to remember the 1 per cent of the time it isn’t,” says Maynard.

The pair used Hardy and Littlewood’s work to show that the groupings given by the conjecture are responsible for introducing this last-digit pattern, as they place restrictions on where the last digit of each prime can fall. What’s more, as the primes stretch to infinity, they do eventually shake off the pattern and give the random distribution mathematicians are used to expecting.

“Our initial thought was if there was an explanation to be found, we have to find it using the k-tuple conjecture,” says Soundararajan. “We felt that we would be able to understand it, but it was a real puzzle to figure out.”

The k-tuple conjecture is yet to be proven, but mathematicians strongly suspect it is correct because it is so useful in predicting the behaviour of the primes. “It is the most accurate conjecture we have, it passes every single test with flying colours,” says Maynard. “If anything I view this result as even more confirmation of the k-tuple conjecture.”

Although the new result won’t have any immediate applications to long-standing problems about primes like the twin-prime conjecture or the Riemann hypothesis, it has given the field a bit of a shake-up. “It gives us more of an understanding, every little bit helps,” says Granville. “If what you take for granted is wrong, that makes you rethink some other things you know.”

Journal reference: arxiv.org/abs/1603.03720

Mathematicians have discovered a strange pattern hiding in prime numbers

They’re not as random as we thought.

FIONA MACDONALD
15 MAR 2016

Mathematicians are pretty obsessed with prime numbers – those elusive integers that can only be divided by one and themselves. If they’re not creating cool artworks with them or finding them in nature, they’re using computers to discover increasingly larger primes.

But now a group of researchers has found a strange property of primes that’s never been seen before, and it violates one of the fundamental assumptions about how they behave – the idea that, for the most part, they occur totally randomly across integers.

The pattern isn’t actually found within the primes themselves, but rather the final digit of the prime number that comes directly after them – which the mathematicians have shown isn’t as random as you’d expect, and that’s a pretty big deal for mathematicians.

“We’ve been studying primes for a long time, and no one spotted this before,” Andrew Granville, a number theorist at the University of Montreal who wasn’t involved in the study, told Quanta magazine. “It’s crazy.”

So what are we talking about here? Our current understanding of primes suggests that, over a big enough sample, they should occur randomly, and shouldn’t be influenced by the prime number that comes before or after them.

But that’s not what Kannan Soundararajan and Robert Lemke Oliver from Stanford University in California found.

They performed a randomness check on the first 100 million primes and found that a prime ending in 1 was followed by another prime ending in 1 only 18.5 percent of the time – a far cry from the 25 percent you’d expect given that primes greater than five can only end in one of four digits: 1, 3, 7, or 9.

Furthermore, the chance of a prime ending in 1 being followed by a prime ending in 3 or 7 was roughly 30 percent, but for 9 it was only 22 percent.

In other words, the primes “really hate to repeat themselves”, said Lemke Oliver.

The obvious explanation for this is the fact that numbers have to cycle through all the other digits before they get back to the same ending. “For example, 43 is followed by 47, 49, and 51 before it hits 53, and one of those numbers, 47, is prime,” writes Jacob Aron for New Scientist.

But this doesn’t explain the magnitude of the bias the team found, or why primes ending in 3 seemed to like being followed by primes ending in 9 more than 1 or 7. Even when they expanded their sample and examined the first few trillion prime numbers, the mathematicians found that – even though the bias gradually falls more in line with randomness – it still persists.

“I was very surprised,” James Maynard from the University of Oxford told New Scientist. “I somehow needed to see it for myself to really believe it,” he says, admitting that he ran back to his office and performed the calculations himself after hearing about the work.

So what’s going on?

According to Soundararajan and Lemke Oliver, the pattern can be explained by something called the k-tuple conjecture – an old but unproven idea that describes how often pairs, triples, and larger sets of primes will make an appearance, and how close together these should occur.

Essentially, the k-tuple conjecture proposes that groups of primes don’t appear all that randomly, and Soundararajan and Lemke Oliver showed that this prediction could accurately explain the last-digit pattern they found.

Maynard agrees with this outcome, which has been published on the pre-print site arXiv.org, and hopes that it’ll be further evidence that primes do contain patterns, even if we can’t always see them.

“Mathematicians go around assuming primes are random, and 99 percent of the time this is correct, but you need to remember the 1 percent of the time it isn’t,” said Maynard. “If anything, I view this result as even more confirmation of the k-tuple conjecture.”

Despite the fact that it’s pretty exciting work, the newly spotted pattern doesn’t really provide many practical answers for number theorists – for example, there’s still the twin-prime conjecture and the Riemann hypothesis that need to be resolved.

The study also hasn’t been peer reviewed as yet, so we need to take it with a grain of salt, but it’s been placed on ArXiv so that other mathematicians can look over the work and add their own ideas and suggestions.

According to Granville, the discovery takes us one step closer to properly understanding the enigmatic primes. “Every little bit helps … I can’t believe anyone in the world would have guessed this,” he told New Scientist. “You could wonder, what else have we missed about the primes?”

This physicist says consciousness could be a new state of matter

‘Perceptronium’.

BEC CREW
16 SEP 2016

Consciousness isn’t something scientists like to talk about much. You can’t see it, you can’t touch it, and despite the best efforts of certain researchers, you can’t quantify it. And in science, if you can’t measure something, you’re going to have a tough time explaining it.

But consciousness exists, and it’s one of the most fundamental aspects of what makes us human. And just like dark matter and dark energy have been used to fill some otherwise gaping holes in the standard model of physics, researchers have also proposed that it’s possible to consider consciousness as a new state of matter.

To be clear, this is just a hypothesis, and one to be taken with a huge grain of salt, because we’re squarely in the realm of the hypothetical here, and there’s plenty of room for holes to be poked.

But it’s part of a quietly bubbling movement within theoretical physics and neuroscience to try and attach certain basic principles to consciousness in order to make it more observable.

The hypothesis was first put forward in 2014 by cosmologist and theoretical physicist Max Tegmark from MIT, who proposed that there’s a state of matter – just like a solid, liquid, or gas – in which atoms are arranged to process information and give rise to subjectivity, and ultimately, consciousness.

The name of this proposed state of matter? Perceptronium, of course.

As Tegmark explains in his pre-print paper:

“Generations of physicists and chemists have studied what happens when you group together vast numbers of atoms, finding that their collective behaviour depends on the pattern in which they are arranged: the key difference between a solid, a liquid, and a gas lies not in the types of atoms, but in their arrangement.

In this paper, I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness.

However, this should not preclude us from identifying, quantifying, modelling, and ultimately understanding the characteristic properties that all liquid forms of matter (or all conscious forms of matter) share.”

In other words, Tegmark isn’t suggesting that there are physical clumps of perceptronium sitting somewhere in your brain and coursing through your veins to impart a sense of self-awareness.

Rather, he proposes that consciousness can be interpreted as a mathematical pattern – the result of a particular set of mathematical conditions.

Just as there are certain conditions under which various states of matter – such as steam, water, and ice – can arise, so too can various forms of consciousness, he argues.

Figuring out what it takes to produce these various states of consciousness according to observable and measurable conditions could help us get a grip on what it actually is, and what that means for a human, a monkey, a flea, or a supercomputer.

The idea was inspired by the work of neuroscientist Giulio Tononi from the University of Wisconsin in Madison, who proposed in 2008 that if you wanted to prove that something had consciousness, you had to demonstrate two specific traits.

According to his integrated information theory (IIT), the first of these traits is that a conscious being must be capable of storing, processing, and recalling large amounts of information.

“And second,” explains the arXiv.org blog, “this information must be integrated in a unified whole, so that it is impossible to divide into independent parts.”

This means that consciousness has to be taken as a whole, and cannot be broken down into separate components. A conscious being or system has to not only be able to store and process information, but it must do so in a way that forms a complete, indivisible whole, Tononi argued.

If it occurred to you that a supercomputer could potentially have these traits, that’s sort of what Tononi was getting at.

As George Johnson writes for The New York Times, Tononi’s hypothesis predicted – with a whole lot of maths – that “devices as simple as a thermostat or a photoelectric diode might have glimmers of consciousness – a subjective self”.

In Tononi’s calculations, those “glimmers of consciousness” do not necessarily equal a conscious system, and he even came up with a unit, called phi or Φ, which he said could be used to measure how conscious a particular entity is.

Six years later, Tegmark proposed that there are two types of matter that could be considered according to the integrated information theory.

The first is ‘computronium’, which meets the requirements of the first trait of being able to store, process, and recall large amounts of information. And the second is ‘perceptronium’, which does all of the above, but in a way that forms the indivisible whole Tononi described.

In his 2014 paper, Tegmark explores what he identifies as the five basic principles that could be used to distinguish conscious matter from other physical systems such as solids, liquids, and gases – “the information, integration, independence, dynamics, and utility principles”.

He then spends 30 pages or so trying to explain how his new way of thinking about consciousness could explain the unique human perspective on the Universe.

As the arXiv.org blog explains, “When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?”

It’s an incomplete thought, because Tegmark doesn’t have a solution. And as you might have guessed, it’s not something that his peers have been eager to take up and run with. Tegmark himself might have even hit a brick wall with it, because he’s never managed to take it beyond his pre-print, non-peer-reviewed paper.

That’s the problem with something like consciousness – if you can’t measure your attempts to measure it, how can you be sure you’ve measured it at all?

¯\_(ツ)_/¯

More recently, scientists have attempted to explain how human consciousness could be transferred into an artificial body – seriously, there’s a start-up that wants to do this – and one group of Swiss physicists have suggested consciousness occurs in ‘time slices’ that are hundreds of milliseconds apart.

As Matthew Davidson, who studies the neuroscience of consciousness at Monash University in Australia, explains over at The Conversation, we still don’t know much about what consciousness actually is, but it’s looking more and more likely that it’s something we need to consider outside the realm of humans.

“If consciousness is indeed an emergent feature of a highly integrated network, as IIT suggests, then probably all complex systems – certainly all creatures with brains – have some minimal form of consciousness,” he says.

“By extension, if consciousness is defined by the amount of integrated information in a system, then we may also need to move away from any form of human exceptionalism that says consciousness is exclusive to us.”

Tegmark has also given a TED talk on consciousness as a mathematical pattern.

Dark Matter May Be Made of Primordial Black Holes

This image shows the infrared background, or the infrared light not associated with known sources. It may be left over from the universe’s first luminous objects, including stars.

Credit: NASA/JPL-Caltech/A. Kashlinsky (Goddard)

Could dark matter — the elusive substance that composes most of the material universe — be made of black holes? Some astronomers are beginning to think this tantalizing possibility is more and more likely.

Alexander Kashlinsky, an astronomer at the NASA Goddard Space Flight Center in Maryland, thinks that black holes that formed soon after the Big Bang can perfectly explain the observations of gravitational waves, or ripples in space-time, made by the Laser Interferometer Gravitational-Wave Observatory (LIGO) last year, as well as previous observations of the early universe.

If Kashlinsky is correct, then dark matter might be composed of these primordial black holes, all galaxies might be embedded within a vast sphere of black holes, and the early universe might have evolved differently than scientists had thought.

In 2005, Kashlinsky and his colleagues used NASA’s Spitzer Space Telescope to explore the background glow of infrared light found in the universe. Because light from cosmic objects takes a finite amount of time to travel through space, astronomers on Earth see distant objects the way those objects looked in the past. Kashlinsky and his group wanted to look toward the early universe, beyond where telescopes can pick up individual galaxies.

“Suppose you look at New York [City] from afar,” Kashlinsky told Space.com. “You cannot see individual lampposts or buildings, but you can see this cumulative diffuse light that they produce.”

When the researchers removed all of the light from the known galaxies throughout the universe, they could still detect excess light — the background glow from the first sources to illuminate the universe more than 13 billion years ago.

Then, in 2013, Kashlinsky and his colleagues used NASA’s Chandra X-ray Observatory to explore the background glow in a different part of the electromagnetic spectrum: X-rays. To their surprise, the patterns within the infrared background perfectly matched the patterns within the X-ray background.

“And the only sources that would be able to produce this in both infrared and X-rays are black holes,” Kashlinsky said. “It never crossed my mind at that time that these could be primordial black holes.”

Then, there was the LIGO detection. On Sept. 14, 2015, the observatory made the first-ever direct detection of gravitational waves — cosmic ripples in the fabric of space-time itself — that had been produced by a pair of colliding black holes. It marked the beginning of a new era of discovery — one in which astronomers could collect these unique signals created by powerful astronomical events and, for the first time, directly detect black holes (as opposed to seeing the illuminated material around black holes).

But Simeon Bird, an astronomer at Johns Hopkins University, speculated that the discovery could be even more significant. Bird suggested that the two black holes detected by LIGO could be primordial.

An image of the sky in infrared light, taken by NASA’s Spitzer Space Telescope. The image shows the same patch of sky as seen in the image above, but without the known infrared sources removed.

Credit: NASA/JPL-Caltech/A. Kashlinsky (Goddard)

Primordial black holes aren’t formed from the collapse of a dead star (the more commonly known mechanism for black hole formation, which takes place relatively late in the universe’s history). Instead, primordial black holes formed soon after the Big Bang, when sound waves radiated throughout the universe. Areas where those sound waves were densest could have collapsed to form black holes.

If that thought makes your head spin a little, just think about spinning pizza dough into a disc. “After a while, you will notice it has these holes in the texture of the pizza dough,” Kashlinsky said. “It’s the same with space-time,” except those holes are primordial black holes.

For now, these primordial black holes remain hypothetical. But Kashlinsky, impressed by Bird’s suggestion, took the hypothesis a step further. In his new paper, published May 24 in The Astrophysical Journal Letters, Kashlinsky looked at the consequences that these primordial black holes would have had on the evolution of the cosmos. (Bird is not the first scientist to suggest that dark matter might be made of black holes, although not all of those ideas involve primordial black holes.)

For the first 500 million years of the universe’s history, dark matter collapsed into clumps called halos, which provided the gravitational seeds that would later enable matter to accumulate and form the first stars and galaxies, Kashlinsky said. But if that dark matter was composed of primordial black holes, this process would have created far more halos.

Kashlinsky thinks this process could explain both the excess cosmic infrared background and the excess cosmic X-ray background that he and his colleagues observed several years ago.

The infrared glow would come from the earliest stars that formed within the halos. Although those stars radiated optical and ultraviolet light, the expansion of the universe has stretched that light, so to astronomers on Earth the first stars appear to give off infrared light. Even without the extra halos, early stars could generate an infrared glow, but not to the extent that Kashlinsky and his colleagues observed, he said.
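
To get a feel for that stretching, here is a minimal back-of-envelope sketch in Python; the redshift of z = 10 assumed for the first stars is an illustrative value, not a figure from Kashlinsky’s paper:

    # Toy redshift calculation: observed wavelength = emitted wavelength * (1 + z).
    # z = 10 is an assumed, illustrative redshift for the first stars.
    z = 10.0

    emitted_nm = {
        "ultraviolet (Lyman-alpha)": 121.6,  # emitted wavelength in nanometres
        "visible (green)": 550.0,
    }

    for band, wavelength in emitted_nm.items():
        observed = wavelength * (1 + z)
        print(f"{band}: emitted {wavelength:.0f} nm -> observed {observed:.0f} nm")

    # 121.6 nm stretches to about 1,340 nm and 550 nm to about 6,050 nm,
    # both well into the infrared part of the spectrum.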

The gas that created those stars would also have fallen onto the primordial black holes, heating up to temperatures high enough to emit X-rays. While the cosmic infrared background can be explained — albeit to a lesser extent — without the addition of primordial black holes, the cosmic X-ray background cannot. The primordial black holes tie the two observations together.

“Everything fits together remarkably well,” Kashlinsky said.

Occasionally, those primordial black holes would have come close enough to start orbiting each other (what’s known as a binary system). Over time, those two black holes would spiral together and radiate gravitational waves, potentially like the ones detected by LIGO. But more observations of black holes are needed to determine if these objects are primordial, or formed later in the universe’s history.

2 billion years ago, nature built the first nuclear reactor — here’s how it worked

In 1942, physicist Enrico Fermi and a team of workers built what they thought was the first nuclear reactor, on a converted racquets court at the University of Chicago.

Unfortunately, nature had beaten them to the punch — by eons.

In Gabon, Africa, a spontaneous nuclear chain reaction started in a collection of uranium deposits about 2 billion years ago and kept going for hundreds of thousands of years.

A piece of uranium ore.

According to Scientific American, these uranium deposits first caught the attention of nuclear scientists in 1972. That’s when a worker analyzing ore from the Oklo deposits noticed something strange about his samples: the concentration of isotopes (varieties of the same element with different neutron counts) was off.

The Oklo samples contained about 0.003 percentage points less uranium-235, the most precious of the three isotopes present in natural uranium deposits, than ore found anywhere else on Earth.

That may sound like a tiny difference, but based on the size of the ore seam, it added up to 441 pounds of missing U-235. Since the isotope is rare yet used in both power plants and nuclear weapons, it’s by far the most valuable version of uranium, and scientists wanted to know where it had gone.
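
As a rough sanity check on those figures, here is a short Python sketch; the total uranium tonnage in the seam is an assumption chosen purely for illustration, not a number from the article:

    # Back-of-envelope check: how a ~0.003-percentage-point deficit adds up to ~441 lb.
    # The uranium tonnage is an illustrative assumption, not a measured value.
    natural_u235_fraction = 0.00720   # ~0.720% U-235 in natural uranium
    oklo_u235_fraction = 0.00717      # ~0.003 percentage points lower at Oklo
    uranium_in_seam_kg = 6.7e6        # assume ~6,700 tonnes of uranium in the ore seam

    missing_kg = uranium_in_seam_kg * (natural_u235_fraction - oklo_u235_fraction)
    print(f"missing U-235: about {missing_kg:.0f} kg ({missing_kg * 2.205:.0f} lb)")
    # -> roughly 200 kg, on the order of the 441 pounds quoted above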

It turns out it had been broken down during ancient chain reactions in a natural, self-sustaining nuclear reactor — the first ever discovered. Eventually, 17 natural reactor sites were found near the Oklo and Okelobondo uranium mines.

So how did it work?

Nuclear fission occurs when a neutron strikes a fissile isotope, breaking it apart and releasing more neutrons, propelled by the energy of the atomic split. Those neutrons then hit other atoms, which break apart in turn, and the reaction continues.
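
A minimal sketch shows why that multiplication matters: if each fission triggers, on average, more than one further fission, the chain grows; otherwise it dies out. The multiplication factors below are invented for illustration:

    # Toy chain-reaction model: each generation's neutron count equals the previous
    # count times an effective multiplication factor k.
    # k > 1 -> the chain grows; k < 1 -> it fizzles out.
    def run_generations(k, start=100.0, generations=10):
        neutrons = start
        history = [neutrons]
        for _ in range(generations):
            neutrons *= k
            history.append(round(neutrons, 1))
        return history

    print(run_generations(k=1.05))  # slowly growing, self-sustaining chain
    print(run_generations(k=0.90))  # dying away: too many neutrons lost or absorbed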

Uranium nuclei occasionally fission spontaneously, releasing free neutrons, which is why stray neutrons were available to spark a chain reaction in the ore deposit.

For a nuclear reaction to be self-sustaining, however, the fuel has to be surrounded by a moderator: a material that slows free neutrons down, making them more likely to split the next atom and keep the reaction going. At the same time, the deposit can’t contain too much material that absorbs the extra neutrons and grinds the process to a halt.

As in most modern-day nuclear power plants, the moderator in the Oklo deposits was water. Groundwater would seep into the deposit, boil away when the reaction got too hot, and temporarily shut everything down — but when the ground cooled and water returned to the reactor, the nuclear reaction would start up again.
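
Here is a toy feedback loop that gives a feel for that self-regulation; the multiplication factors, heating rate, and boil-off threshold are all invented for illustration, not values measured at Oklo:

    # Toy model of the Oklo on/off cycle: water is the moderator, so the effective
    # multiplication factor k depends on whether groundwater is present.
    # Heat builds while the chain reaction runs, eventually boiling the water away;
    # the reaction then stalls until the rock cools and water seeps back in.
    k_with_water, k_without_water = 1.02, 0.80   # illustrative values only
    heat, water = 0.0, True

    for step in range(12):
        k = k_with_water if water else k_without_water
        running = k > 1.0
        heat = heat + 1.0 if running else max(0.0, heat - 1.0)
        if heat >= 3.0:     # hot enough to boil off the groundwater
            water = False
        elif heat == 0.0:   # cooled down; groundwater returns
            water = True
        print(f"step {step:2d}: water={'yes' if water else 'no'}  "
              f"reaction {'on' if running else 'off'}  heat={heat:.0f}")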

Nuclear reactor zones (1) in Oklo were created by porous sandstone (2) allowing water to trickle into a seam of uranium ore (3) atop a section of granite (4).

Credit: MesserWoland/Wikipedia (CC BY-SA 3.0)

These pulses of activity probably continued, on and off, for hundreds of thousands of years, according to the scientists studying the site.

That is, until the reactor had split so much of its uranium-235 that the reaction had no fuel to continue. Then, millennia before humans stood on two legs, built physics labs, and split the atom, the reactor quietly shut down.

Scientists still aren’t certain how many sites like those at Oklo exist in the world.

Perhaps one of the most fascinating things is what Oklo can teach the nuclear age about the disposal of nuclear waste: It’s the closest thing we have to a long-term study of a nuclear waste disposal site.