Chapter 2

Sources of Energy

All energy on Earth traces back to a common source, the hydrogen atom, and the history of that energy is stored in atoms and in the chemical bonds of both fossil fuels and vegetation. A history of this energy puts issues in perspective: whether making ethanol or operating a nuclear reactor, man is only exercising control over processes that have been occurring for hundreds of millions of years. Stockpiles of fossil and nuclear fuels are the result of this cosmic history. An assessment of nature's energy stockpiles reveals that there is abundant energy; the keys to mankind's prosperity are technology and the ability to implement that technology on a timely basis. Good strategies are paramount.

Keywords

Nuclear; fuel; fossil; petroleum; tight oil; fracking

Prior to the full-scale shale oil fracking boom (~2000–present), the evidence indicated that proven, recoverable oil reserves in the United States could satisfy our thirst for oil only in the short term if the country were cut off from oil imports. In fact, 2008 brought with it a doubling of gasoline prices to over $4 per gallon in the United States and a sustained economic recession due in part to limited crude oil supplies. Shale oil fracking brought jobs, money, and gasoline prices of less than $2 per gallon by the end of 2014.

Today, because of fracking technology, the United States is a stronger country where the risk of rapid increases in fuel prices is reduced. The International Energy Agency (IEA) was correct when it reported:

The United States is in a strong position to deliver a reliable, affordable, and environmentally sustainable energy system; the IEA said today as it released a review of US energy policy. To do so, however, the country must establish a more stable and coordinated strategic approach for the energy sector than has been the case in the past.

Figure 2.1 illustrates how, from 2006 to 2015, fracking of tight oil in the United States halted a trend of increasing oil imports, which had peaked at more than 60% of US liquid fuel supply. Within these 9 years, imports decreased to 40% in 2012 and to an estimated less than 30% by the end of 2014.

image
Figure 2.1 Past and projected source of liquid fuels in the United States including imports [1].

In parallel with the oil fracking boom has been the emergence of a sustainable electric car battery industry (for hybrid and battery-powered vehicles), including but not limited to Tesla Motors. It is the fracking industry that has provided the United States with this window of opportunity. It is the battery industry that will bring the United States into an era of energy sustainability and prosperity while decreasing greenhouse gas emissions.

The battery industry is important because it allows the large-scale crossover of energy from the electrical grid directly to the transportation market. Through this, the diversity and stability of the grid electricity market can bring diversity and stability to the vehicular energy/fuel market. The diversity of the electricity grid includes the use of wind, solar, biomass, and nuclear energies; all of which can be used while reducing greenhouse gas emissions.

The big question in the immediate future is whether the United States will adopt a coordinated strategic approach that serves its own interests, or whether the US economy will be dominated by the policies of Saudi Arabia and the OPEC nations and by how they decide to open or close their "oil valve" to the global market in any particular month. The answer may lie in the United States building a strong and healthy economy with oil above $55/barrel, a price that provides incentive for a sustained fracking industry and allows the battery industries to come into their own. Canada has historically implemented policies that protected its domestic oil industry; the United States has not.

In the discussion of sources of energy, some are easily used to produce liquid fuels and some are primarily useful for producing electricity. However, this distinction becomes less important as the US electric car battery industry grows.

Cosmic History of Energy

The trajectory of human prehistory, moving from "taming fire" and winning metals from ore to the Industrial Revolution, brought us to the 1800s, the chemical age. By 1900, it had been observed that certain chemical elements slowly released penetrating radiation (at the time, no one knew what the radiation was). By the 1920s, experiments had demonstrated that the nucleus of an atom is positively charged and that electric forces hold negatively charged electrons in place around the nucleus, forming the atom. Chemical compounds form when two or more atoms share electrons, yielding the vast array of naturally occurring compounds and providing paths to synthesize many new synthetic materials.

This neat progression of understanding chemistry was interrupted in the 1930s, when it was discovered that there is an electrically neutral particle in the nucleus of the atom; they called it a "neutron." In 1939, scientists irradiated uranium with neutrons and observed a huge release of energy. When a target uranium nucleus split (they called it "nuclear fission"), it formed a pair of atomic nuclei with slightly less total mass than the target uranium atom. Two or three neutrons were also released with each fission. These new nuclei quickly picked up electrons, and they fit right into the periodic table chemists had developed. "Wow!" This suggested a way to quickly release huge amounts of energy (a powerful explosion!) by providing a uranium target atom for each neutron released, leading to a "chain reaction." By then, Adolf Hitler's Germany had conquered most of the small European countries and World War II was underway. The development of the technology to produce this "nuclear weapon" was classified "Top Secret," and all further development of the nuclear bomb remained secret until well after the United States exploded two nuclear weapons over Japan, ending World War II.

The Source of Atoms

The source of the energy released by atomic fission had to come from the very small nucleus of the atom. Nearly all the mass of an atom is located in the nucleus because the negatively charged electrons surrounding the nucleus have very little mass. Experiments showed that an atomic nucleus is composed of just two kinds of particles: positively charged protons and uncharged neutrons. Classical physics could not explain how protons, with their huge mutual electrical repulsion, could be "packed" and held together in the "tiny" volume of an atomic nucleus.

About 1666, Sir Isaac Newton demonstrated that "white" sunlight, when passed through a glass prism, separates into a "spectrum" of colors from red to blue. A huge volume of experimental and theoretical research later demonstrated that all atoms are characterized by the frequencies of radiation they absorb from the spectrum of a radiation beam, or by the radiation they emit when they "cool" from excited states produced by a high-energy "pulse" of radiation. These observations are part of the work that led to the completion of quantum theory. These experiments indicated that radiation spectra, including sunlight, must be characterized as electromagnetic radiation: long wavelengths are low-energy carriers, while short wavelengths are high-energy carriers.

Hence the model of an atom is a positively charged nucleus surrounded by electrons, one electron for each proton in the nucleus, with each electron occupying a unique orbit a small distance from the nucleus. Classical physics would predict that all the negatively charged electrons would be pulled into the positively charged nucleus. Quantum physics (remember, these are very small particles) says an electron can change orbit only when it absorbs or emits a specific wavelength of "light" (spectroscopic analysis suggested this must be true). Since the quantum model of an atom is based on particles (protons, neutrons, and electrons), a "massless" particle was added to the model; they called it a "photon." Each photon carries the exact energy of a single wavelength in the spectrum of light. When a beam of light passes through a "sample" containing many atoms, an electron absorbing a photon moves to a higher-energy orbit, and that wavelength will be missing from the light beam that passed through the sample. The addition of the photon "particle" completed the theoretical model that is the powerful quantum theory of atoms used today.

The spectral energy of absorption or emission is unique for every atom; this theoretical model is the basis for spectroscopic analysis. Astronomers use spectra from radiation sources in outer space to determine the composition of atoms in outer space. The atmosphere covering the Earth diminishes the energy of incoming radiation just as clouds decrease the energy of sunlight. Spectrometers mounted on Earth-orbiting satellites remove this atmospheric interference and "sharpen" the spectral measurements to identify atoms in outer space more clearly.

Observation of the spectra from cosmic explosions verifies that the intense temperatures and pressures produced during these cosmic events are the source of the nuclei of the atoms represented in the periodic table. When neutrons and protons are "mashed" very close together, a very short-range "strong force" holds the protons and neutrons together to form that "tiny" atomic nucleus.

What is Permanence?

The permanence scale in Figure 2.2 shows the binding energy, in million electron volts (MeV, the energy scale used for these subatomic particles), for each nucleon (a proton or a neutron) in the nucleus of an atom. The most stable atoms are those with the highest binding energy per nucleon. The nucleon binding energy is plotted against the atomic mass number.

image
Figure 2.2 Impact of atomic mass number on permanence of atoms. H, hydrogen; He, helium; Li, lithium; C, carbon; O, oxygen; F, fluorine; Ar, argon; Fe, iron; Kr, krypton; Sn, tin; Gd, gadolinium; Pu, plutonium; Bi, bismuth; and U, uranium.

The maximum binding energy per nucleon in Figure 2.2 occurs near mass number 56, the mass number of the most common iron isotope. As the atomic mass number increases from zero, the binding energy per nucleon increases: the number of protons (with positive charge) increases, and although protons strongly repel each other, that "strong force" holds them together to form an atomic nucleus. The binding energy of the neutrons (no charge) helps hold the protons in the nucleus of the atom together. The atomic mass number for each atom is approximately the total mass of the protons and neutrons in the nucleus, since the electrons (equal in number to the protons) have very small mass, about 1/1837 that of a proton.
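
The shape of this permanence curve can be approximated with the semi-empirical (Weizsäcker) mass formula of nuclear physics. The sketch below is illustrative only, using one common textbook set of coefficients; it reproduces the qualitative behavior described here, with the binding energy per nucleon peaking near iron:

```python
def binding_energy_per_nucleon(A, Z):
    """Semi-empirical (Weizsacker) mass formula, in MeV per nucleon.
    Coefficients are one common textbook parameterization."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z  # neutron count
    be = (aV * A                               # volume term
          - aS * A ** (2 / 3)                  # surface term
          - aC * Z * (Z - 1) / A ** (1 / 3)    # Coulomb repulsion
          - aA * (N - Z) ** 2 / A)             # symmetry term
    # pairing term: + for even-even nuclei, - for odd-odd
    if Z % 2 == 0 and N % 2 == 0:
        be += aP / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        be -= aP / A ** 0.5
    return be / A

isotopes = {"He-4": (4, 2), "O-16": (16, 8), "Fe-56": (56, 26),
            "Sn-120": (120, 50), "U-238": (238, 92)}
for name, (A, Z) in isotopes.items():
    print(f"{name}: {binding_energy_per_nucleon(A, Z):.2f} MeV/nucleon")
```

The formula is a smooth fit, so it is least accurate for the lightest nuclei, but it shows iron near the top of the curve with uranium well below it, which is exactly the gap that fission exploits.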

Above a mass number of about 60, the binding energy per nucleon decreases. Visualize the large atomic nucleus: many protons trying to get away from each other, packed with a larger number of neutrons, all held very close together by that "strong force." Iron-56 is the most common iron isotope (about 92% of the mass of natural iron), with 26 protons and 30 neutrons. Natural uranium is composed of two isotopes: uranium-235 (about 0.72% of the mass) and uranium-238 (about 99.27%). The U-238 nucleus holds together 92 protons and 146 neutrons. U-238 atoms decay very slowly; it takes about 4.5 billion years for one-half of a lump of pure U-238 to decay, so it is "almost" a stable isotope.

An additional comment: notice that the nucleon binding energy decreases as the atomic mass increases from 60 to 260. This is the source of the energy released in the nuclear fission process. Because the forces in the nucleus of a large atom are so delicately balanced, a small energy addition (a low-energy neutron entering the nucleus) can push one or more of the nucleons just beyond the very short range of the "strong force" holding the nucleus together. This unleashes the strong repulsive forces acting between the many protons, and the nucleus flies apart into two pieces, releasing lots of energy and two or three neutrons. The two pieces of the uranium atom form the nuclei of two smaller atoms with stronger nuclear binding. These "new" nuclei quickly collect electrons and fall exactly into the chemical periodic table. The new small nuclei are usually "radioactive," spontaneously releasing subnuclear particles, radiation, and thermal energy. Only a small fraction of these "fission products" continue to decay for more than 100 years.

The arrays of different elements in our planet, the solar system, and the galaxy reveal their history. Hydrogen, the smallest of the atoms, is assigned an atomic number of one. Physicists tell us that during the birth of the universe it consisted mostly of hydrogen. Stars converted hydrogen to helium, and supernovas (see box "Supernovas") generated the larger atoms through atomic fusion.

Helium has 2 protons, lithium has 3, carbon has 6, and oxygen has 8. The number of protons in an atom is referred to as the "atomic number" and identifies that atom. Atoms are named and classified by their atomic number. Atoms having between 1 and 118 protons have been detected and named (see box "Making New Molecules in the Lab"). The atomic mass is the sum of the masses of the neutrons, protons, and electrons in an atom; the atomic mass and the atomic spacing determine the density (mass per unit volume) of natural bulk materials.

Protons are packed together with neutrons (subatomic particles without a charge) to form an atomic nucleus. Some combinations of protons and neutrons are more stable than others. Figure 2.2 illustrates the permanence of nuclei as a function of the atomic mass number (the mass number is the sum of the protons and neutrons). Helium-3 (He,3) is shown to have a lower permanence than He,4; two neutrons simply hold the two protons in He,4 together more firmly than the one neutron in He,3. In general, the number of neutrons in an atom must be equal to or greater than the number of protons, or that atom will be unstable and "decay" (release subnuclear particles or energy), moving to a more stable combination of protons and neutrons.

The wealth of information in Figure 2.2 demonstrates much about the composition of our planet. For example, it shows why hydrogen atoms combine to form helium: the "permanence" is greater for He,4 than for hydrogen (H,1). In these nuclear reaction processes, atoms tend to move "uphill" on the curve of Figure 2.2 toward more stable states. In Chapter 8, the concept of atomic stability will be discussed in greater detail, and the term "binding energy" will be defined and used in place of "permanence."

Supernovas—The Atomic Factories of the Universe

Astronomers have observed “lead stars” that produced heavier metals like lead and tungsten. Three have been observed about 1600 light years from Earth. To paraphrase a description of the process:

Stars are nuclear “factories” where new elements are made by smashing atomic particles together. Hydrogen atoms fuse to form helium. As stars age and use up their nuclear fuel (hydrogen), helium fuses into carbon.

Carbon, in turn, “grows” to form oxygen, and the process continues to make heavier elements until the natural limit at iron. To make elements heavier than iron, a different process is needed that adds neutrons to the atomic nuclei. Neutrons can be considered atomic “ballast” that carry no electric charge but serve to stabilize the strong repulsive forces of the protons.

Scientists believe there are two places where this “atom nucleus building” can occur: inside very massive stars when they explode as supernovas and more often, in normal stars at the end of their fuel supply when the gravitational forces cause them to collapse into a small, very dense object as they burn out.

From Hydrogen to Helium.

image
Figure 2.3 Fusion of hydrogen to helium.

The most abundant atom in the universe is hydrogen. Hydrogen is the fuel of the stars. This diagram [2] illustrates how two protons join with two neutrons to produce a new stable atom, helium. Considerable energy is released in the process, about 25 MeV for every helium atom formed (570 million BTUs per gram of helium formed).
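
The per-gram figure quoted here can be checked with a short unit conversion (constants rounded):

```python
MEV_TO_J = 1.602e-13   # joules per MeV
AVOGADRO = 6.022e23    # atoms per mole
J_PER_BTU = 1055.0     # joules per BTU

energy_per_atom_mev = 25.0   # released per helium atom formed (from text)
helium_molar_mass_g = 4.0    # grams of helium per mole (approx.)

atoms_per_gram = AVOGADRO / helium_molar_mass_g
joules_per_gram = energy_per_atom_mev * MEV_TO_J * atoms_per_gram
btu_per_gram = joules_per_gram / J_PER_BTU
print(f"{btu_per_gram:.2e} BTU per gram of helium formed")  # ~5.7e8
```

The result, about 5.7 × 10^8 BTU, matches the 570 million BTU figure in the text.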

To make atomic transitions to more permanent/stable atoms, extreme conditions are necessary (see box "Supernovas"). On the sun, conditions are sufficiently extreme to allow hydrogen to fuse into more stable, larger atoms. In nuclear fission reactions in a nuclear power plant, or in Earth's natural uranium deposits, large atoms break apart (fission) to form more stable, smaller atoms.


Figure 2.4 is a starting point for a qualitative understanding of the history of energy. Nuclear reactions are the text in which that history is written. The nontechnical history of energy goes something like this: once upon a time, about 14 billion years ago, there was a "big bang." From essentially nothingness, in a very small corner of space, hydrogen and helium were formed. Carbon, iron, copper, gold, and the majority of other atoms did not exist.

image
Figure 2.4 History of energy.

Unimaginably large quantities of hydrogen and helium clustered together to form stars. The most massive of these stars became supernovas. Fusion conditions in these supernovas were so intense that atoms of essentially every atomic number were formed. Hence carbon, oxygen, iron, copper, gold, and the whole array of atoms that form solid objects on Earth were formed in these atomic factories.

The largest atoms formed were unstable and they quickly decayed or split by fission to form the nuclei of more stable smaller atoms. Uranium has intermediate stability. It can be induced to fission but it is stable enough to last for billions of years, very slowly undergoing spontaneous decay.

The spinning masses continued to fly outward from the big bang. As time passed, localized masses collected to form galaxies; and within these galaxies, stars, solar systems, planets, asteroids, and comets were formed.

Making New Molecules in the Lab

On a very small scale, scientists have been able to make new "heavy" atoms, confirming the way a supernova could combine two smaller atoms to form a larger one. The heaviest atom produced in the laboratory is Element 118 [3]. This was the work of an extended collaboration between scientists at the Joint Institute for Nuclear Research in Dubna, Russia, and Lawrence Livermore National Laboratory in Livermore, California. This very exacting experimental program was conducted over several years [4].

Here is an example of one successful experiment. They used a cyclotron accelerator to produce a high-energy beam of calcium isotope ions to irradiate a californium isotope target. Six months of irradiation produced three identifiable Element 118 atoms! Statistical analysis of many months of measurements established that Element 118 decays by emitting an alpha particle (a helium nucleus) in about 0.89 ms, establishing its half-life. This decay reduces Element 118 to an Element 116 isotope, which decays almost as quickly, emitting one neutron followed by a second neutron. This decay destabilizes the Element 116 nucleus, which fissions (splits apart), ending that experimental sequence. Studying the remains of these experiments (the fission products) made it possible to confirm that Element 118 had existed for an instant, a truly remarkable experimental accomplishment.

It is here that energy, and the universe as we know it, took form. We are just beginning to understand the processes of the stars and supernovas and to tap into the vast amounts of binding energy in the atom. The atomic binding energy available in one pound of uranium is equivalent to the chemical binding energy present in eight million pounds of coal!1
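
This comparison can be sanity-checked with rough numbers: each U-235 fission releases about 200 MeV, and a typical heating value for bituminous coal is assumed here to be about 24 MJ/kg. The sketch below yields a ratio of a few million to one, the same order of magnitude as the figure quoted; the exact multiple depends on the coal grade and on how much of the uranium actually fissions:

```python
MEV_TO_J = 1.602e-13   # joules per MeV
AVOGADRO = 6.022e23    # atoms per mole

fission_energy_mev = 200.0    # typical energy per U-235 fission
u_molar_mass_g = 235.0        # grams of U-235 per mole
coal_energy_j_per_kg = 24e6   # assumed bituminous coal heating value

atoms_per_kg_u = AVOGADRO * 1000 / u_molar_mass_g
j_per_kg_uranium = fission_energy_mev * MEV_TO_J * atoms_per_kg_u
ratio = j_per_kg_uranium / coal_energy_j_per_kg
print(f"1 kg of fully fissioned uranium ~ {ratio:.1e} kg of coal")
```

Because the mass ratio is dimensionless, the same multiple applies per pound: one pound of fissioned uranium corresponds to millions of pounds of coal.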

Nuclear Energy

In theory, nuclear energy is available from all elements smaller (lighter) than iron through nuclear fusion and from all elements larger (heavier) than iron through nuclear fission. Iron is at the peak of the permanence curve, so it is also one of the most abundant elements. While all other "heavy" atoms can decay by spontaneous emission of subnuclear particles to gradually approach iron, iron does not spontaneously decay. When most of the nuclear energy of the universe is spent, iron and elements of similar atomic number (all being stable) will remain.

In general, the largest atoms are the most likely to undergo rapid nuclear decay, such as the release of an alpha particle (a helium nucleus). Atoms heavier than uranium22 have undergone decay and fission and are no longer found on Earth. The amount of U-238 on Earth today is slightly less than half of what was present at Earth's formation. The amount of U-235 on Earth today is only about 1% of what was present at Earth's formation.
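
These surviving fractions follow directly from the half-life relation, fraction remaining = 0.5^(t/T). Using published half-lives (roughly 4.47 billion years for U-238 and 0.70 billion years for U-235) and an Earth age of about 4.5 billion years:

```python
def fraction_remaining(age_gyr, half_life_gyr):
    """Fraction of a radioactive isotope surviving after age_gyr
    billion years, given its half-life in billion years."""
    return 0.5 ** (age_gyr / half_life_gyr)

EARTH_AGE_GYR = 4.5
u238 = fraction_remaining(EARTH_AGE_GYR, 4.47)  # ~half remains
u235 = fraction_remaining(EARTH_AGE_GYR, 0.70)  # ~1% remains
print(f"U-238 remaining: {u238:.1%}")
print(f"U-235 remaining: {u235:.1%}")
```

The calculation gives just under 50% for U-238 and about 1% for U-235, matching the statements above.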

Of interest to us is the ability to use nuclear processes in a controlled and safe manner, because "controlled release" of nuclear binding energy can be used to produce electricity. The energy released is the energy that held protons and neutrons in atomic nuclei as they combine and rearrange in the progression to higher "binding energy." We have been able to use nuclear fission on a practical/commercial scale with one naturally occurring element: uranium. We could perform fission on elements heavier than uranium, but these are not available in nature. The hydrogen bomb is an example where the energy of fusion was used for massive destruction, but the use of fusion for domestic energy production is much more difficult. Practical nuclear fusion methods remain an area of active research.

The only practical nuclear energy sources today are nuclear fission of uranium in nuclear reactors and the recovery of geothermal energy (heat) produced by nuclear decay under the surface of the Earth. The latter occurs continuously; uranium is the primary fuel for both of these processes.

At 18.7 times the density of water, uranium is the heaviest of all the naturally occurring elements (the lightest is hydrogen; iron is 7.7 times the density of water). Each element (identified by the number of protons in its nucleus) occurs in slightly differing forms known as isotopes, which differ in the number of neutrons packed into the nucleus.

Uranium has many known isotopes, none of them truly stable. Natural uranium consists almost entirely of two long-lived isotopes: 99.3% by weight U-238 and 0.71% U-235. (The atomic number of uranium is 92. The number of protons plus the number of neutrons is the mass number. U-238 has 92 protons+146 neutrons=238. U-235 has 92 protons+143 neutrons=235.)
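
The bookkeeping for any isotope is simple arithmetic: the neutron count is the mass number minus the atomic number. A minimal sketch:

```python
URANIUM_Z = 92  # atomic number: protons in every uranium nucleus

def neutron_count(mass_number, atomic_number):
    """Neutrons in an isotope: mass number minus atomic number."""
    return mass_number - atomic_number

for a in (235, 238):
    n = neutron_count(a, URANIUM_Z)
    print(f"U-{a}: {URANIUM_Z} protons + {n} neutrons = {a}")
```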

image
Figure 2.5 Basic steam cycle used with nuclear reactor source of heat.

Modern Nuclear Reactors in the United States

Modern nuclear power plants24 use a pressurized water reactor [5] to produce thermal energy that generates the steam to drive a turbine and generate electricity. The fuel is uranium oxide pellets, enriched to 3%–4% U-235, sealed in tubes that are held in racks in the reactor pressure vessel; this maintains the geometry of the reactor core. The water that removes the heat from the core leaves the reactor at about 320°C; at that temperature it would boil unless held well above its saturation pressure. The pressure in the reactor vessel is held at about 150 atmospheres (2250 psi), so the water never boils. This hot water is pumped to a heat exchanger, where steam is produced to drive the turbines. The high-pressure reactor cooling water always contains small amounts of radioactive chemicals produced by the neutrons in the reactor. This radioactivity never reaches the steam turbine, where it would make it difficult to perform maintenance on the turbine and steam-handling equipment.

Large pressurized water reactors produce about 3900 MW of thermal energy to generate about 1000 MW of electric power. The reactor core contains about 100 tons of nuclear fuel. Each of the nuclear fuel racks has places where control rods can be inserted. The control rods are made of an alloy containing boron, which absorbs neutrons; with these rods in position, there are not enough neutrons to sustain the chain reaction. When all of the fuel bundles are in position and the lid of the pressure vessel is sealed, water containing boric acid fills the pressure vessel. The control rods are withdrawn, and the dissolved boron still absorbs the neutrons from U-235 fission. As the water circulates, boric acid is slowly removed from the water and the neutron production rate increases; the water temperature and pressure are closely monitored. When the neutron population produces the rated thermal power of the reactor, the boron concentration in the water is held constant. As the fuel ages through its life cycle, the boron in the water is gradually reduced to maintain constant power output.
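
The two power figures quoted above imply the plant's thermal-to-electric conversion efficiency; the balance of the heat must be rejected through the condenser and cooling system:

```python
thermal_mw = 3900.0   # reactor thermal output quoted above
electric_mw = 1000.0  # electric output quoted above

efficiency = electric_mw / thermal_mw
waste_heat_mw = thermal_mw - electric_mw
print(f"Thermal-to-electric efficiency: {efficiency:.1%}")
print(f"Heat rejected to the environment: {waste_heat_mw:.0f} MW")
```

With these figures the efficiency is about 26%; the steam-cycle temperatures set by the pressurized-water design limit how much higher it can go.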

If there is an emergency that requires a power shutdown, the control rods drop into the reactor core by gravity. The control rods quickly absorb neutrons, and fission power generation stops. The radioactive fission products in the fuel still generate lots of heat as these isotopes spontaneously decay after fission stops. Water circulation must continue for several hours to remove this radioactive decay heat before the reactor lid can be removed to refuel or maintain the reactor vessel.

U-235 is slightly less stable than U-238, and when enriched to 3%–8% it can be made to release heat continuously in a nuclear reactor. Enriched to 90% U-235, a critical mass will undergo a chain reaction producing the sudden release of huge amounts of energy: a nuclear bomb. We have mastered the technology to perform both of these processes. U-238 decays slowly; about half of the U-238 present when Earth was formed about 4.5 billion years ago (and more than 99% of the U-235) has decayed, helping keep the Earth's interior molten.23

U-235 decays faster than U-238, and we are able to induce fission of U-235 by bombarding it with neutrons. When a neutron enters a U-235 nucleus, it forms U-236, which breaks apart almost instantly because it is unstable, forming the nuclei of smaller atoms plus two or three neutrons. These two or three neutrons can collide with other U-235 atoms to produce more fissions in a sustained chain reaction.

The First Use of Nuclear Power

The first use of nuclear fission was to make a bomb, which was based on an uncontrolled chain reaction. A neutron chain reaction results when, for example, two of the neutrons produced by a U-235 fission produce two new fission events. This occurs when nearly pure U-235 is formed into a sphere that contains a critical mass: about 60 kg of metal. Then, in each interval of 10 billionths of a second, the number of fission events grows from 1, 2, 4, … 64, … 1024, 2048, … as illustrated by Figure 2.6. The transition from very few fission events to an uncountable number occurs in less than a microsecond. The enormous energy released in this microsecond is the source of the incredible explosive power of a nuclear fission bomb.

image
Figure 2.6 Escalating chain reaction such as in a nuclear bomb.

This escalating chain reaction is to be distinguished from the controlled steady-state process as depicted by Figure 2.7. In a controlled steady-state process, a nearly constant rate of fission occurs (rather than a rapidly increasing rate) with a resulting constant release of energy.

image
Figure 2.7 Controlled steady-state chain nuclear fission such as in a nuclear reactor.

The first nuclear bomb used in war, exploded over Hiroshima, Japan, was a U-235-fueled bomb. Two pieces of uranium that together exceeded the critical mass were slammed together in a "gun barrel" by conventional explosive charges. In the resulting nuclear explosion, about 2% of the U-235 mass underwent fission; everything else in the bomb was instantly vaporized. The fireball and the explosion shock wave incinerated and leveled a vast area of Hiroshima. This is the legacy of nuclear energy that indelibly etched fear into the minds of world citizens. The second bomb, exploded at Nagasaki, was a plutonium bomb; it was followed by the development and testing of even more powerful and fearsome nuclear weapons during the Cold War period, adding to this legacy of fear.

For a nuclear bomb, the rapid chain reaction depicted in Figure 2.6 competes with the tendency of the melting and vaporizing uranium to rapidly "splat" over the surroundings. This "splatting" action tends to stop the chain reaction by separating the uranium into small pieces. Weapons-grade uranium is typically at least 80% U-235; higher purities give an increased release of nuclear energy (more fission and less splatting).

The enormous energy available from U-235 in a very small space led US naval technologists to consider using nuclear energy to power submarines. The task is to configure the nuclear fuel (U-235 and U-238) so that exactly one of the neutrons produced by each U-235 fission produces one new fission event. The shapes of the reactor core and the control rods (which absorb neutrons) combine to serve as a "throttle" to match the energy release to the load. The thermal energy produces steam that propels the vessel and provides electric power. All of this technology development was done by private industrial firms under contract to the military and was classified "Top Secret."

The industrial firms that built the nuclear reactors for the military also built steam turbines and generators for electric power stations. The first nuclear reactor built to produce electric power for domestic consumption was put into service in Shippingport, Pennsylvania, in 1957, just 15 years after the "Top Secret" Manhattan Project was assembled to build a nuclear weapon. This represents a remarkable technological achievement. Today, modern nuclear reactors produce electricity based on technology that closely mimics the reactors first used on submarines.

Reserves

Uranium reserves are difficult to estimate; however, an accurate estimate can be made of the energy in the spent rods from US nuclear power generation. Current nuclear technology uses about 3.4% of the uranium in the fuel, leaving 96.6% of the uranium unused. The spent nuclear fuel rods at US nuclear facilities contain as much energy as the entire recoverable US coal reserve. The depleted uranium left behind during the fabrication of the initial nuclear fuel rods has about four times as much energy as that remaining in the spent nuclear fuel rods. Combined, this stockpiled uranium in the United States has the capacity to meet all US energy needs, with very little greenhouse gas emission, for the next 250 years.

Reprocessing Technology

Recovering the fuel value of uranium that has already been mined and used will require reprocessing. A typical spent nuclear fuel rod in the United States contains about 3.4% fission products, 0.75%–1% unused U-235, 0.9% Pu-239 (plutonium), 94.5% U-238, and trace amounts of atoms with atomic masses greater than those of U-235 and U-238 (referred to as transuranic elements).
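
A quick mass balance on these figures shows that they account for essentially the whole rod, with only the transuranic traces left over (the 0.9% value for U-235 below is an assumed midpoint of the quoted 0.75%–1% range):

```python
# Approximate composition of a typical US spent fuel rod (weight %),
# using the figures quoted above.
spent_fuel_wt_pct = {
    "fission products": 3.4,
    "unused U-235": 0.9,   # assumed midpoint of 0.75%-1% range
    "Pu-239": 0.9,
    "U-238": 94.5,
}

total = sum(spent_fuel_wt_pct.values())
recyclable = total - spent_fuel_wt_pct["fission products"]
print(f"Accounted for: {total:.1f}% (remainder: transuranic traces)")
print(f"Potentially recyclable as fuel: {recyclable:.1f}%")
```

Over 96% of the rod's mass is uranium and plutonium, which is why reprocessing treats the fission products, not the bulk of the rod, as the waste.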

Reprocessing spent nuclear fuel would make available the major fraction of the fuel, the unspent uranium, which is most of the uranium that remains in the rods. This would add to the stockpiled uranium fuel and, in addition, separate out the fission products. The fission products then become the "nuclear waste," of which only a very small quantity requires storage for more than 100 years.

Reprocessing involves removing the 3.4% that is fission products and enriching U-235 and/or Pu-239 to meet the “specifications” of nuclear reactor fuel. The “specifications” depend on the nuclear reactor design. Nuclear reactors and fuel-handling procedures can be designed that allow nuclear fuel specifications to be met at lower costs than current reprocessing practice in France. For comparative purposes, the costs of coal, US nuclear fuel from mined uranium, and French reprocessed fuel are about 1.05, 0.68, and 0.90 cents per kWh of electricity produced, respectively.

Around 33% to 40% of the energy produced in nuclear power plants today comes from U-238 irradiated by neutrons in the reactor core. When a U-238 nucleus absorbs a neutron, two nuclear decay events follow, arriving at Pu-239 in about 90 days. That plutonium becomes nuclear fuel. For every three parts of U-235 fuel entering the reactor, about two parts of U-235 plus Pu-239 leave the reactor as “spent fuel.” To date, these fuel values remain stored at the power plant site. These fuel values, the two parts U-235 and Pu-239, are the target of reprocessing technology. To tap this stockpile of fuel, mixed oxide fuel (MOX, uranium and plutonium metal oxide) could be used, or new fast-neutron reactors could be put in place that “breed” more Pu-239 than the combined U-235 and Pu-239 in the original spent fuel.

Decades of commercial nuclear power provide stockpiles of spent fuel rods. Billions of dollars have been collected on a 0.1 cent per kWh tax levied to decommission an “old” reactor. Decommissioning a “retired” reactor must include disposing of the on-site spent fuel or reprocessing it. The remarkable safety history of US-designed reactors is set against a costly history of regulations that slow technology development. These circumstances provide opportunity or perpetual problems, depending on the decisions made to use (or not use) advanced nuclear power options.

Figure 2.8 summarizes the accumulation of spent fuel currently being stored on-site at the nuclear power plants in the United States.

image
Figure 2.8 Approximate inventory of commercial spent nuclear fuel and fissionable isotopes having weapon potential (Pu-239 and U-235). The solid lines are for continued operation without reprocessing and the dashed lines are for reprocessing (starting in 2005) to meet the needs of current nuclear capacity.

The United States has about 100 GW of electrical power generating capacity from nuclear energy. The construction and startup of most of these plants occurred between 1973 and 1989. In 2007, the inventory of spent nuclear fuel corresponded to about 30 years of operation at 100 GW. Figure 2.8 approximates the total spent fuel inventories and cumulative inventories of U-235 and Pu-239 for scenarios of reprocessing versus continued operation without reprocessing. Reprocessing is the key to decreasing Pu-239 and U-235 inventories and ending the accumulation of spent fuel at the nuclear reactor sites.

If reprocessing had started in 2005 to serve all current nuclear power plants, the current inventories, along with the Pu-239 that is generated as part of PWR operation, would provide sufficient Pu-239 to operate at existing capacity through 2065. If in 2005 the demand for Pu-239 and U-235 increased threefold (~300 GW capacity), the current inventories would last until about 2035. Neither scenario depends on fast-neutron reactor technology to convert the much greater inventories of U-238 into Pu-239 fuel. This partly explains why breeder reactor research and operation were discontinued. Breeder reactors will not be needed for some time.

Fast-neutron reactor technology could allow nuclear reactors to meet all energy needs for the next 200–300 years without generating additional radioactive materials and without mining additional uranium. The potential of this technology should not be ignored.

Recoverable uranium on Earth could provide all energy needs for thousands of years (some estimates range up to millions of years, depending upon electrical energy consumption). Reprocessing and fast-neutron reactors can use uranium that is already mined and stored in various forms to provide all energy needs for hundreds of years while simultaneously eliminating nuclear waste. For this reason, the option to use thorium as a nuclear fuel can be delayed for several decades. It does remain a long-term nuclear option.

The fact is that nuclear power offers many outstanding options. The key missing aspect of the nuclear solution is the commitment to do it well and sustainably as opposed to allowing it to be dominated by knee-jerk reactions.

In the pursuit of sustainable energy, nuclear power emerges favorable for five reasons:

1. On a BTU basis, nuclear fuel is the least expensive and is economically sustainable. Nuclear fuel has the potential to cost one-tenth as much as any alternative (less than $0.10 per MBTU).

2. Nuclear fuel is the most readily available fuel—it is stockpiled at commercial reactors in the form of spent fuel.

3. Nuclear fuel is the most abundant; enough has already been mined, processed, and stored in the United States to supply all US energy needs for centuries.

4. There is no technically ready alternative that can give a sustainable energy supply for current and projected energy demand.

5. Nuclear energy generates a small fraction of greenhouse gas emissions relative to the best alternatives.

It is impractical to replace transportation fuels with biomass because there simply is not enough available biomass to meet these needs—let alone nontransportation energy demands. Securing the long-term availability of petroleum has already led to military conflict to keep the oil flowing. The contribution to the trade deficit is also a drag on the US economy.

Natural gas power plants provide a short-term solution because they are cheaper to build and they convert about 50% of the combustion energy into electricity. Natural gas facilities also provide quick startup to compensate for unreliable wind. A diverse mix of energy sources for electrical power, using both natural gas and nuclear, is the better way to meet long-term goals of sustainable and inexpensive electrical power.

Coal will be an important global energy source for decades to come for producing electricity, and it can remain for centuries a feedstock for the chemical industry. However, coal already supplies about 50% of electric power production (see Table 2.1) even though nuclear energy is less expensive on a fuel basis.

Table 2.1

US electric power production in billions of kilowatt hours [6]

Source 1999 Share 2012 Share
Coal 1881 50.1% 1514 36.9%
Natural gas 571 15.2% 1237 30.2%
Nuclear 728 19.4% 769 18.8%
Hydroelectric 319 8.5% 276 6.8%
Petroleum 118 3.1% 23.2 0.6%
Wood 37 1.0% 37.8 0.9%
Geothermal 14.8 0.4% 15.6 0.4%
Wind 4.5 0.1% 140.8 3.4%
Solar 0.5 0.0% 4.3 0.1%
Other bio 22.6 0.6% 19.8 0.5%
Misc 55 1.5% 56.1 1.4%
Total 3752  4095  


Geothermal

Geothermal energy is heat released by the continuous nuclear decay of uranium distributed throughout the Earth, on land and under the sea. Two factors lead to the accumulation of this heat: (i) thousands of feet of the Earth’s crust provide good insulation that reduces the rate of heat loss to outer space; and (ii) heavier elements (like uranium) are pulled by gravity toward the center of the Earth, where they undergo natural radioactive decay, releasing heat.

The warmer the geothermal heat source, the more useful the energy. For most locations, higher temperatures are located several thousand feet under the surface, and the cost of accessing them is great compared to alternatives. At the center of Earth, some 3700 miles below the surface, temperatures reach 9000°F and metals and rocks are liquid [25].

At Yellowstone Park and Iceland, useful geothermal energy is available a few hundred feet under the surface or at the surface (hot springs and geysers). Even at these locations the cost of the underground pipe network necessary to create an electrical power plant is high. On a case-by-case basis, geothermal heating has been economical. Much of Iceland’s residential and commercial heating is provided by geothermal energy (see box).

Geothermal Heating in Iceland

(from http://www.energy.rochester.edu/is/reyk/history.htm)

The first trial wells for hot water were sunk by two pioneers of the natural sciences in Iceland, Eggert Olafsson and Bjarni Palsson, at Thvottalaugar in Reykjavik and in Krisuvik on the southwest peninsula in 1755–1756. Additional wells were sunk at Thvottalaugar from 1928 through 1930 in search of hot water for space heating. They yielded 14 liters per second at a temperature of 87°C, which in November 1930 was piped 3 km to Austurbaejarskoli, a school in Reykjavik which was the first building to be heated by geothermal water. Soon after, more public buildings in that area of the city as well as about 60 private houses were connected to the geothermal pipeline from Thvottalaugar.

The results of this district-heating project were so encouraging that other geothermal fields began to be explored in the vicinity of Reykjavik. Wells were sunk at Reykir and Reykjahlid in Mosfellssveit, by Laugavegur (a main street in Reykjavik) and by Ellidaar, the salmon river flowing at that time outside the city but now well within its eastern limits. Results of this exploration were good. A total of 52 wells in these areas are now producing 2400 liters per second of water at a temperature of 62–132°C.

Hitaveita Reykjavikur (Reykjavik District Heating) supplies Reykjavik and several neighboring communities with geothermal water. There are about 150,000 inhabitants in that area, living in about 35,000 houses. This is well over half the population of Iceland. Total harnessed power of the utility’s geothermal fields, including the Nesjavellir plant, amounts to 660 MW, and its distribution system carries an annual flow of 55 million cubic meters of water.

Some manufacturers refer to systems using groundwater or pipes buried in the ground in combination with a heat pump as geothermal furnaces. These furnaces do not use geothermal heat. Rather, the large mass of Earth simply acts as energy storage, taking in heat during the summer and giving up heat during the winter.

Recent Solar Energy

Use of Sunlight

Solar energy provides a low-cost means to reduce heating costs and can also be used to produce electricity directly. Both uses can be cost-effective, depending on the local cost of conventional electrical energy alternatives.

Solar heating is the oldest, most commonly used, and least expensive use of sunlight. Building location, orientation, and window placement can be used to displace auxiliary heating such as a natural gas furnace. Windows located on the south side of a northern hemisphere building will bring in sunlight to heat during the winter. A strategically located tree or well-designed roof overhang can block the sunlight during the summer. The number and placement of windows will vary based on design preference. Construction materials (such as siding) are available that combine aesthetics with solar function. New building designs are available that provide cost-effective combinations of these solar features.

Solar water heating systems are the next most popular use of solar energy. They use solar heat to reduce the consumption of natural gas or electricity to heat water. Clarence Kemp is known as the father of solar energy in the United States. He patented the Climax Solar Water Heater, the first commercial solar water heater. This and competing systems had sold about 15,000 units in Florida and California by 1937. In 1941, between 25,000 and 60,000 were in use in the United States, with 80% of new homes in Miami having solar hot water heaters. Use outside the United States has also developed, especially in regions where electricity costs are high and the climate is warm.

When confronted with the oil boycott and subsequent oil supply disruptions, Israel proceeded with a major initiative to use solar water heaters. More than 90% of Israeli households owned solar water heaters at the start of the twenty-first century [28]. Solar water heaters are also quite popular in Australia. At sunny locations where electricity is expensive and where natural gas is not available, solar water heating is a good option. It is easy to store warm water, so the solar energy collected during the day is available at night.

Considerable research has been conducted using mirrors to focus sunlight and generate the high temperatures required to produce steam for electrical power generation. To date, most of these systems are too costly. Alternatively, the direct conversion of sunlight to electricity is popular in niche markets, and new technology is poised to expand this application.

In the 1950s, Bell Laboratory scientists made the first practical photovoltaic solar cell. Today, photovoltaic technology, closely related to the semiconductor technology in flat screen computer monitors, is widely used for producing electrical power for electric devices in remote locations. These remote devices include highway signs that cannot be easily connected to grid electricity and small electrical devices like handheld calculators.

Solar energy is usually not used for power generation in competition with grid electricity. In some locations, photovoltaic cells on roofs provide an alternative for enthusiasts where consumer electrical prices are above $0.10 per kWh. Small solar roof units show a better payback to meet individual electrical needs than commercial units designed to sell bulk power to the electrical grid. While consumers will usually pay more than $0.08 per kWh for electricity, when selling excess electricity to the grid one typically receives less than $0.04 per kWh.

The south-facing walls and roof sections of every building in the United States are potentially useful locations for solar photovoltaic panels. Materials having both aesthetic and solar function are becoming available today. From this perspective, there is great potential for solar energy to replace a portion of grid electrical power. At 4.3 billion kWh in 2012, solar electrical power on the grid provided about 0.1% of the electrical energy production (see Table 2.1).

Based on Table 2.1, trends in electrical power generation are as follows: decreasing use of coal, increasing use of natural gas, near constant hydroelectric and nuclear, and increasing wind power.

Hydroelectric Energy

Water stored in high-elevation lakes or held by dams creates high-pressure water at the bottom of the dam. The energy stored in the high-pressure water can be converted to shaft work using hydroelectric turbines to produce electricity. Most of the good dam sites in the United States have been developed, so this is a mature industry. At 319 billion kWh in 1999, hydroelectric power on the grid provided 8.5% of electrical energy production (see Table 2.1). Environmentalists oppose dam construction in the Northwestern United States, and there is active lobbying (especially among Native Americans living on reservations along the river) to remove dams on the Columbia River.

Wind Energy

Wind energy is one of the oldest and most widely used forms of power. Traditionally, the shaft work was used to pump water or mill grain. In the 1930s, an infant wind-powered electrical industry was driven out of business by policies favoring the distribution of fossil fuel electricity.

Between 1976 and 1985, over 5500 small electric utility units (1–25 kW) were installed on homes and remote locations in response to high oil prices. Installation and use dwindled when government subsidies ended in 1985. By the end of 2014, wind power had increased to 4.5% of generated electricity in the United States. Goals are to increase this to as much as 20% [7].

The price of electrical power from wind has decreased from more than $1.00 per kWh in 1978 to about $0.066 per kWh in 2014 [8]. This does not include the cost of backup power when wind is not blowing.

The primary issue for more widespread use of wind power is not the cost of the wind turbines. The first and foremost issue is that (i) wind does not provide dependable power on demand, which means every MW of wind turbine capacity requires a MW of backup electrical power from an alternative source. One could argue on this basis that, at best, wind power capital costs might be twice those of other major power sources because of the constant need for backup power. Other important issues are (ii) environmental impact (noise pollution, poor aesthetics, bird kills, etc.); and (iii) high maintenance costs, because of the large number of wind turbines needed to generate as much power as a typical coal-fired power plant.

The dependability issue concerns the ability of wind power to supply continuous electrical power. The ability of a facility to provide electricity is characterized by its “capacity factor”: the actual energy supplied by a facility divided by the energy it would supply if it operated continuously at its design capacity. Wind power suffers from low capacity factors because of the lack of wind at night and the lack of power demand at times when the wind is blowing. Table 2.2 provides capacity factors for 2012 for the most common sources of grid electrical power.

Table 2.2

Example capacity factors (2012)

Power source Capacity factor
Coal 0.558
Natural gas 0.333
Nuclear 0.862
Hydro 0.400
Renewables 0.323
Petroleum 0.056

The capacity factor for wind is typically 0.2–0.35. Wind’s low capacity factor reflects the fact that wind is not always blowing, while the low capacity factor for natural gas primarily reflects how natural gas is used to provide peak demand power. Nuclear power plants have long startup times, so they are among the first sources of power used on the grid, which results in high capacity factors.
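The capacity factor defined above is a simple ratio; a minimal sketch in Python (the 2 MW turbine and its annual output are illustrative numbers, not values from Table 2.2):

```python
def capacity_factor(energy_mwh, nameplate_mw, hours=8760):
    """Actual energy delivered divided by the energy the plant would
    deliver running continuously at nameplate capacity (8760 h/yr)."""
    return energy_mwh / (nameplate_mw * hours)

# Hypothetical 2 MW wind turbine delivering 4380 MWh over a year:
print(capacity_factor(4380, 2.0))  # 0.25, within the typical wind range
```

A coal plant with the same nameplate rating delivering 9784 MWh would score 0.558, matching Table 2.2.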

One of the less obvious opportunities for electrical power supply is energy storage. Storing wind energy as it becomes available, for use during peak demand times, would increase the value of the wind energy and increase capacity factors. This could increase the value of wind energy from a wind farm by a factor of three or more. Such an increase would change the economic outlook of wind power from marginal to profitable. Currently, however, it is more cost-effective to build a backup natural gas power plant than it is to use storage.

Solar Hydrogen

One of the hottest research topics of recent years (2010–2014) has been the use of sunlight to produce hydrogen. This can be done through engineering of bacteria or catalytic solar panels. The standard approach is to split water into hydrogen and oxygen.

Despite the publicity and tens of millions of dollars that the government is investing in this project, the technology has three fatal flaws, any one of which could be a roadblock to it becoming a sustainable technology.

The first flaw is the cost of collecting sunlight. Plants like corn or soybeans self-assemble once the seed is put in the ground, covering hundreds of square miles of land. Sunlight is dispersed, so large acreages of land are necessary. For all solar energy applications (except biomass), the cost of collecting disperse sunlight represents a huge investment in infrastructure: covering hundreds of square miles of land with solar collectors or reactors.

The second flaw with solar hydrogen is the cost of collecting the hydrogen produced over hundreds of square miles of land. This is complicated because the hydrogen product must be kept separated from the oxygen that is also produced. Hydrogen–oxygen mixtures detonate easily, yielding a powerful explosion that puts the energy collector infrastructure at risk. So there is the additional capital cost of separators that collect the hydrogen and release the oxygen at each solar collector. This complicates a facility design that already covers huge acreages.

The third flaw is that even if the hydrogen is collected, the large-scale storage and handling of hydrogen is still too costly and burdensome for widespread use in transportation. Hydrogen storage is an ongoing research topic across the globe.

If it ever becomes feasible to do this on a large scale, it is unlikely that conversion of water to hydrogen can compete with direct conversion of solar energy to electricity.

The United States is a rich country capable of devoting efforts to research that does not hold the prospect for immediate and sustainable utility. Research topics such as solar hydrogen deserve to be studied, but primarily in the context of advancing science and engineering rather than a realistic prospect of providing a new sustainable energy technology.

Biomass

Energy storage limits the utility of both wind power and solar energy. Nature’s way of storing solar energy is to produce biomass. Biomass is biological material: grass, wood, corn, palm oil, and similar plant material. Time, the absence of oxygen, and compaction (promoted by Earth overburdens) convert biomass to coal, petroleum, or other geological variations of these materials.

Wood has been used through the ages to produce heat. Today, wood is burned to provide heat and very little is used to generate electricity; corn is converted to ethanol; and vegetable oils are converted to biodiesel. Unlike fossil fuels, biomass is not available from geological formations that have accumulated over millions of years. Rather, biomass grows each year and must be harvested and used quickly to obtain quality fuel. The supply must be renewed every year.

The availability of biomass is reported as estimated amounts produced per year. A wide range of biomass types, as well as different terrain and climate, control biomass availability. The supply of biomass tends to increase with increasing prices. Table 2.3 summarizes example prices and availability of solid biomass (not including fats and oils). Updated prices on corn would need to be used for any estimates using that feedstock.

Table 2.3

US estimate of supplies and production potential of ethanol from biomassa [9]

 Price $/dry ton Quantity million dry tons/yr Conversion gallons ethanol/ton Ethanol equivalent millions of gallons/yr Cost of feedstock/gallon ethanol
$2.40 bu (56 lb) Corn, United States [10,11] 85.7 280 89 24920 $0.96
Refuse derived waste 15 80 80 6400 $0.19
Wood waste cheap 30 10 110 1100 $0.27
Wood waste expensive 45 80 110 8800 $0.41
Switchgrass cheap 25 5 95 475 $0.26
Switchgrass expensive 45 250 95 23750 $0.47


Except for reliable corn numbers, conversions are optimistic.

aEstimated per-ton yields of ethanol from corn, sorghum, refuse-derived fuel, wood waste, and switchgrass are 89, 86, 80, 110, and 95, respectively. Corn has 56 pounds per bushel with an assumed price of $2.40 per bushel ($85.71/ton) with an annual US production estimate of 10 billion bushels or 280 million tons.
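The last column of Table 2.3 is simply the feedstock price divided by the ethanol conversion; a quick check in Python reproduces it (values copied from the table):

```python
# Feedstock cost per gallon of ethanol = (price, $/dry ton) / (gal/ton),
# reproducing the final column of Table 2.3.
feedstocks = {
    "corn":                   (85.7, 89),
    "refuse-derived waste":   (15.0, 80),
    "wood waste, cheap":      (30.0, 110),
    "wood waste, expensive":  (45.0, 110),
    "switchgrass, cheap":     (25.0, 95),
    "switchgrass, expensive": (45.0, 95),
}
for name, (price_per_ton, gal_per_ton) in feedstocks.items():
    print(f"{name}: ${price_per_ton / gal_per_ton:.2f} per gallon")
```

Corn works out to $0.96 per gallon of ethanol and expensive switchgrass to $0.47, matching the table.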

Solid biomass is used for energy in five different ways: (i) burning for heat or electrical power generation; (ii) conversion to ethanol; (iii) pyrolysis; (iv) gasification; and (v) anaerobic (without oxygen) methane production (landfill gas). The high cost of biomass makes conversion to liquid fuels and use as chemical feedstock the best applications for most of these renewable energy sources. Direct combustion and anaerobic methane are handled on a case-by-case basis, where they are generally profitable if the biomass has already been collected for other reasons (e.g., solid waste collection in cities). For quality liquid fuel production, two technologies stand out: ethanol production and gasification for Fischer–Tropsch liquid fuel production. When including oil seed crops (e.g., soybeans and rapeseed), a third option, biodiesel, is also becoming quite attractive.

Ethanol and Biodiesel

Biofuels like ethanol and biodiesel qualify as recent solar energy. Plants convert sunlight (radiation) to chemical bonds that store the energy for years if properly stored. Making ethanol and biodiesel is the process of changing nature’s chemical bonds to the chemical bonds of liquid fuels useful for powering engines. Processes for making these liquid fuels include chemical reactions and separation steps that remove and purify the liquids from the part of the biomass that cannot be converted to either ethanol or biodiesel.

Table 2.3 shows the number of gallons of ethanol that can be produced from the most common forms of biomass. The corn data of Table 2.3 are important points of reference. Corn is the largest commodity crop in the United States and provides high yields of dense biomass. While the price per ton of corn is almost twice the price of large-volume switchgrass and wood, the processing and accumulation cost of corn is substantially less than the processing costs for the other feedstocks.

Dried distiller grain is the solid by-product sold as a high-protein, high-fat cattle feed when producing ethanol from corn. Over half of the corn cost is recovered by the sale of this by-product. These by-products make corn one of the most affordable feedstocks relative to other biomass crops (e.g., sugarcane, which is not a temperate-zone crop). Other biomass materials may actually give a higher yield of ethanol per acre, but they do not provide the valuable by-products.

Corn is the most commonly used biomass for producing ethanol. The production process consists of adding yeast, enzymes, and nutrients to the ground starchy part of corn to produce a beer. The beer contains from 4% to 18% ethanol, which is concentrated by distillation similar to that used to produce whiskey. The final fuel ethanol must contain very little water for use as motor fuel. Water in gasoline forms an insoluble phase that extracts the ethanol and settles to the bottom of the tank; if this water reaches the engine, the engine will stall.

About 90 gallons of ethanol are produced from one ton of corn, or about 2.5 gallons of ethanol per bushel of corn. The cost to produce one gallon of ethanol from corn is approximately ($/bushel × 0.5 by-product credit) ÷ 2.5 gallons/bushel + $0.70 processing cost. At $2.40/bu, this is about $1.20/gallon. Ethanol has about two-thirds the energy content of gasoline, so this price translates to about $1.80 per equivalent gasoline gallon at $2.40/bu corn. Farmers have found that owning ethanol-producing facilities is a good investment because when corn prices are low, the profit margin for ethanol production can increase.
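The cost arithmetic above can be written out explicitly. A small sketch; the function name is mine, the constants are from the text, which rounds the exact results ($1.18 and $1.77) to $1.20 and $1.80:

```python
def ethanol_cost_per_gal(corn_price_per_bu, byproduct_credit=0.5,
                         gal_per_bu=2.5, processing=0.70):
    """Net cost of a gallon of corn ethanol: corn cost after the 50%
    by-product credit, divided by yield per bushel, plus processing."""
    return corn_price_per_bu * byproduct_credit / gal_per_bu + processing

cost = ethanol_cost_per_gal(2.40)   # ~$1.18/gal, rounded to $1.20 in text
gasoline_equiv = cost / (2 / 3)     # ethanol has ~2/3 gasoline's energy
print(round(cost, 2), round(gasoline_equiv, 2))
```

The same function shows why farmers like the hedge: at $1.80/bu corn, the cost drops to about $1.06/gallon.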

Estimates of gasoline used in US cars, vans, pickup trucks, and SUVs are about 130 billion gallons of gasoline per year. (These numbers agree with the motor gasoline consumption of 3.05 billion barrels reported elsewhere.) About 500 million prime, corn-producing acres would be required for ethanol to replace all of the gasoline. This is about one-quarter of the land in the lower 48 states. The lower 48 states have about 590 million acres of grassland, 650 million acres of forest, and 460 million acres of croplands (most is not prime acreage).

If all of the current corn crop were converted to ethanol, this would replace about 17 billion gallons of gasoline—less than 15% of our current consumption. Dedicating acreage for ethanol production equivalent (on a yield basis) to current gasoline consumption would require nine times the acreage of the current US corn crop. This approach is not realistic. However, if hybrid vehicle technology doubles fuel economy and the electric power grid further reduces gasoline consumption to about 60 billion gallons, substantial ethanol replacement of gasoline is possible. Use of perennial crops would be a necessary component of large-volume ethanol replacement of gasoline.
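The replacement estimate follows from two numbers already given: roughly 25 billion gallons of ethanol from the entire corn crop and ethanol’s two-thirds energy content. A quick check (all inputs from the text):

```python
gasoline_gal = 130e9   # annual US gasoline consumption, gallons
corn_ethanol = 25e9    # ethanol if the entire corn crop were converted
energy_ratio = 2 / 3   # energy content of ethanol relative to gasoline

gasoline_displaced = corn_ethanol * energy_ratio
share = gasoline_displaced / gasoline_gal
print(f"{gasoline_displaced / 1e9:.0f} billion gal displaced "
      f"({share:.0%} of gasoline use)")
```

This reproduces the roughly 17 billion gallons, about 13% of consumption, consistent with “less than 15%” above.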

Corn is an annual crop and must be planted each year. For this reason, costs for mass production of wood and grasses are potentially less than corn. In the late twentieth century, corn-to-ethanol production facilities dominated the biomass-to-ethanol industry. This was due to (i) less expensive conversion technologies for starch-to-ethanol production compared to cellulose-to-ethanol production required for wood or grasses; and (ii) generally ambitious farmer-investors who viewed this technology as stabilizing their core farming business and providing a return on the ethanol production plant investment. State governments usually provide tax credits for investment dollars to build the ethanol plants, and there is a federal subsidy for fuel grade ethanol from biomass.

Because of lower feedstock costs (see Table 2.3), wood-to-ethanol and grass-to-ethanol technologies could provide lower ethanol costs—projections are as low as $0.90 per equivalent gasoline gallon. Research focus has recently been placed on cellulose-to-ethanol production. The cost of cellulose-to-ethanol production has decreased and is now about the same as that of corn-to-ethanol technology. Based on present trends, cellulose-to-ethanol technology could compete in the ethanol expansion of the twenty-first century. It would require large tracts of land dedicated to cellulose production.

The current world production of oils and fats is about 240 billion pounds per year (32.8 billion gallons, 0.78 billion barrels), with production capacity doubling about every 14 years. This compares to a total US consumption of crude oil of 7.1 billion barrels per year, of which 1.35 billion barrels is distillate fuel oil (data for the year 2000) [41]. With proper quality control, biodiesel can be used in place of fuel oil (including diesel) with little or no equipment modification. Large, untapped regions of Australia, Colombia, and Indonesia could produce more palm oil. This can be converted to biodiesel that has 92% of the energy per gallon of petroleum diesel. This biodiesel can be used in the existing diesel engine fleet without costly engine modifications.

In the United States, ethanol is the predominant fuel produced from farm commodities (mostly from corn and sorghum), while in Europe, biodiesel is the predominant fuel produced from farm commodities (mostly from rapeseed). In the United States, most biodiesel is produced from waste grease (mostly from restaurants and rendering facilities) and from soybeans.

In the United States, approximately 30% of crop area is planted to corn, 28% to soybeans, and 23% to wheat. For soybeans this translates to about 73 million acres (29.55 million hectares) or about 2.8 billion bushels (76.2 million metric tons). Soybeans are 18%–20% oil by weight, and if all of the US soybean oil production were converted to biodiesel, it would yield about 4.25 billion gallons of biodiesel per year. Typical high yields of soybeans are about 40 bushels per acre (2.7 tons per hectare), which translates to about 61 gallons of biodiesel per acre. By comparison, 200 bushels per acre of corn can be converted to 520 gallons of ethanol per acre.
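The 61 gallons per acre figure can be reconstructed from the yield and oil content above. A sketch; the 60 lb soybean bushel weight is a standard value that the text does not state, so it is an assumption here:

```python
bu_per_acre  = 40     # typical high soybean yield
lb_per_bu    = 60     # standard soybean bushel weight (assumed, not in text)
oil_fraction = 0.19   # soybeans are 18-20% oil by weight
lb_per_gal   = 7.35   # pounds of biodiesel per gallon

gal_per_acre = bu_per_acre * lb_per_bu * oil_fraction / lb_per_gal
print(round(gal_per_acre))  # ~62, close to the text's 61 gal/acre
```

The small difference from 61 comes from rounding in the oil fraction; the same arithmetic scaled to 73 million acres gives roughly the 4.25 billion gallons quoted above.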

Table 2.4 compares the consumption of gasoline and diesel to the potential to produce ethanol and biodiesel from US corn and soybeans. If all the starch in corn and all the oil in soybeans were converted to fuel, it would only displace the energy contained in 21 billion gallons of the 187 billion gallons of gasoline and diesel consumed in the United States. Yet the combined soybean and corn production consumes 58% of the US crop area planted each year. It is clear that farm commodities alone cannot displace petroleum for transportation fuels. At best, ethanol and biodiesel production is part of the solution. US biodiesel production in 2005 was about 30 million gallons per year, compared to distillate fuel oil consumption of 57 billion gallons per year.

Table 2.4

Comparison of annual US gasoline and diesel consumption versus ethanol and biodiesel production capabilities

Gasoline consumption (billions of gallons per year) 130
Distillate fuel oil (including diesel) consumption 57
Ethanol from corn [equivalent gasoline gallons] 25 [17]
Biodiesel from soybeans [equivalent diesel gallons] 4.25 [3.8]
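The bracketed "equivalent gallons" in Table 2.4 and the 21-of-187 comparison above can be reproduced directly; this sketch simply applies the energy ratios implied by the table's own entries.

```python
# Reproducing Table 2.4's equivalent-gallon totals (all values from the table).
GASOLINE = 130.0   # billion gallons per year consumed
DIESEL = 57.0      # billion gallons per year consumed

ethanol = 25.0 * (17.0 / 25.0)    # corn ethanol in gasoline-equivalent gallons
biodiesel = 4.25 * (3.8 / 4.25)   # soy biodiesel in diesel-equivalent gallons

displaced = ethanol + biodiesel
print(f"{displaced:.0f} of {GASOLINE + DIESEL:.0f} billion gallons")  # 21 of 187
```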

Converting corn and soybean oil to fuel is advantageous because the huge fuel market can absorb all excess crops and stabilize the price at a higher level. In addition, in times of crop failure, the corn and soybeans that normally would be used by the fuel market could be diverted to the food market. The benefits of using soybeans in the fuel market might be improved by plant science technology to develop higher oil content soybeans.

Soybeans sell for about $0.125 to $0.25 per pound, while soybean oil typically sells for about twice that ($0.25–$0.50 per lb). The meal sells for slightly less than the bean at about $0.11–$0.22 per pound. Genetic engineering that would double the oil content of soybeans (e.g., 36%–40%) would make the bean, on average, more valuable. In addition, the corresponding 25% reduction in the meal content would reduce the supply of the meal and increase the value of the meal. At a density of 0.879 g/cc, there are about 7.35 lbs of biodiesel per gallon. A price of $0.25 per lb corresponds to $1.84 per gallon; $0.125 per lb to $0.92 per gallon.
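The density-to-price conversion in the paragraph above is easy to verify. The sketch below assumes the standard figure of about 8.345 lb per US gallon of water, which is not given in the text.

```python
# Converting biodiesel density to pounds per gallon and price per gallon.
# The weight of water per US gallon is an assumed standard conversion factor.
WATER_LB_PER_GAL = 8.345

density_g_per_cc = 0.879
lb_per_gal = density_g_per_cc * WATER_LB_PER_GAL
print(f"{lb_per_gal:.2f} lb/gal")   # roughly the 7.35 lb/gal quoted in the text

for price_per_lb in (0.125, 0.25):
    print(f"${price_per_lb}/lb -> ${price_per_lb * lb_per_gal:.2f}/gal")
```

Rounding accounts for the small difference between this result and the $1.84 and $0.92 per gallon figures in the text.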

Fuel production from corn and soybean oil would preferably be sustainable without agricultural subsidies (none for ethanol use, biodiesel use, farming, or not farming). A strategy thus emerges that can increase the value of farm commodities, decrease crude oil imports, decrease the value of crude oil imports, and put US agriculture on a path of sustainability without government subsidies.

To be successful, this strategy would need the following components:

1. Develop better oil-producing crops.

• Promote genetic engineering of soybeans to double oil content and reduce saturated fat content (saturated fats cause biodiesel to plug fuel filters at moderately low temperatures).

• Promote the establishment of energy crops like the Chinese tallow tree in the South that can produce eight times as much vegetable oil per acre as soybeans.

2. Plan a future with widespread use of diesel engines and fuel cells.

• Promote plug-in hybrid electric vehicle (PHEV) technology that uses electricity and higher fuel efficiency to replace 80% of gasoline consumption. Apply direct-use ethanol fuel cells for much of the remaining automobile transportation energy needs.

• Continue to improve diesel engines and use of biodiesel and ethanol in diesel engines. Fuel cells will not be able to produce enough power to compete with diesel engines in trucking and farm applications for at least a couple of decades.

3. Pass antitrust laws that are enforced at the border. If the oil-exporting nations allow the price of petroleum to exceed $70 per barrel ($2.00 per gallon diesel, not including highway taxes), do not allow subsequent price decreases to bankrupt new alternative fuel facilities.

4. Fix the dysfunctional US tax structure.

• Restructure federal and state taxes to substantially eliminate personal and corporate income taxes and replace the tax revenue with consumption taxes (e.g., 50%) on imports and domestic products. This would increase the price of diesel to $2.25 per gallon (red diesel, no highway tax).

• Treat farm use of ethanol and biodiesel as internal use of a farm product to which no consumption tax would apply. Increased use of oil crops would include rapeseed in drier northern climates (rapeseed contains about 35% oil) and Chinese tallow trees in the South, which can produce eight times as much oil per acre as soybeans. If Chinese tallow trees were planted on half the acreage of soybeans and the oil content of soybeans were doubled, 17–20 billion gallons of diesel could be replaced by biodiesel while soybean oil continued to be used in food applications. This volume of biodiesel production would cover all agricultural applications and make oil imports optional.

The PHEV technology would displace about 104 billion gallons per year of gasoline through electricity and increased energy efficiency. The electricity could be made available by reprocessing spent nuclear fuel and adding advanced-technology nuclear reactors. About half of the remaining 26 billion gallons of gasoline could be displaced with ethanol, with the rest met by continued use of gasoline.

In this strategy, up to 55 billion gallons of annual diesel and gasoline consumption would still need to come from fossil fuel sources. These needs could be met with petroleum, coal-derived liquid fuels (such as Fischer–Tropsch fuels), and Canadian oil sand fuels. Increased use of electric trains for freight could replace much of the 55 billion gallons.

It is in farmers' interests to convert at least part of the corn and oil seed (soybean) crops to fuels, since this creates increased demand for their commodities and higher prices. When the resulting renewable fuel production capacity is combined with new shale oil fracking sources, the United States emerges with much better fuel supply security in the year 2015 than in 2000.

Algal Biodiesel

Microorganisms like algae can be used to produce vegetable oils that can then be converted to biodiesel. Two categories of this technology are proposed: (i) use of bacteria to convert nutrients in a liquid to vegetable oils; and (ii) use of photosynthesis to produce the vegetable oil in algal or bacterial pools of water.

In the nutrient-conversion approach, it is likely that sustainable processes can be developed for bacteria (or other anaerobic microorganisms) to convert the sewage discharge of cities into a crop of microorganisms and methane. Those microorganisms could then be processed to produce a biodiesel product and a solid biomass that could be burned as a fuel or sterilized and used as food in fish farms. This approach has two advantages over the photosynthesis approaches: (i) it uses a concentrated pool of nutrients, which avoids the cost of collecting sunlight; and (ii) it gains the economic advantage of performing a waste treatment for which cities now maintain a dedicated revenue stream. A further benefit is that, once successful, the technology can be expanded to process other waste streams (such as other organic trash from cities). The pools can also incorporate a photosynthesis component to add to the waste conversion component; this photosynthetic component can be incrementally expanded as technology becomes available.

Algal biodiesel from photosynthesis can be achieved in two ways: (i) use concentrated light in reactors containing microorganisms; (ii) pools containing microorganisms dispersed over a large area to collect dispersed sunlight.

For concentrated-light photosynthesis, there is either the huge infrastructure cost of collecting and focusing sunlight or the huge cost of making electricity and converting it to light for the organisms. Collecting sunlight is expensive, and if it is collected, it is more efficient to convert the sunlight directly to electricity. An approach that produces electricity to generate light to grow "bugs" that are then processed into liquid fuels does not survive an economic analysis; it is far better to use the electricity directly to charge batteries.

At a (2009) conference advocating algal biodiesel [15], it was claimed that, even with high oil productivity from algae, a land mass the size of New Mexico [16] covered with water would be needed to satisfy all US liquid fuel needs. An algal diesel test plot at Roswell, New Mexico [16] yielded disappointing results because the algae did not thrive in the cool weather that prevailed at night and during much of the day. Furthermore, huge expanses of the United States constantly seek more fresh water for cities and agriculture, so the assumption that water is available for algal biodiesel is questionable. When climate, water, and relatively flat terrain needs are considered, possible locations for large-scale photosynthetic algal biodiesel production are limited to southern Louisiana and parts of Florida. In both instances, the first challenge would be to overcome environmentalists and environmental regulation to convert huge expanses of natural habitat to algal farms. Other technical and cost barriers include: (i) developing and maintaining the huge pond algal farms; (ii) preventing other microorganisms from overpowering the preferred algae in these ponds; (iii) collecting the algae for processing; and (iv) removing and processing the huge volumes of algal fluids (intracellular water) to get to the oil.

To conclude, the only approach with promise for sustainable and affordable production of biodiesel from microorganisms is the conversion of city sewage to oil by bacteria (or other organisms). If such an industry were established and sustained, and if it overcame all the organism-processing hurdles, it would be realistic to believe that a sustainable photosynthesis-based industry could follow. Such an industry would first expand by supplementing the waste-based facilities. Algal-biodiesel technology is much like solar hydrogen: research can be justified because the United States can afford it, but the technology does not actually offer the prospect of sustainable industries in the next few decades.

Fossil Fuels

The “fossil” designation of certain fuels implies that the fuel energy originated from prehistoric vegetation or organisms. Sunlight is the only external source of energy received on earth. Sunlight supplied the energy for vegetation, the food for organisms. This suggests that fossil fuels are really “stored solar energy.” They are the most commonly used energy source to provide heat and drive our machines. Fossil fuels tend to be concentrated at locations near the Earth’s surface. This makes them easily accessible. Fossil fuel sources include the following:

• Coal

• Petroleum

• Heavy Oil

• Oil sands

• Oil shale

• Natural gas

• Methane hydrates

In Wyoming, there are vast coal seams 40 feet thick lying less than 100 feet underground. They can be "surface mined" rather than mined with conventional underground shafts. In the Middle East, hundreds of barrels per day of crude oil flow from a single well under the natural pressure of the deposit. Each source provides abundant energy.

Coal, petroleum, and natural gas are accessible fossil fuels and easy to use (see box “Petroleum and Gas”). They are by far today’s most popular fuels. Table 2.5 summarizes the known accessible reserves of these fuels in the world and in the United States. World recoverable reserves for coal, natural gas, and petroleum are 2.5E+19 (25 billion billion), 8.6E+19, and 7.0E+18 BTUs, respectively [12]. For coal, the total estimated reserves are about a factor of 10 higher than the estimated recoverable reserves.

Table 2.5

Comparison of energy reserves and rates of consumption

Energy description Amount (BTU) Amount (barrels)
World recoverable coal reserves 2.5E+19  
World recoverable natural gas reserves 8.6E+19  
World recoverable petroleum reserves 7.0E+18  
US petroleum consumption (year 2000) 4.5E+16/yr 19.7E+6/day
US reserves of tight oil (fracking) 3.6E+17 58E+9 [13]

Petroleum Oil

In 2000, the United States consumed 19.7 million barrels of petroleum per day, or 3.8E+16 BTUs per year. This rate of consumption would deplete known US petroleum reserves in about 3 years and estimated US petroleum reserves in about 7.6 years. These statistics are summarized in Table 2.5.
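Depletion times like these follow from dividing a reserve by the annual consumption rate. The sketch below uses the chapter's consumption figure; applying it to the 58 billion barrels of tight oil in Table 2.5 reproduces the roughly 8-year estimate given later in the chapter.

```python
# Years-of-supply arithmetic implied by Table 2.5 (barrel counts from the text).
BBL_PER_DAY = 19.7e6                 # US petroleum consumption, year 2000
bbl_per_year = BBL_PER_DAY * 365     # about 7.2 billion barrels per year

def years_of_supply(reserve_bbl):
    """How long a reserve lasts at the year-2000 consumption rate."""
    return reserve_bbl / bbl_per_year

print(f"{bbl_per_year / 1e9:.1f} billion bbl/yr consumed")
print(f"tight oil (58 billion bbl): {years_of_supply(58e9):.1f} years")
```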

It is important to note that between 2005 and 2015 the estimates of world petroleum reserves increased by about 33% and gas reserves increased by about 25%, even though consumption should, by definition, decrease reserves. These data indicate oil reserves are actually many times greater than reported in Table 2.5; it is more a matter of the cost of recovering the oil and the time needed to gear up the industry for that recovery.

From 2006 to 2008, rapid increases in oil demand, largely from China, outpaced the implementation of technology to access the increasing reserves that become available as the price increases. Ultimately, there is no energy crisis for the United States or the world; rather, there is simply a price to be paid for that energy. And there is the potentially very high price associated with poor planning as was indicated by the 2008 recession.

Heavy Oil and Oil Shale

Estimated energy reserves in heavy oil, oil sands, and oil shale dwarf known reserves in coal, natural gas, and petroleum. Figure 2.9 is an attempt to put the quantities of these reserves in perspective. One evolutionary route to form these three other oil reserves includes the advanced stages of petroleum decay. Petroleum deposits contain a wide range of hydrocarbons ranging from the very volatile methane to nonvolatile asphaltines/tars.

image
Figure 2.9 Comparison of estimated reserves of prominent fuels other than renewable fuels.

When petroleum is sealed securely in rock formations, the range of volatility is preserved for millions of years or converted to methane if buried deeper, where it is converted by geothermal heat.

Petroleum and Gas—From the Ground to the Refinery

The inserted image [14] illustrates a typical petroleum reservoir and drilling used to recover petroleum. A rock cap has kept the reserve isolated from atmospheric oxygen for millions of years. Since the petroleum is lighter than water, it floats above any water in the petroleum deposit. Petroleum gases, including natural gas, are the least dense material in the formation and are located above the oil. Drilling and placing a pipe to the petroleum reserve allows recovery. Oil reserves recovered by conventional land drilling applications are typically several hundred feet to about a mile deep.

image
Figure 2.10 Illustration of petroleum drilling rig and reservoir.

When the cap rock is fractured by an earthquake, by erosion of rocks above the formation or simply due to the porous nature of the rock, the more volatile components of petroleum escape. This leaves less-volatile petroleum residues in the forms of heavy oils, oil sand, or “tight oil” in natural oil shale formations.

Heavy oils are volatile-depleted deposits that will not flow to a producing well at reservoir conditions but need assistance for recovery. Oil sand liquid is a "heavy" oil, typically not mobile at reservoir conditions, but heat or solvents can make the oil flow through the porous reservoir rock. Oil in shale formations is usually immobile and is present in "fine grained" rock, formed when annual silt and clay settled out of water in thin layers that do not allow oil flow. Unlike the oil in oil sands, which can be extracted in situ or with small amounts of solvent and/or heat, the oil in shale is more difficult to extract.

Heavy oil reserves in Venezuela are estimated at from 100 billion barrels to one trillion barrels. These heavy oils are generally easier to recover than oil from oil sands. The United States, Canada, Russia, and the Middle East have heavy oil reserves totaling about 300 billion barrels (conservative estimates). These heavy oil reserves are estimated to be slightly greater than all the more easily recovered conventional crude oil reserves.

Surface reserves of oil sands have been mined and converted to gasoline and diesel fuel since 1969 in Alberta, Canada. Production costs are about $20 per barrel of oil produced. This supplies about 12% of Canada's petroleum needs. The sands are strip-mined and extracted with hot water. Estimated reserves in Alberta are 1.2–1.7 trillion barrels, with two open pit mines now operating. Other estimates put oil sand reserves in Canada, Venezuela, and Russia between 330 and 600 billion barrels; these estimates place about 90% of the world's heavy oil (and oil sand) reserves in Western Canada and Venezuela. Total global oil sand reserves are 6–10 times the proven conventional crude oil reserves. (Conventional crude oil reserves are estimated at one trillion barrels.)

Oil Fracking Technology

During the past 20 years, horizontal drilling techniques have been developed and used to tap the huge quantities of natural gas and shale oil known to be in the vast shale formations. A vertical hole is drilled into the shale formation deep underground. A horizontal hole is then drilled from that vertical well into the shale formation. High-pressure water containing proprietary chemicals and fine sand is pumped into the shale, fracturing the shale rock. When the water is removed, the fine sand in the “fracking fluid” holds the thin layers of fractured shale open so that the oil and natural gas can flow to the vertical “production” well.

Shale oil is best characterized as relatively nonvolatile oil dispersed in shale. World shale oil reserves are estimated to be 600–3000 times the world crude oil reserves. Estimates specific to the western United States place reserves at 2–5 times the known world oil reserves. Tight oil reserves are the shale oil that can be recovered using 2015 fracking technology; the US tight oil reserves are approximately twice the US conventional petroleum reserves and are enough to last about 8 years without supplements from the other sources as illustrated in Figure 2.1.

Oil recovered from a drilled well must be liquid and must flow through the geological formation. The oil in oil shale near the surface is not liquid; it is like a paste stuck in the shale. In a shale oil formation deeper in the earth, where temperatures are warmer, the oil is liquid rather than a paste. However, the shale is so fine grained that the oil and gas still will not flow; this is called "tight oil." Fracking fractures the shale and creates fractured rock paths for oil flow from the shale oil formation to the producing well.

Those highly productive geological formations in Texas and North Dakota extend into Canada. Shale oil fracking brings with it the following: (i) a currently realized increase in US oil and natural gas production; and (ii) improvement and lower cost approaches to fracking and horizontal drilling that have implications beyond shale oil reserves in the United States. The cumulative result is that stable oil productivity and prices should be realizable well into the 2030s.

While the industry has adopted terms like crude oil, heavy oil, oil sands, oil shale, tight oil, wet gas, and dry gas, the reality is that all combinations of volatility, ability to flow (viscosity and porosity), and accessibility (a few meters to 10,000 meters below the surface) exist. As a result, a technology like fracking can have an immediate impact on recovering oils from shale in North Dakota as well as extended impacts on accessing oils in different formations.

A recurring theme emerges: energy reserves are important and immense (including the energy in spent nuclear fuel rods), and the application of good technology is the determining factor. This was the case for the tight oil boom illustrated in Figure 2.1.

Natural Gas from Fracking

Shale gas fracking technology has added about 665 trillion cubic feet of recoverable natural gas reserves, estimated to be about half of total US recoverable reserves. Natural gas consumption in the United States is about 22 trillion cubic feet per year. This new natural gas brings low-cost stability to electrical power production because natural gas power plants are relatively inexpensive to build. Natural gas can also be used as a transportation fuel. These reserves further stabilize vehicular fuel prices and supplies well past the 2030s.
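The supply horizon implied by these figures is straightforward arithmetic; this sketch uses the section's own numbers (665 trillion cubic feet of shale gas, stated to be about half of total recoverable reserves, against 22 trillion cubic feet per year of consumption).

```python
# Natural-gas supply horizon from the figures in this section.
FRACKING_RESERVES_TCF = 665      # recoverable shale gas, trillion cubic feet
CONSUMPTION_TCF_PER_YEAR = 22    # approximate US consumption

years_from_fracking = FRACKING_RESERVES_TCF / CONSUMPTION_TCF_PER_YEAR
total_reserves = 2 * FRACKING_RESERVES_TCF   # shale gas is about half the total

print(f"{years_from_fracking:.0f} years from shale gas alone")
print(f"{total_reserves / CONSUMPTION_TCF_PER_YEAR:.0f} years from total reserves")
```

At these rates, shale gas alone covers about 30 years of consumption, and total recoverable reserves about 60, consistent with the "well past the 2030s" claim.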

As an indicator of the extent to which natural gas can be used to replace gasoline, about 15% of Argentina's automobiles are natural gas powered. The top three nations based on the number of natural gas vehicles are Iran, Pakistan, and Argentina. These statistics show that natural gas can replace gasoline if needed or deemed a national priority.

Methane Hydrates

Methane gas often escapes from underground deposits (due to porous or cracked cap rock, erosion of overburden, etc.). If this gas is released deep in the ocean, where there is a combination of water pressure and low temperatures, methane hydrate is formed. Methane hydrate is ice that contains methane in the crystal structure of the ice. The hydrate is stable below the freezing point of water as well as at slightly warmer temperatures in deeper water where the pressure is high. Conditions are right for the formation of methane hydrates on the sea floor (below a few hundred feet of seawater), where the temperature is relatively constant near 4°C, the temperature of maximum water density; these methane hydrate conditions occur even in tropical oceans.

Methane may form from fossil fuels, through reactions initiated by high temperatures, or from the decay of organic material. Methane is always present in crude oil wells, in coal mines, and in surface swamps. The digestive systems of animals and people also contain anaerobic microbes that produce methane. If any of these sources of methane is released into water at cool temperatures and high pressures (due to depth of water), methane hydrates can form.

Also, very cold temperatures can cause methane hydrate formation without high pressures. The water in the Arctic permafrost is frozen and decaying organic material in the soil forms methane hydrate. As long as the permafrost is frozen it holds the methane in hydrate form. When permafrost thaws, the methane is released.

Methane from methane hydrate reserves is not currently recovered. The Japanese have great interest in technology to recover methane from deep ocean deposits; Japan lacks natural fossil fuel reserves, and its deep coastal waters are potential sources of large methane hydrate deposits. In the United States, methane hydrates have received the attention of congressional hearings, where reserves were estimated at 200,000 trillion cubic feet of natural gas. The hydrate reserves off the east and west coastal boundary waters are under the jurisdiction of the United States. Using conservative estimates, these hydrates contain enough methane to supply energy for hundreds of years.

Natural gas emissions from hydrate reserves can occur naturally. The greenhouse effect from released methane will probably occur whether or not we capture its fuel energy. If we burn the natural gas, the resulting carbon dioxide has about 10% of the greenhouse effect of the methane it replaces. Methane released from methane hydrates on the sea floor may have contributed to ending some ancient ice ages. During an ice age, sea levels drop as massive ice sheets form; the ocean water locked up in the ice lowers the sea level, which reduces the seawater pressure on the sea floor. The lower ocean bottom pressure melts the hydrate, releasing methane that bubbles to the surface and enters the atmosphere.
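The claim that burned methane has about 10% of the greenhouse effect of released methane can be roughly checked. The sketch below assumes a 100-year global warming potential of about 28 for methane, an external figure not given in the text.

```python
# Rough check of the ~10% claim. Burning 1 kg of CH4 (16 g/mol) produces
# 44/16 kg of CO2 (44 g/mol). The 100-year GWP of ~28 is an assumed figure.
GWP_METHANE = 28.0
CO2_PER_CH4_MASS = 44.0 / 16.0       # kg CO2 produced per kg CH4 burned

relative_effect = CO2_PER_CH4_MASS / GWP_METHANE
print(f"burning cuts the warming effect to ~{relative_effect:.0%}")
```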

Greenhouse gas emissions can thus be reduced, while recovering usable methane, by mining the hydrate reserves that are most likely to release naturally. Most hydrate mining research involves changing the temperature and pressure at the solid hydrate reserve to melt the methane hydrates and recover the released methane. Experts at a congressional hearing agreed that Alaska's North Slope was the most likely candidate for initial research on hydrate methane recovery because of its relatively easy access (compared to the deepwater Gulf of Mexico) for gas collection infrastructure and because the crude oil industry already operates there.

Figure 2.9 summarizes the energies available in recoverable fuels. These fuels are all in concentrated deposits, with the exception of uranium; the largest fraction of available uranium is dissolved in the world's seawater. The recovery process is known but costly, so it is not competitive at today's low energy prices. These numbers approximate the size of the different energy reserves relative to available conventional crude oil.

Liquids from Coal and Natural Gas

Commercial processes for converting coal to diesel and gasoline date back to Germany before World War II. The processes for converting natural gas to a range of liquid fuels are actually less expensive than the processes that use coal.

Sasol Ltd., a corporation in South Africa, uses the Fischer–Tropsch process to convert coal to gasoline, diesel, and chemicals. This industry developed because South Africa has no crude oil deposits and as a result of the embargoes used to pressure South Africa to end racial discrimination.

Shell Corporation uses the Fischer–Tropsch process to produce liquid fuels from natural gas at facilities such as Shell's Pearl GTL plant in Ras Laffan, Qatar. GTL stands for gas to liquids; it is a way to use stranded natural gas, since converting the gas to liquid substantially reduces transportation costs.

As illustrated in Figure 2.1, the use of GTL technology is expected to continue to increase, slowly but steadily.

Lessons to Be Learned from History

History has shown that good diversity in the electrical power infrastructure provides stable and reasonable prices as well as a reliable supply. This has been the case for decades in the United States. That diversity has increased in the past decade, with coal falling from over 50% to 36% of US electrical power generation. Natural gas and wind power have increased; they have a synergy in that natural gas power plants can start or shut down on short notice to compensate for changing wind velocity. Nuclear has held its own at 19%–20%.

For liquid fuels, the stress of fuel imports and the high costs at the fuel pump have been resolved by a combination of technology and diversity as illustrated in Figure 2.1. Fracking technology took several years to develop, but it rapidly became a main player and a bringer of prosperity. Fuel prices of over $90 per barrel kick-started this industry, and prices of over $50 per barrel should sustain it. Gas-to-liquid technology has also emerged as a contributor.

The real winner in the liquid fuel sector was the United States, as illustrated by Figure 2.11 on the money flow out of the United States for oil imports. It took drastic conditions to bring the technology to bear to fix the problems, but it did happen. In 2008, 60% of US demand was met by oil imports at prices of $110 per barrel, compared to 30% at $55 per barrel. The flow of money and opportunity out of the US economy was decreased by 75% due to technology.

image
Figure 2.11 Historic and projected prices of petroleum, consumption of petroleum, and billions of dollars per year spent on oil imports by the United States.

Three lessons stand out as most important from the past decade:

• Beware of alarmists

• Diversity creates stability

• Technology provides diversity

A fourth lesson is to:

• Create strategies and technologies for likely pending crises.

Alarmist agendas can come from corporations that seek profit as well as from groups that focus on single issues. A decade ago the oil corporations used the alarmist approach in an attempt to gain access to oil reserves in Alaska's national parks. US leaders held firm to the position of protecting the parks, and a decade after the confrontation the United States has emerged with a thriving tight oil fracking industry and a reserve of oil in Alaska's national parks for a possible future rainy day.

On the other end of the spectrum, environmentalist groups or new industries bring alarmist agendas to either ban an industry or promote one. One alarmist approach is to ban nuclear power; a better approach is to recognize that the US nuclear industry is the safest of all the power industries, make nuclear power even safer, and reprocess the spent nuclear fuel. A second alarmist approach is to point to a single solution, such as wind power, to reduce greenhouse gas emissions; a better approach fosters sustainable industries that do not place a burden on society to subsidize otherwise unsustainable industries.

The past decades have seen good trends in diversity for both liquid fuels and for electrical power generation. The greatest single game changing step on diversity is the widespread use of battery powered electric vehicles or direct-grid-powered electric vehicles. This great step would increase the diversity of both the liquid fuel and electric grid infrastructures (batteries can be used for peak load shifting). This single step would also create a path to rapid and substantial decreases in greenhouse gas emissions.

The United States would do well to increase nuclear power to 25%–30% and to first adopt reprocessing technology to use the decades' worth of energy in the uranium and plutonium in spent fuel rods that have accumulated over 30 years of nuclear power production. The next step would include fast-neutron-flux reactor technology to tap the centuries of power in the U-238 in spent fuel rods. It costs between $1.20 and $1.40 to produce one million BTU (MBTU) of thermal energy from coal, while uranium fuel costs about $0.62 per MBTU when enriched to 3.4% U-235. Higher U-235 enrichment reduces the cost of energy. Next-generation (Generation IV) reactors would improve the energy produced per ton of uranium fuel.

As in past decades, technology enables diversity. These technologies include better batteries, technologies for direct-grid-powered vehicles (the fifth mode of transportation), and technologies for using the U-235, plutonium, and U-238 in spent nuclear fuel.

The changes that fracking and horizontal drilling technology have brought to the US economy and the energy sector have been remarkable. The resources and time needed to enact a coordinated strategic approach for the energy sector are clearly available. Much is attainable, and it is a matter of which options the US people and leadership choose. The following strategies have merit for setting policy:

• Allow key technologies to take their course without excessive federal subsidy that assists a particular technology. These subsidies put potential competitive technologies at a disadvantage.

• Keep battery technologies and electric vehicle technologies on course to create stability in transportation fuel prices, supply, and sustainability (including reduced greenhouse gas emissions) by tapping that stability of the electrical grid infrastructure.

• Target win–win technology approaches such as converting waste into fuel (e.g., reprocessing spent nuclear fuel, converting sewage into liquid fuels) and providing faster and more convenient public transportation.

• Biofuel technologies will not replace petroleum, but they have advantages beyond the fossil fuels they replace. A range of technologies should be studied to advance both fundamental and applied science and initial commercialization should be fostered (but not maintained) by governments.

• Beware of alarmists that advocate approaches that are costly—there are cost-effective approaches to address some of the most alarming threats related to the environment. If there are problems with an issue (e.g., nuclear power, tight oil fracking), fix the problems. Do not kill the industry!

It is important to implement smart strategies that improve the economy and are sustainable without subsidies. There are many good solutions and every reason to be optimistic. It is possible for the prosperous countries to continue to prosper. It is also possible this success will show others on the planet paths for them to prosperity.
