The Ghosts of Hill 88

The top of Hill 88 in the Marin Headlands

By Tyler Shewbert

North of San Francisco lie the Marin Headlands. Now part of the National Park Service, this area north of the Golden Gate once belonged to the U.S. military. As one travels through the area, the remnants of its military past are everywhere. I have spent a fair amount of time exploring the headlands, but my hike up Hill 88 on June 28th, 2016 had perhaps the most profound impact of any of my journeys there.

I started at Rodeo Beach, with the intention of simply walking up a hill because it was there. I had no idea what I would find. As I climbed, I passed through the history of the area. First was Battery Townsley, which guarded the Golden Gate until the end of World War Two. This was a mere three-quarters of a mile from the beach, and not that high above it. I continued hiking, spying a hill in the distance that appeared to be the highest in the immediate area, and therefore the one I would climb.

As I approached the midpoint between Battery Townsley and that hill, I found the concrete remnants of what appeared to be more fortifications. These still had a World War Two feel about them, and from this spot I could see my eventual goal even better. It was surrounded by a fence, so I was not sure I would be able to make it all the way to the top, though I had seen people coming down from there.

Hill 88 from a distance

I continued, finally arriving at the top. I was greeted by the guard house.

The guard house

I was intrigued by what I found. The top of this hill, 1,053 feet above sea level, had been flattened, and on it stood what appeared to be some sort of former military installation.

The stands for the radar domes

The site was covered in graffiti. The concrete design screamed Cold War to me; this was not a World War Two facility. Walking around, I found the old helicopter pad, which confirmed my suspicion that this was a Cold War installation and not part of the batteries from WWII.

Helicopter pad

I continued to explore. I found some ravens who were enjoying the amazing weather.

Ravens enjoying the view

The view from the top was amazing. In the East Bay, temperatures were nearing triple digits, but at the top of this hill it was cool, with a light breeze.

Facing San Francisco

Looking towards the exit of the Golden Gate

Looking towards the Financial District and the East Bay

The site had an ominous vibe to it. I found a place to sit and eat my lunch, and having great cell phone service, I proceeded to look up what this place was. I found this site. It said that Hill 88 had been the site of the radar control station for the Nike missile base that existed in the area during the Cold War.

Having visited a Nike missile site across the valley a few years before, I understood their purpose. The Nike site SF-88 in the Marin Headlands used Nike Hercules missiles. These had a range of 87 miles and could carry either a 20-kiloton nuclear warhead or a conventional payload. At sites in the United States, the warhead was almost always nuclear. So armed, a single missile could theoretically destroy several high-altitude bombers or missiles inbound to a target. There were over 145 such sites in the United States until they began to be phased out during the 1970s.

For me, this was a reminder of a frightful time before I was born that my parents had spoken of. This was the era of duck-and-cover drills and nuclear brinkmanship. I am grateful that the United States and the Soviet Union somehow managed to wade through this tenuous time without destroying one another, and a good chunk of the planet with them. I hope that those lessons are not forgotten by my generation and that nuclear disarmament continues.

Hill 88 in operation

Nukes and Floods

By Tyler Shewbert

Since the end of the Second World War, international institutions such as the United Nations, the World Health Organization and the International Monetary Fund have proliferated. These institutions serve various purposes, but their greatest success has been facilitating diplomatic relations between global powers, which has prevented a nuclear conflict. While there has been nonstop warfare since the end of World War Two, no nuclear weapons have been used since the United States dropped the bombs on Hiroshima and Nagasaki. Preventing a nuclear catastrophe has been the greatest success of increased international cooperation, even in times of proxy conflict between the great powers.

The era of nuclear warfare has forever changed how wars are fought. Gone are the days when American generals like LeMay and MacArthur advocated for the limited use of nuclear weapons against the enemy. Now, the belief among the military class in nuclear-armed states is that the use of nuclear weapons, particularly among the heavily armed United States, Russia, and China, would spark a disaster propelling the planet, their own nations included, into many years of misery and chaos. This is something no responsible leadership class would want to be remembered for.

In this era of anti-globalization, it is important to remember the value of the international bodies set up in the years that followed WWII. The U.S. and the Soviet Union fought proxy wars, but these conflicts never escalated into nuclear strikes. This can partly be attributed to reasonable diplomatic classes on both sides that understood the consequences of failure. Institutions such as the U.N. have allowed these parties to resolve issues at the Security Council table rather than on the battlefield. This has not always succeeded, but these diplomatic channels have prevented nuclear conflict, which is itself a great success.

Other global bodies have facilitated economic growth across the planet. Sometimes this has come in the form of direct aid, but often it has come through trade deals that increased cross-border trade. This is essential. International trade binds countries together economically, making them reliant on other nations for their success. When countries cooperate economically, they are less likely to go to war with each other.

It can be argued that President Nixon's visit to China was one of the definitive diplomatic overtures of the past fifty years. It enabled the Chinese leadership to begin economic reforms under Deng Xiaoping in 1978, knowing the West would be open to doing business with them. The increased trade between the U.S. and China, along with consistent diplomatic relations, is partly responsible for the lack of armed conflict between the two countries.

However, for factory workers in the industrialized nations of Europe and the U.S., this increased trade has led to economic instability due to the transfer of production to China and other developing nations. This instability is not solely the result of offshoring production. A decrease in union membership in the United States and the rise of automation have also played significant roles in creating economic hardship for production workers.

These hardships are a reality, and they must be properly dealt with. For many years this reality was ignored by politicians, and it has now manifested as resentment against globalization, which threatens the relative global peace of the past seventy years. The continued success and progress of the human race depends on this stability, facilitated by global institutions and trade. This means that politicians in the West must recognize the plight of those who feel threatened by globalization and believe that tearing down these institutions is the only solution. The Brexit vote and the election of Donald Trump are pleas for help from a class of citizens that feels disoriented in a globalized world; if they are ignored, the attacks will continue on the global order that has lifted great numbers of people out of poverty.

This same anti-globalization attitude also threatens the institutions that sustain the strong diplomatic relations preventing the disaster of nuclear warfare. These institutions must also be kept intact to mitigate climate change damage over the next century and to prevent wars caused by the population displacement that is likely to occur.

For the sake of the continued progress of humanity, the anti-globalists must have their grievances heard. If these people prosper economically, there will be one less reason for them to attack the idea of globalization. However, if these grievances are ignored and the international order begins to break down, we will all face an increased risk of nuclear conflict, and we will be unable to deal with the displacement and disasters that climate change will cause.

My Nuclear Fantasy

ITER: the world’s largest Tokamak (courtesy ITER)
Tyler Shewbert

I have been a proponent of nuclear power, both fission and fusion, since I was very young. I became fascinated with nuclear energy’s potential around age nine, when I began to read about physics. Science fiction was the medium that piqued my interest in these subjects. My parents came of age in the 1950s and 60s and therefore had a mixed view of nuclear energy. They had their concerns, as many people did and still do. However, they always allowed me to explore topics independently and develop my own opinions. Within a few years, after reading many of the arguments for and against the use of fission power, my mind was set: this was the energy source that could change human civilization. I accepted that the technical problems with breakeven fusion energy might make it unattainable, but as an optimist I hoped that it would succeed and revolutionize the world.

Through my teen years and early twenties this conviction cemented, but was rarely discussed. I drifted into other interests and rarely looked at nuclear energy again. Yet in the back of my mind the necessity of providing many terawatt-hours of power for the planet’s energy needs was always there, and in my later twenties it brought me back to nuclear energy.

After Fukushima, the growth the nuclear energy sector had been seeing globally slowed dramatically. This disappointed me. I had thought the swing in opinion back toward nuclear energy was permanent, but it took only one incident to drastically alter those opinions. Plants were shut down in Japan, Germany and many other countries. The anti-nuclear movement caught its breath again against the rising tide of pro-nuclear environmentalists. Once again, my nuclear fantasy was put on hold.

I have envisioned a world in which nuclear energy greatly reduces air pollution. Without the need for fossil fuels as energy sources, the air would begin to clear. Nuclear sources mixed with solar, wind, hydro and others would create an energy boom that would lift the developing world out of poverty. As the air cleared and poverty receded, the Earth would become a calmer place.

I know this is a fantasy. Fission produces waste. This can be managed to a degree, and as new technologies such as the Waste Annihilating Molten Salt Reactor develop, it could be managed far more effectively. The environmental damage from a meltdown can be catastrophic for a nation; meltdowns are rare, but each new reactor increases the probability of an incident. The most significant risk in building out nuclear energy infrastructure is the chance it will be used to develop weapons. And the economics of fission energy are not yet practical for developing nations.

I will still defend fission. I have come to terms with its downsides and understand that these are problems which can be solved or mitigated. I know that fission must be included in the energy mix to reduce climate change. It is immoral to ask the people of developing countries not to use energy on the scale the developed countries do. Providing billions of people with carbon-free energy will allow economies to grow and people to escape poverty and live richer lives. For that, nuclear energy must grow.

Fusion is another topic altogether. It is forever called the technology that is “twenty years away”. However, there is good news coming out of the organizations researching fusion. Even once we achieve the coveted breakeven power production, it will still take time to make fusion energy production economical, particularly for the impoverished nations around the world in dire need of energy. Yet this is a goal worth striving for, and I will gladly spend my lifetime working towards it, passing the baton to the next generation that might finally usher in the era of fusion power. With that, I believe everything will change.

This is mostly speculative. I know there is no magic bullet for the world’s energy and climate issues. It will take a mixture of solutions and a level of international cooperation not yet seen in human history. These are the great tasks of the next hundred years. With a damaged climate, civilization will rip apart. Without developing nations providing energy to their populations, global inequality in incomes and standards of living will tear the world apart. I am an optimist, though. Humanity is capable of both great terror and beautiful progress, but history seems to tell us that progress typically wins out over the terror. I can only play my role in helping to find solutions.


The Future of Space is Nuclear

NEXIS ion thruster undergoing testing as part of Project Prometheus
Tyler Shewbert

Since the beginning of the Space Age, the relationship between space exploration and development on the one hand, and nuclear power as a source of propulsion, heating and electricity on the other, has been seen as symbiotic. Before Sputnik was launched in 1957, development of nuclear thermal propulsion (NTP) under the NERVA/Rover programs had already been under way for two years. Over nearly two decades of development, until cancellation in 1972, a solid foundation of knowledge was acquired about nuclear thermal rocket (NTR) technology. The program was cancelled for political, not technological, reasons.

Since the beginning of the United States space program, radioisotope thermoelectric generators (RTGs) have been used as sources of heat and power on missions ranging from Apollo to New Horizons. The farthest human-made object in space, Voyager 1, is powered by an RTG. The “Nuclear Power Assessment Study” released by the Johns Hopkins Applied Physics Laboratory in 2015 states that newer radioisotope power systems will continue to power humanity’s robotic exploration of the Solar System.

Inspection of Cassini spacecraft RTGs before launch

Nuclear systems provide more energy than either chemical or solar sources. With this increase in available power, many of the restrictions limiting the exploration and settlement of space can be overcome. The main advantages of space nuclear applications are smaller volume, reasonable mass, long operational lifetimes, independence from the Sun’s energy, the ability to deploy kilowatt- and megawatt-class power sources, and reliable operation.

Space is a harsh environment. Closer to the Sun, solar panels can provide much of the power needed for most current space applications, but as we journey farther from the Earth, the available solar power declines. For most exploration beyond Earth, nuclear power sources become necessary. They provide the heat and electricity for instruments to function properly. For future missions, both human and robotic, to Mars and the outer planets, nuclear energy will be needed to power and heat the science packages that will further human knowledge of our neighborhood in space.
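To put rough numbers on that decline (a back-of-the-envelope illustration of my own, not a figure from the original text): sunlight thins with the square of the distance from the Sun, so the solar flux S available at distance r is

$$ S(r) = S_E \left( \frac{r_E}{r} \right)^2, \qquad S_E \approx 1361\ \mathrm{W/m^2} \text{ at } r_E = 1\ \mathrm{AU}. $$

At Jupiter, about 5.2 AU out, that is roughly 1361 / 5.2² ≈ 50 W/m², under 4% of what the same panel would collect near Earth.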

For serious human exploration and eventual economic development of space, both nuclear fission power systems and nuclear propulsion will need to be developed. Nuclear fission plants will provide the electricity and heat needed to settle the Moon and Mars. Solar energy will complement both, but it is well documented that small nuclear reactors would give settlers an advantage that solar would not.

Nuclear energy sources would also be necessary for any large-scale development of local resources. The power needs of any space mining operation could be met far more easily with nuclear energy; indeed, given the heat required, any such operation would rely almost entirely on it. In situ resource utilization (ISRU), the collection and processing of materials in space for human use, could be done with nuclear power on a large scale.

Sketch of nuclear thermal rocket

Nuclear propulsion methods, both nuclear thermal and nuclear electric, would allow for more efficient use of propellant. Nuclear thermal rockets, which have been studied at length by both the United States and the Soviet Union/Russia, involve heating a fluid, typically hydrogen, in a nuclear reactor and expanding it out of a rocket nozzle to produce thrust. This yields a higher specific impulse, almost double that of chemical propulsion. Specific impulse (usually abbreviated Isp) is a measure of the efficiency of a rocket; raising it allows for reduced travel times, which would let future explorers stay longer on the surface of Mars. Many Mars mission designs have used nuclear thermal rockets as their preferred choice of propulsion. This was one of the main goals of the NERVA/Rover programs, and also one of the reasons they were cancelled. Solid-core nuclear thermal rockets have been well researched and ground tested. Liquid-core and gaseous-core engines would theoretically yield even higher specific impulses, opening up the outer Solar System to human exploration and eventually settlement.
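To see what “almost double” buys (a worked example of my own, using representative textbook figures rather than numbers from the original): the Tsiolkovsky rocket equation ties the propellant a vehicle must carry to its specific impulse,

$$ \Delta v = I_{sp}\, g_0 \ln \frac{m_0}{m_f} \quad \Longrightarrow \quad \frac{m_0}{m_f} = e^{\Delta v / (I_{sp} g_0)}, $$

where m₀ and m_f are the vehicle’s initial and final masses and g₀ ≈ 9.81 m/s². For a 4 km/s maneuver, a chemical stage at an Isp of about 450 s needs a mass ratio of e^0.91 ≈ 2.5 (roughly 60% propellant), while a solid-core NTR at about 900 s needs only e^0.45 ≈ 1.6 (roughly 36% propellant). The saved mass can become payload, or be spent on a faster trajectory.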

Where do we stand today? Since the cancellation of NERVA/Rover, there have been a few starts and stops in serious nuclear propulsion and fission power systems. Project Timberwind, part of the Strategic Defense Initiative, developed NTR technology for defense purposes but was cancelled before ground testing began; even so, it produced advances in materials technology. Project Prometheus began in 2003 with the purpose of developing smaller fission reactors for space applications, as a team effort between NASA and the U.S. Navy. Rather than nuclear thermal propulsion, the fission reactors developed under Project Prometheus were to be used for nuclear electric propulsion (NEP), with a reactor powering ion engines. This was to culminate in the Jupiter Icy Moons Orbiter (JIMO); both the project and the mission were cancelled in 2005. Recently, NASA’s Marshall Space Flight Center has been testing nuclear fuels for nuclear thermal propulsion for a human Mars mission.

For any significant human exploration and settlement of the Solar System to take place, fission power systems, and nuclear thermal and nuclear electric propulsion systems need to be researched, ground tested, space tested, and deployed into operation. These technologies need to be treated as a long-term, space-infrastructure project.

NERVA/Rover engines were being developed not only for a possible Mars mission, but also for a lunar shuttle; some were designed to be turned on and off up to sixty times, allowing for such a vehicle. A similarly expansive set of goals needs to guide development today, because designing NTP only to get us to Mars is shortsighted. A cislunar nuclear shuttle would allow for the development of Moon settlements. Supplying any permanent Mars or Moon settlement would require large amounts of supplies to be sent until advanced ISRU was well established, and NTP could do this. With liquid- and gaseous-core engines, travel times between Earth and Mars could be shortened, and a functioning interplanetary economy could eventually develop. These cores would also open up the resources of the asteroid belt and the exploration and possible settlement of the outer planets’ moon systems. Without NTP, none of this is practical.

Fission power systems would give settlements on Mars and the Moon more energy than solar alone could provide. This would lead to better resource development and utilization, and therefore the foundation of a self-sustaining space economy. Economies and settlements can only grow as much as their energy resources allow, and fission power would provide scalable energy systems with the excess energy needed for economic expansion. Just enough energy is not enough; there must be excess for any sort of successful economic development. Mining and processing asteroids would require large amounts of energy, particularly heat, which is much easier to supply with nuclear power systems.

There is already a large knowledge base for some of these technologies; however, it is spread across various Department of Energy and NASA programs. Research projects in these fields have been cancelled time and time again, subject to the whims of politics: significant strides in technology development, shut down on the verge of taking the next step. Without this technology, no permanent human presence in space is possible. Until we take the development of nuclear space applications seriously, we will remain in low Earth orbit, and the only significant economic use of space will be satellites. Because of legal restrictions, it is not practical at this time for private companies such as SpaceX and ULA to develop nuclear-based propulsion, so the onus is on government agencies. A framework similar to the ISS, ITER or CERN that spreads the cost among several developed nations would make it cost-effective. This would also allow the project to continue if a backing country’s political climate changes and it no longer sees the effort as worthwhile.

The future of humanity’s presence in space depends on the long-term development of nuclear space systems for settlement and exploration. It is an undertaking that will not reap immediate rewards and needs to be treated as a long-term research and development project, similar to the quest for nuclear fusion, because the eventual benefits to humanity are immense. It is the destiny of humanity to explore and settle the Solar System, and that is only possible through nuclear technology.

Originally on: http://www.adastranuclear.com


Lessons from Galvani’s and Volta’s Competitive Spirit

Tyler Shewbert

Luigi Galvani’s experiments testing whether electricity was responsible for muscle contractions in frog legs were easily reproduced by scientists for decades after his initial work. This reliable reproducibility allowed other scientists, including Alessandro Volta, to derive their own hypotheses about what was causing the contractions. While the theoretical framework to explain the contractions had not yet been developed, the results of the experiments sparked the development of electrophysiology and contributed to Volta’s development of the battery. In recent decades the biomedical field has had a solid theoretical framework for designing experiments, yet those experiments have low rates of reproducibility. This causes economic damage by increasing failure rates in late-stage drug trials. Scientific progress relies on discarding failed hypotheses and on experiments that others can reproduce to confirm or refute them. Modern science can still learn from the competition and mistrust between Volta and Galvani.

Introduction

In the late 18th century Luigi Galvani began experimenting with frog legs. His method used the lower half of a frog, severed from the body with the nerves exposed, to explore the effects of electricity on muscle movement in the legs. Initially he experimented with external sources of electricity such as Leyden jars, which induced contractions in the leg muscles. He then looked at atmospheric sources of electricity and found these had little effect. He concluded that the muscle had some sort of intrinsic electricity within it [1]. A scientific contemporary, Alessandro Volta, contested Galvani’s explanation, arguing that the contractions were not due to intrinsic electricity but were instead caused by the metals connecting the nerve to the muscle, with the muscle simply reacting to the electricity of the metals. Both Volta and Galvani pursued further experiments in animal electricity to support their own theories [2]. Galvani produced a contraction by connecting the nerves from the two legs together [1]. Volta countered that he could produce electricity by combining silver and zinc, and that the metals were responsible for the contractions, eventually developing the electric battery [1]. Out of Galvani’s experiments came two major breakthroughs: the eventual development of the field of electrophysiology and Volta’s development of the battery [2]. Galvani’s method was simple enough to be reproduced by other scientists; Eusebio Valli, among others, reproduced his experiments with the same results [2]. The lack of acceptance of Galvani’s hypothesis of animal electricity was due to Volta’s success with the electric battery and the lack of a theoretical framework that could explain the results.

In recent decades, the inability to replicate results has begun to plague the biomedical field, particularly preclinical research [3]. The ability to attain “robust, reproducible results” is essential to guiding the direction of further research [4, 5]. Often the original researcher of a published finding is unable to attain the same results again [4]. This has been attributed to a desire for “flashy results” that “ignore the lack of scientific rigor” [4]. Science must make efficient use of its resources, and if the majority of preclinical research is not reproducible, that research is inefficient [4]. Galvani’s methods produced reproducible results that allowed science to progress without a strong theoretical framework. Today there is a strong theoretical foundation on which biomedical research is done, but the experimental framework is failing to develop effective and efficient methods that yield reproducible results.

Results and Discussion

As reported by Begley and Ellis in Nature, clinical trials in oncology have the highest rate of failure compared to other areas [5]. They attribute the high failure rate not only to the difficulty of treating cancer but also to the “quality of published preclinical data”. The effectiveness of drug development relies heavily on the available literature [5]. The problem is that the results of preclinical studies are taken at face value, and this causes problems later in clinical trials. Amgen studied fifty-three papers that were considered “landmark” studies and found that in only 11% of the cases were the results scientifically confirmed [4, 5]. This has a negative economic as well as scientific impact. When preclinical studies with less than a 50% reproducibility rate are used for drug development, clinical trials fail [3–5]. This contributed to a decrease in the overall success rate for Phase II clinical trials from 28% to 18% in the years 2008–2010 [3].

Contrast this with Galvani’s work. Volta was able to replicate Galvani’s experiment, and by doing so was able to develop the hypotheses that eventually led him to the battery [1, 2, 6]. If Volta had been forced to question Galvani’s methods because he could not derive the same result, he would have discarded Galvani’s experiments, and he might never have begun exploring the electric relationship between zinc and silver that led to the battery [1]. Galvani himself was able to extend his experiments because of the consistency of his work, which eventually led him to fairly accurate conclusions regarding the conduction of electricity within animals [1, 2, 6]. He developed a hypothesis that the electricity was conducted by means of a watery interior with an oily exterior, strikingly close to the model later developed by Hodgkin and Huxley [2]. If Galvani’s experiment had not produced consistent results, it would not have been taken seriously by him or his contemporaries. These consistent results allowed Volta to develop an end product, the electric battery, that had significant economic value, and allowed Galvani to suggest there was an “intrinsic” electricity in animals.

Biomedical research is driven in part by the ability to produce tangible economic results. As failure rates of clinical trials increase, the research community could learn some lessons from the distant past, in the form of Galvani’s and Volta’s competition and practices. There was a fundamental mistrust between Galvani and Volta, which drove Volta to check Galvani’s experiments and challenge his theory of animal electricity. Begley and Ioannidis concluded that “science operates under the trust me model that is no longer considered appropriate in corporate life nor in government” [4]. They state that endorsing the current state of research, which is “producing results the majority of which cannot be substantiated”, would be erroneous. To rectify this, they suggest “rethink[ing] methods and standardization of research practices” so that the incentives no longer favor studies that have flash and gain headlines but offer little substance for further research or economic benefit [4].

The research community would benefit from standards and practices that produce results others can readily verify. This would encourage researchers to build further hypotheses on a solid foundation of reliable data. From Galvani and Volta’s competition, two things can be learned that are applicable to today’s environment. The first is that a solid methodology rooted in reproducible results will spark further experimentation with solid results of its own. The second is that a lack of trust among scientists has the benefit of sparking competition to develop solid experiments to prove one’s own hypotheses.

Outlook and relevance of work

The reproducibility of Galvani’s research contributed to the development of electrophysiology and to Volta’s battery. The point of contention between Galvani and Volta was not whether Galvani’s methodology was sound, since Volta could achieve the same results. Their problem was the lack of a sound theoretical framework to interpret the results, combined with a fundamentally healthy mistrust between them. Because Galvani’s methods were sound and his results reproducible, science could develop further: the hypotheses those results generated demanded further experimentation and testing, eventually leading Volta to develop the battery in defense of his hypothesis, and Galvani to connect the nerves of the two frog legs in defense of his.

The biomedical field could benefit from a shift in thinking. Rather than releasing methods that produce results researchers cannot reproduce even in their own labs, science would benefit from fewer published results that cause a sensation and fit the theoretical framework but cannot produce the same results twice. The scientific community would benefit in the same way that it did when Galvani and Volta were competing to defend their own theories. If methods are sound and reproducible, other researchers have the opportunity to challenge the originator’s hypothesis and put forth their own to explain the results. This would not slow progress but rather advance the development of the theoretical framework by ensuring that other researchers’ claims have been properly validated.

References

[1] Piccolino M. Luigi Galvani and animal electricity: two centuries after the foundation of electrophysiology. Trends in neurosciences. 1997;20:443–8.

[2] Piccolino M. Animal electricity and the birth of electrophysiology: the legacy of Luigi Galvani. Brain Research Bulletin. 1998;46:381–407.

[3] Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712.

[4] Begley CG, Ioannidis JPA. Reproducibility in science: improving the standard for basic and preclinical research. Circulation Research. 2015;116:116–26.

[5] Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012;483:531–3.

[6] Piccolino M. Luigi Galvani’s path to animal electricity. Comptes rendus biologies. 2006;329:303–18.


In Defense of Radioisotope Powered Pacemakers

A Medtronic Pu-238 powered pacemaker
Tyler Shewbert

Starting in 1970, radioisotope-powered pacemakers were implanted in over 3,000 patients worldwide. These devices had longer-lived power supplies than battery-powered pacemakers, eliminating the need for battery replacement surgeries [1, 2]. A thirty-one-year study of 139 patients performed at Newark Beth Israel Medical Center showed that nuclear-powered pacemakers required fewer surgeries than a control group with lithium battery-powered devices [2]. The same study showed that cancer rates for patients with nuclear-powered pacemakers were similar to those of a control group with battery-powered devices [2]; the feared increase in cancer rates did not materialize [1, 2]. With improvements in modern electronics and in semiconductor energy conversion efficiency, radioisotope pacemakers could once again reliably provide pacing for patients whose life expectancies exceed twenty years, reducing the need for invasive battery replacement surgeries. A new generation should be developed.

Introduction

A radioisotope power source converts the heat from the decay of a radioactive substance into electricity using either the thermoelectric or the thermionic effect [1–4]. Pacemakers of the 1960s had short battery lives, ranging from twelve to eighteen months [1–4]. A proposal was made to use radioisotope power sources, which would have longer lifetimes and require less surgery, since surgery was needed each time a battery had to be replaced. The Atomic Energy Commission had a guideline of 90% device reliability over ten years [2]. Several manufacturers developed nuclear-powered pacemakers using either thermoelectric or thermionic power conversion [1–4]. Two isotopes, Pu-238 and Pm-147, were chosen as heat sources [1–4]. The amount of radioactive material in each device ranged from 0.105 to 0.40 grams [3]. The majority of the devices used Pu-238, owing to its 87.7-year half-life [3, 4]. In earlier experiments, Pu-238 capsules of 30–50 grams had been implanted in dogs to test for carcinogenic effects; these showed no significant difference from a control group [3, 4]. Some members of the medical community did not trust those experiments and believed that prolonged exposure to implanted radioactive materials would cause cancer [5]. However, this did not happen: in a long-term study, the rates of cancer in two groups, one with nuclear-powered pacemakers and one with battery-powered pacemakers, showed no statistically significant difference over thirty-one years [2]. The major drawbacks of nuclear-powered pacemakers were the availability of nuclear fuel, their excessive size compared to other pacemakers of the period, and the stringent FDA and NRC regulations relative to non-nuclear devices [1]. Devices built with modern electronics would be smaller. However, the strain on Pu-238 sourcing would be significant, since neither the United States nor Russia currently produces it for commercial use, but other isotopes, such as tritium, are available and would not require as much regulation [6].
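The role of the half-life is easy to quantify. Below is a back-of-the-envelope sketch of my own (the ~2.62-year Pm-147 half-life is a standard reference value, not a figure from the cited papers) showing how much of each isotope’s initial thermal power survives a decade:

```python
import math

def power_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioisotope source's initial power remaining after
    t_years, from the exponential decay law P(t) = P0 * 2**(-t / T_half)."""
    return 2.0 ** (-t_years / half_life_years)

# Half-lives: Pu-238 ~87.7 years (quoted above), Pm-147 ~2.62 years (assumed).
for isotope, half_life_years in [("Pu-238", 87.7), ("Pm-147", 2.62)]:
    fraction = power_fraction(10.0, half_life_years)
    print(f"{isotope}: {fraction:.1%} of initial power after 10 years")
```

Pu-238 retains about 92% of its output at the ten-year mark, while Pm-147 falls to about 7%, which is one way to see why most manufacturers chose Pu-238.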

Results and Discussion

The original motivation for nuclear-powered pacemakers was the need for a power source that lasted longer than twelve to eighteen months [1–4]. This is less of a concern with today’s pacemakers: with new lithium batteries, modern devices have a life expectancy of around ten years [7]. While this is an improvement, an otherwise healthy individual in their early forties might still face as many as four battery replacement surgeries in their lifetime. This is where a new generation of nuclear-powered devices would be useful, enabling a long-term patient to undergo fewer surgeries.

The results of three decades of tracking patients reveal that nuclear-powered devices worked well. The study performed by the team at Newark Beth Israel Medical Center was revealing. Over fourteen years they implanted devices in 132 patients and tracked their progress [1, 2]. Of these patients, twelve needed surgery for mode changes, because their implanted devices could not be reprogrammed remotely [1, 2]. Power failure occurred in only one case [1, 2]. Fifteen units were removed because of component malfunctions and eight because the pacing threshold had been exceeded [1, 2]. After fifteen years, the survival rate was 99% for the power systems and 82% for the entire pacing system [1, 2]. The malignancy rate was similar to that of the normal population, and tumors were not concentrated around the pacemaker as had been feared, but randomly distributed as in a normal population [1, 2].

A few conclusions can be drawn from the Newark Beth Israel Medical Center study. First, radioisotope sources are a reliable power supply for pacemakers: the power-system failure rate over fifteen years was less than 1%, better than the regulatory guideline of no more than 10% failures over ten years. Second, the increased cancer rates feared by Hart, the FDA, and the NRC never materialized; exposure to low levels of chronic radiation was not a concern. Third, the radiation exposure for patients was well within the limits the NRC sets for workers at nuclear sites [1–4]. According to EW Webster, as cited in Parsonnet’s 2006 paper, a fluoroscopically controlled battery replacement procedure would expose the patient to 1.6 times as much radiation as fifteen years of pacing with a Pu-238 powered device [2]. As of May 2004, twelve of the 139 patients were still being followed, and one patient still had the original pacemaker thirty-one years later [2]. Two other major studies reached the same conclusions regarding the safety and reliability of nuclear-powered pacemakers, so the Newark Beth Israel Medical Center study can be considered representative [2]. Nuclear-powered pacemakers were successful.

The major drawbacks of the technology were that nuclear-powered pacemakers were larger than the battery-powered devices of the era, that surgeries were still required to correct pacing problems, and the regulations and risks involved in handling nuclear fuel [1]. With improvements in pacemaker technology, the first two drawbacks would be significantly reduced; pacemaker electronics are far more advanced than in the 1970s. The increased cancer rates that Hiram Hart and others feared never materialized, and therefore should not be a concern when considering whether this technology should be revisited.

There is also a modern solution to the drawback of nuclear fuel handling. Betavoltaic devices have improved over the past forty years. They were originally considered as a power source for pacemakers in the 1970s [3, 4]. Betavoltaic power sources use beta (β) particle decay to generate current, as opposed to the alpha (α) decay used in Pu-238 devices. Early semiconductors were not well suited to converting β decay into power, which was a major reason Pu-238 was chosen. However, semiconductor energy conversion technology has improved since the 1970s, and there is renewed interest in using betavoltaics for long-term, low-power needs [8]. While early betavoltaic pacemakers used promethium as the isotope, more efficient energy conversion has enabled betavoltaic devices using tritium, an isotope of hydrogen, and other less radioactive substances as the energy source [8]. Using less radioactive isotopes could allow a reduction in the regulatory framework the NRC and FDA have imposed on nuclear-powered pacemakers. This would reduce the costs of production and, ultimately, of disposal, both of which had been major expenses due to the specific handling requirements of Pu-238 and promethium.

Outlook and relevance of work

Nuclear-powered pacemakers have a successful history of providing long-term, reliable power with side effects similar to those of battery-powered devices. With improvements in modern pacemaker electronics and in semiconductor energy conversion technology, the time has come to revisit radioisotope power sources for pacemakers, particularly for patients expected to survive for multiple decades.

While using Pu-238 would prove a hassle due to the lack of supply and stringent regulations, developing betavoltaic-powered pacemakers would be a logical course to take. Tritium is far easier to source than Pu-238, so the main issue would be keeping the tritium package to roughly the size a battery would occupy in a pacemaker. Tritium’s half-life is only 12.5 years, but two packages of tritium could be used in sequence to extend the lifetime to twenty-five years [8]. If less radioactive isotopes could be used in modern nuclear-powered pacemakers, the regulations could be revisited, simplifying the approval process and reducing regulatory costs. Advances in pacemaker electronics would also mean fewer replacements due to the pacing and lead problems that affected devices implanted in the 1970s and 1980s.
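A quick check of that arithmetic (my own, using the half-life quoted above): a betavoltaic’s output tracks the tritium’s activity, so the remaining power fraction is

$$ \frac{P(t)}{P_0} = 2^{-t/T_{1/2}}, \qquad 2^{-12.5/12.5} = 50\%, \qquad 2^{-25/12.5} = 25\%, $$

meaning a package sized with adequate initial margin covers one half-life, and a second package brought online in sequence extends service to roughly twenty-five years.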

Developing long-term pacemaker power solutions using radioisotopes would once again allow patients, particularly those who may have a pacemaker for forty years or more, to undergo surgery less frequently. Over decades, this could also reduce the total cost of the pacemaker, since a replacement surgery would not be required every ten years or so. The fear that the radiation would cause cancer was proven unfounded by the first wave of nuclear-powered pacemakers, and a new generation of devices could use improvements in energy conversion and pacemaker technology to be more efficient and reliable than the first.

References

[1] Parsonnet V, Berstein AD, Perry GY. The nuclear pacemaker: Is renewed interest warranted? The American journal of cardiology. 1990;66:837–42.

[2] Parsonnet V, Driller J, Cook D, Rizvi SA. Thirty‐One Years of Clinical Experience with “Nuclear‐Powered” Pacemakers. Pacing and clinical electrophysiology. 2006;29:195–200.

[3] Huffman FN, Norman JC. Nuclear-fueled cardiac pacemakers. Chest. 1974;65:667–72.

[4] Norman JC, Sandberg Jr GW, Huffman FN. Implantable nuclear-powered cardiac pacemakers. New England Journal of Medicine. 1970;283:1203–6.

[5] Hart H. Nuclear-Powered Pacemakers. Pacing and Clinical Electrophysiology. 1979;2:374–6.

[6] World Nuclear Association. Plutonium. 2017.

[7] Mallela VS, Ilankumaran V, Rao N. Trends in Cardiac Pacemaker Batteries. Indian Pacing and Electrophysiology Journal. 2004;4:201–12.

[8] Bourzac K. A 25-Year Battery. MIT Technology Review; 2009.
