The Versatility of Vagus Nerve Stimulation

By Tyler Shewbert


            The idea of using electrical stimulation to treat medical disorders has been around since 1889. The first major device implemented successfully on a large scale was the pacemaker. In the past twenty years, there has been increased interest in the electrical stimulation of specific nerves as a method of treating various conditions. As understanding of how the body's electrical pathways affect its function increases, researchers are able to explore new methods of treating common problems using electroceutical devices that are much less invasive than predecessors such as deep brain stimulators and pacemakers. Vagus nerve stimulation (VNS) has been shown to be one of the most promising methods, allowing researchers to treat migraines, epilepsy, traumatic brain injury (TBI), inflammation, and other problems in humans and animals.


Electroceuticals work by manipulating the action potentials that control the body's functions [1]. These action potentials exert their control through certain patterns, which electroceutical devices are able to manipulate [1]. The main advantage of electroceuticals over deep brain stimulation is that they allow specific nerves to be targeted, whereas deep brain stimulation can influence large populations of nerves unrelated to the disease being treated [1]. Recent progress in mapping the nervous system's role in diseases such as obesity has led researchers to believe that electroceuticals can be an effective way of treating such diseases [1].

Electroceuticals are gaining interest from corporations and research institutions. In recent years, GlaxoSmithKline (GSK) and the National Institutes of Health have begun funding research initiatives backed by grants reaching into the hundreds of millions of dollars. The reward for companies such as GSK and ElectroCore, which makes gammaCore, is that the regulatory pathway for medical devices is much less costly and faster than that for drugs. This will enable products to reach the market sooner than their pharmacological counterparts.

The vagus nerve plays a central role in the autonomic nervous system, which regulates the function of the organs [2]. It is the longest nerve in the autonomic nervous system [2]. Because of this important role, it was hypothesized that electrically stimulating parts of the vagus nerve could successfully treat a range of diseases [2]. Controlled stimulation of the vagus nerve was first used to treat epilepsy in the 1990s [3]. Traditionally this was performed by implanting a device on the vagus nerve in the neck and connecting it to a stimulator implanted in the chest [4]. The gammaCore device has been shown to be successful in treating cluster headaches in human patients. Inflammation in rheumatoid arthritis patients has been treated using VNS [3], and VNS was shown to reduce the damage caused by traumatic brain injury in rats and rabbits [3]. Professor Chris Toumazou of Imperial College has developed a device intended to help control hunger in patients suffering from obesity [2]. The actual clinical record of VNS in treating humans is mixed [5]. However, given the nerve's role in the nervous system, treating conditions with VNS remains a tempting and worthwhile pursuit for researchers. In this paper, research on stimulation of the vagus nerve as a treatment for cluster headaches, inflammation, and traumatic brain injury is surveyed.



Results and Discussion

The gammaCore device is an electroceutical placed externally on the neck. The Prevention and Acute Treatment of Chronic Cluster Headache (PREVA) trial compared 45 patients using the gammaCore device with 47 controls who were not [4]. After four weeks, the group that had been routinely using the gammaCore device was suffering six fewer cluster headache attacks a week [4]. This is three times the improvement in the control group, which was suffering only two fewer cluster headaches a week using traditional methods of treatment [4].

Another study examined whether the gammaCore device would work during an acute cluster headache attack. This trial was also successful: 47% of the patients reported that the attack was over within eleven minutes [4]. There was no control group to compare this to, but a study testing pharmaceutical treatment of acute cluster headache attacks found that only 22% of patients were free of pain within two hours [4]. This suggests that electroceutical methods have the potential to outperform drugs in treating cluster headaches.

Kevin Tracey and his research team performed a proof-of-concept experiment to see whether VNS could successfully treat inflammation in rheumatoid arthritis (RA) patients [4]. A VNS device was implanted in the chest of each patient and stimulation was conducted for 42 days. After 42 days, the device was switched off for 14 days, then turned on for another 28 days [4]. The Disease Activity Score (DAS), a standard measure of RA activity, decreased over the first 42 days while the device was on, increased during the 14 days the device was off, and then decreased again over the final 28 days [4]. This shows that the VNS device was successful in reducing the inflammation caused by RA [4]. The timeline for the whole process is shown in the following figure:

Figure 1. Timeline for the RA study (from [6])

On Day 0 the patient received a single 60 s stimulation of 250 μs pulses at a current of 0.25-2.5 mA, and then no stimulation until Day 7 [6]. From Day 7 to Day 28, the current was set to the maximum tolerable value up to 2.0 mA, and the 60 s stimulation was delivered daily [6]. From Day 28 to Day 42, patients who had not responded to the treatment received the stimulation four times a day [6].
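The dosing schedule above can be encoded as a short sketch. The function name and the responder flag are our own; the day boundaries and session counts follow the protocol as described in the text:

```python
# Illustrative encoding of the RA-study stimulation schedule described above.
# Day boundaries follow the text; the function itself is not from the study.

def stimulations_per_day(day: int, responder: bool = True) -> int:
    """Number of 60 s VNS sessions delivered on a given study day."""
    if day == 0:
        return 1                      # single calibration stimulation on Day 0
    if day < 7:
        return 0                      # no stimulation until Day 7
    if day < 28:
        return 1                      # one session per day, Days 7-28
    if day < 42:
        return 1 if responder else 4  # non-responders escalated to 4 sessions/day
    return 0                          # off period begins at Day 42 (per the study timeline)
```

Encoding a protocol this way makes the escalation rule for non-responders explicit and easy to check against the published timeline.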

The device successfully inhibited tumor necrosis factor production on the days it was turned on [6]. This was the critical component in reducing inflammation in these patients. The study was small in scale, with only 17 patients, so large-scale studies are needed to establish how effective this method of treatment is for reducing RA inflammation [6].

Studying the use of VNS to treat traumatic brain injury in humans is much harder than the two studies discussed above, because some form of brain trauma has to occur first. Studies on rats and rabbits have shown promising results [6]. These studies took the form of having the animal perform a cognitive task such as running a maze, traumatically injuring the brain, and then applying VNS treatments for two to four weeks. The use of VNS helped the animals perform, after the trauma, the tasks they had been taught before the injury [6]. However, inducing this type of injury in humans is unethical, and it would also be hard to recruit patients who had experienced TBI within the previous two hours, the window in which researchers believe VNS must begin after the initial injury to treat it successfully [6].

Outlook and relevance of work

            Vagus nerve stimulation has the potential to treat a wide range of diseases and injuries. Its power lies in the central role the vagus nerve plays in the autonomic nervous system. The three examples in this paper are only a few of the treatments being explored using VNS. Other studies suggest it may be successful in treating obesity, which until now has required invasive surgery. If we continue to improve our knowledge of neural circuitry and how neural signals influence bodily functions, the ability to treat a large number of problems will follow. Funding in this field, at several hundred million dollars, is still limited compared to the billions of dollars spent on drug research each year. However, as the promise of VNS and other types of electroceuticals is proven, it can be assumed that funding will increase, enabling researchers to better understand how electroceuticals work. A major breakthrough, albeit a daunting undertaking, would be the full mapping of the neural circuitry and signaling involved in several different disorders. Once this is accomplished, researchers could gain a better understanding of the role electroceuticals play in alleviating symptoms, and use that fundamental understanding to develop new treatments.



[1]       K. Famm, B. Litt, K. J. Tracey, E. S. Boyden, and M. Slaoui, “Drug discovery: a jump-start for electroceuticals,” Nature, vol. 496, no. 7444, p. 159, 2013.

[2]       G. Finnegan. (2016) Could tweaking a nerve beat obesity? Horizon.

[3]       S. Miller and M. S. Matharu, “The Use of Electroceuticals and Neuromodulation in the Treatment of Migraine and Other Headaches,” in Electroceuticals: Advances in Electrostimulation Therapies, A. Majid, Ed. Cham: Springer International Publishing, 2017, pp. 1-33.

[4]       A. Majid, Electroceuticals: Advances in Electrostimulation Therapies. Springer, 2017.

[5]       S. K. Moore, “Follow the wandering nerve,” IEEE Spectrum, vol. 52, no. 6, pp. 78-82, 2015.

[6]       F. A. Koopman et al., “Vagus nerve stimulation inhibits cytokine production and attenuates disease severity in rheumatoid arthritis,” Proceedings of the National Academy of Sciences, vol. 113, no. 29, pp. 8284-8289, 2016.


The Success of Polypyrrole Based Neural Sensors

By Tyler Shewbert


            The study of neural activity can be performed with implanted electrodes. A major drawback researchers face when using typical flat, metal electrodes is that the impedance caused by the growth of scar tissue around the implant renders data collection impossible within weeks [1-4]. A proposed solution is to develop electrodes organically enhanced with polymers and peptides, giving the electrode and neurons a more intimate connection that lasts longer. A team at the University of Michigan added the polymer polypyrrole (Ppy) and various peptides to metallic conductors of gold and iridium, and found that these coatings improved the implanted electrode's ability to record neural activity [1-4]. The success of this research shows that organic electrodes for the study of neural activity are possible and potentially better than their non-organic counterparts.


            An electrode is a basic electrical device used for conduction; when used as neural sensors, electrodes are implanted [4]. For neural applications, however, flat metallic electrodes become surrounded by scar tissue caused by inflammation, and the increasing impedance from scarring renders the device useless within a matter of weeks [1-3].

To improve and optimize such sensors, three things are needed: higher capacitance, convex surfaces, and better biocompatibility [3]. Low impedance is necessary when an electrode is used to measure neural signals [3]. The capacitance between the electrode and the tissue where it is implanted is modelled in series with the impedance caused by the tissue; therefore, increasing the electrode's capacitance increases its efficiency [3]. Convex surfaces allow electrodes to form more intimate connections with the tissue around the implant [3]. Iridium and gold have both been used as electrode contacts for neural sensors because of their known biocompatibility [3]. Unfortunately, long-term recordings using these devices fail [3].
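The series model described above can be sketched numerically. In this minimal sketch, the electrode-tissue interface is a capacitor in series with a tissue resistance, so raising the interface capacitance lowers the impedance magnitude at a given frequency. All component values here are illustrative assumptions, not values from the papers:

```python
import math

def electrode_impedance(R_tissue: float, C_interface: float, freq: float) -> float:
    """Magnitude of a series R-C electrode model: |Z| = sqrt(R^2 + (1/(2*pi*f*C))^2)."""
    Xc = 1.0 / (2 * math.pi * freq * C_interface)  # capacitive reactance
    return math.hypot(R_tissue, Xc)

# Illustrative values (assumed): 10 kOhm tissue resistance, 10 nF interface
# capacitance for a flat metal site, and a rough coating that multiplies the
# effective surface area (and hence the capacitance) roughly 26-fold.
f = 1e3                      # 1 kHz, the biologically important frequency
C_flat = 10e-9
C_coated = 26 * C_flat       # capacitance scales with effective surface area

z_flat = electrode_impedance(10e3, C_flat, f)
z_coated = electrode_impedance(10e3, C_coated, f)
assert z_coated < z_flat     # larger interface capacitance -> lower impedance
```

With these assumed numbers the coated electrode's impedance at 1 kHz is dominated by the tissue resistance alone, which is the qualitative effect the coating is meant to achieve.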

Electroactive polymers and peptides have shown promising results in modifying electrodes to improve all three of these areas. Ppy is an organic, conducting polymer [4]. Ppy in combination with the synthetic peptide DCDPGYIGSR was found to improve in vivo neural recordings in guinea pigs [1]. Ppy was also used in conjunction with the nonapeptide CDPGYIGSR to modify the electrode surface and enhance its ability to connect with the surrounding tissue [2]. The use of Ppy in combination with various other biological materials was shown to increase the area of connection between neuron and electrode, increasing capacitance and reducing impedance [1-4].

Results and Discussion

David Martin and his team at the University of Michigan published a series of papers on using organic materials to enhance the capabilities of implantable neural electrodes. They first explored how Ppy doped with polystyrene sulfonate (PSS) could be used to change the topology of the electrode [3]. Next, they examined how Ppy and peptides could increase the attraction of neural filaments to the electrode, finding that the combination produced the desired convex shape that improves the connection between electrode and surrounding tissue [2]. Finally, they made electrodes coated with Ppy and the synthetic peptide DCDPGYIGSR, implanted them in guinea pigs to study whether the modified surfaces improved data recording compared to a control group implanted with flat-surfaced electrodes, and also tested environmental effects on the electrodes using deionized water [1, 4].

In the first paper, the combination of Ppy and PSS was grown onto neural electrodes made of either Au or Ir [3]. The structure of the Ppy/PSS on the electrode was controlled precisely and reproducibly by the charge passed through the system [3]. The topology of the structure was complex enough that the effective surface area of a Ppy/PSS film was estimated to be 26 times that of a flat gold electrode, and the capacitance increased with this surface area [3]. Impedance spectroscopy showed that the coated electrode had impedance values one to two orders of magnitude lower than a flat Au electrode [3]. The film thickness was varied from 5 to 20 μm, with 13 μm found to be best [3]. Implanting the electrodes in guinea pigs showed that a Ppy/PSS-coated electrode could record high-quality neural data [3]. The ability to reduce the impedance by as much as two orders of magnitude and to increase the surface area 26-fold proved that neural electrode efficiency could be improved by the addition of polymers.

The team then examined adding biomaterials to the Ppy film in hopes of strengthening the connection between tissue and electrode [2]. The nonapeptide CDPGYIGSR and fibronectin fragments (SLPF) were added to the Ppy film [2]. Impedance spectroscopy once again showed that the impedance of the Ppy/SLPF material was an order of magnitude lower at the biologically important frequency of 1 kHz [2]. Next, glial cells from rats and neuroblastoma cells were grown on electrodes with and without the biological coating [2]. The Ppy/SLPF coating attached to the glial cells, and the Ppy/CDPGYIGSR to the neuroblastoma cells, better than the uncoated control electrodes [2]. The results also supported the idea that a convex, highly complex morphology at the tissue-electrode interface is best for establishing a connection [2]. The most important result of this paper was the ability to add cell-binding biomaterial to the polymer film to increase the chance of establishing a well-developed connection between tissue and electrode.

The team's third paper, in 2003, studied the long-term environmental stability of the film-enhanced electrode and its ability to record data over several weeks [1]. Ppy and the synthetic peptide DCDPGYIGSR were now used as the film deposited on Au [1]. First, the electrodes were soaked in deionized water for periods of up to seven weeks [1]. It was found that the peptides did not diffuse away after seven weeks, which had been a major concern [1]. After the probes had been soaked for seven weeks, they were implanted in guinea pigs [1]. A control group of guinea pigs received non-coated electrodes [1]. The impedance was measured at 1 kHz at one, two, and three weeks, and recordings were taken periodically [1]. The electrodes were also stained for microfilaments to show how many remained connected between the neurons and the electrodes [1]. The following table summarizes the results:

Coated Electrodes:
· Impedance: stable for the first week, then increased by 300% by the end of week three.
· Recording: 62.5% still recording after the second week.
· Filaments: 83% at the end of week one; 67% at the end of week two.

Non-Coated Electrodes:
· Impedance: decreased for the first week, then jumped by 300% by the end of week three.
· Recording: no data recorded by the end of week two.
· Filaments: 10% at the end of week one; 6% at the end of week two.

Table 1. Comparison of the results of the coated and non-coated electrodes implanted in guinea pigs (data from [1])

From Table 1, the importance of connected filaments to an electrode's ability to record data is obvious: the ability to maintain recordings is directly related to the number of filaments still connected [1]. The main advantage of using biologically enhanced electrodes is in recording neural data. It would be interesting to see a study comparing the neural filament connections of a Ppy/PSS film with those of a Ppy/DCDPGYIGSR-enhanced film, to see how much the peptide contributes to the connection.

Outlook and relevance of work

            The University of Michigan team has shown that for neural sensing, biologically enhanced electrodes are more effective than their non-coated counterparts. Neural sensors with longer lifetimes make long-term studies of neural activity possible and reduce the need for surgery to implant replacement electrodes. The lower impedance seen during the first two weeks, as in the third study, allows more accurate data collection. Further studies may reveal even better peptides that promote connectivity between neurons and electrodes, potentially for longer periods of time. Other polymers, such as polythiophene, poly(3,4-ethylenedioxythiophene) (PEDOT), and polyaniline, are also being studied [5]. Further research on the potential toxicity of such electrodes is needed before large-scale human studies can be performed; results from a 2009 study of PEDOT-based electrodes showed no toxic effects in rats [6]. While bioelectronic solutions might not solve every problem they are applied to, organically enhanced electrodes appear to be the right approach to neural sensing, though further refinement is necessary.




[1] Cui X, Wiler J, Dzaman M, Altschuler RA, Martin DC. In vivo studies of polypyrrole/peptide coated neural probes. Biomaterials. 2003;24:777-87.

[2] Cui X, Lee VA, Raphael Y, Wiler JA, Hetke JF, Anderson DJ, et al. Surface modification of neural recording electrodes with conducting polymer/biomolecule blends. Journal of biomedical materials research. 2001;56:261-72.

[3] Cui X, Hetke JF, Wiler JA, Anderson DJ, Martin DC. Electrochemical deposition and characterization of conducting polymer polypyrrole/PSS on multichannel neural probes. Sensors and Actuators A: Physical. 2001;93:8-18.

[4] Berggren M, Richter‐Dahlfors A. Organic bioelectronics. Advanced Materials. 2007;19:3201-13.

[5] Guimard NK, Gomez N, Schmidt CE. Conducting polymers in biomedical engineering. Progress in Polymer Science. 2007;32:876-921.

[6] Asplund M, Thaning E, Lundberg J, Sandberg-Nordqvist A, Kostyszyn B, Inganäs O, et al. Toxicity evaluation of PEDOT/biomolecular composites intended for neural communication electrodes. Biomedical Materials. 2009;4:045009.


Magnetoencephalography as a Method for Studying Deep Brain Stimulation

By Tyler Shewbert


            Magnetoencephalography (MEG) has been shown to be an effective method for studying the effects of deep brain stimulation (DBS) in patients with chronic pain, Parkinson's disease (PD), and essential tremor (ET). Its advantages over other neural imaging methods, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), are that MEG uses far weaker magnetic fields than fMRI, so the DBS equipment is not harmed, and that it offers temporal resolution in the millisecond range, which neither PET nor fMRI can provide [1, 2]. The main disadvantage of MEG is that it does not allow accurate localization of deeper brain activity. MEG is the technology best suited for studying DBS because of its accurate temporal and spatial resolution and its lack of negative effects on the DBS device while it is functioning [1, 3, 4].


            Magnetoencephalography is a neural imaging technique first developed by David Cohen at MIT in 1968. MEG records the activity of the brain's magnetic fields outside the head [1, 4-6]. Cohen reported that these fields are on the order of 1 picotesla in strength [6]. MEG works by using a superconducting quantum interference device (SQUID), which converts tiny changes in magnetic flux into voltage changes [5]. The SQUID is connected to superconducting coils placed as close as possible to the patient's head [5]. Because the magnetic fields produced by the brain's currents are so weak, the potential for outside interference is high. External interference is reduced by magnetically shielding the room and the MEG device with mu-metal, Al, and other materials with differing magnetic properties [5-7]. In addition, a gradiometer coil is placed on the other side of the superconducting pickup coil to help reject external magnetic noise by cancelling signals common to the two coils [5]. Software filters out further noise: reference sensors are placed where they pick up mostly external noise, which can then be subtracted from the superconducting coil data [5]. The following image shows the basic setup of an MEG recording device:

Figure 1. The basic setup of an MEG showing the superconductor, gradiometer, input coil and SQUID. (from [5])
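The reference-sensor subtraction described above can be illustrated with a toy numerical sketch. All amplitudes here are invented for illustration, and the single least-squares weight is a simplification of the harmonic subtraction used in real MEG systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example of reference-sensor noise subtraction (values illustrative).
n = 2000
t = np.linspace(0, 2, n)
brain = 1e-12 * np.sin(2 * np.pi * 10 * t)          # ~1 pT alpha-band brain signal
noise = 1e-10 * rng.standard_normal(n)              # environmental noise, ~100x larger

reference = noise + 1e-13 * rng.standard_normal(n)  # reference sensor sees mostly noise
measured = brain + noise                            # measurement coil sees signal + noise

# Least-squares estimate of how much reference-sensor noise leaks into the
# measurement channel, then subtract that contribution out.
w = np.dot(measured, reference) / np.dot(reference, reference)
cleaned = measured - w * reference

# The residual error after subtraction is far smaller than the raw noise.
assert np.std(cleaned - brain) < np.std(measured - brain)
```

Even this crude one-weight subtraction recovers a signal buried two orders of magnitude below the environmental noise, which is the essential idea behind reference-channel noise cancellation in MEG.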

The major drawback is that MEG cannot accurately localize brain activity at depth. Computational methods are used for localization deeper within the brain, but for those methods to work accurately there must be further study of how the deeper regions of the brain function [5]. This is because the two main localization methods, dipole fitting and minimum-norm approaches to spatial reconstruction, both require assumptions about how the brain works in order to localize activity [5].

Deep brain stimulation has proven successful in treating patients with various neurological disorders who have otherwise not responded to other types of treatment [8]. However, there is a lack of understanding of why DBS is successful in treating these disorders [8]. Brain activity is difficult to record accurately, and the implantation of DBS devices makes this harder still [1, 4, 8]. fMRI imaging can cause overheating or movement of the DBS electrodes, or of the associated pulse generator implanted in the patient, because of the strong magnetic fields the fMRI machine uses [2]. PET scans have temporal resolution on the order of minutes, which does not allow researchers to accurately observe the changes in brain activity the DBS device is causing. Therefore, research has been performed on the use of MEG to image brain activity while DBS implants are functioning.

Results and Discussion

Several studies have examined the effectiveness of MEG imaging for studying why DBS succeeds in treating a range of neurological disorders, including chronic pain, PD, and ET [1, 3, 4]. Each concluded that MEG gave researchers a valuable way of studying the mechanisms behind successful DBS treatment [1, 3, 4]. MEG could be used with DBS in both high- and low-frequency settings [1, 3, 4], and its millisecond-scale temporal resolution let researchers see detailed neurological changes as the DBS devices were switched on and off, with spatial resolution of ~5 mm³ [1, 3, 4]. The results of three studies are discussed here.

Since little was known about the neural mechanisms that alleviate pain in patients with chronic pain, a study was performed in 2006 to see whether MEG could image a patient's brain while the DBS implant was operating and while it was off [1]. The selected patient had phantom limb pain, which was being treated with low-frequency (7 Hz) stimulation [1]. The patient's brain was recorded with MEG while the device was on for ten minutes and then off for ten minutes, for a total of four cycles. He reported the pain increasing in each off cycle and diminishing during on cycles [1]. The researchers found that the periods of DBS did not affect the MEG imaging [1]. They also compared the brain activity recorded while the DBS device was off to previously taken fMRI scans and found them similar, showing that MEG was accurate for studying the effects of DBS at low frequencies [1].

The same research team later tested the hypothesis that the success of MEG imaging had depended on the low, 7 Hz frequency used in the previous experiment [1, 4]. They examined a patient using both 7 Hz and 180 Hz DBS stimulation for treatment of cluster headaches, assuming that the electromagnetic noise produced by the high-frequency stimulation might interfere with the MEG imaging [2]. With MEG they were able to accurately image the areas of the brain that earlier fMRI studies had reported activating as the DBS stimulation was switched on and off [2]. They did find activity in the periaqueductal grey (PAG), deep within the brain, and while this was consistent with fMRI studies of pain and pain relief, they believed the PAG measurements were not as locally accurate as fMRI imaging, owing to the limitations of MEG localization at depth [2]. Even so, the researchers concluded that MEG was still a reliable method of imaging the neurological impact of DBS during high-frequency stimulation [2].

The third study, published in 2013, examined whether MEG imaging would be useful for studying the motor tremors that Parkinson's patients suffer from and how DBS devices reduce them [3]. Prior to 2013, researchers had found that MEG imaging could be used to study patients on high-frequency DBS for PD [3]. The researchers successfully used MEG imaging to investigate the patients' motor tremors with the DBS device on and off [3].

All three of these studies, and several others not discussed here, reached the same conclusion: MEG imaging is an accurate way to study the functioning of the brain while a DBS device is on [3]. This is a powerful tool for researchers, since MEG is not as electronically disruptive as fMRI, can be performed safely while the device is on, and offers detailed temporal resolution with spatial resolution of ~5 mm³. The use of MEG as a research tool for DBS is still in its early phase; further studies and methodological improvements will be needed.

Outlook and relevance of work

MEG is essential to furthering the study of why deep brain stimulation works. Being able to resolve brain function temporally on the scale of milliseconds will give researchers insight into how the devices are working. The spatial resolution of ~5 mm³ is comparable to that of an fMRI scan. There is currently a lack of information about how DBS works, and this complicates its use as a reliable treatment. The ability to study the brain's response while DBS is occurring is the main advantage of MEG imaging, and it will help expand knowledge of the device's impact on neurological conditions.

MEG has the potential to be the best way to study DBS in the future, but improvements are needed. The inability to spatially localize activity deep within the brain can be improved as general knowledge of the deeper cortex gathered from PET and fMRI scans is applied to MEG localization algorithms. Broader study is also needed: each of the studies above examined a single patient, as a proof of concept. To learn how to properly implement MEG imaging as a method of studying DBS, large studies with many participants will be required. These will give researchers a better foundation for what to look for and what errors are occurring in their studies.

The necessary improvements to MEG imaging for DBS studies will be made; the potential for helping patients whose only option is DBS treatment is too great not to. To improve those treatments, however, doctors need a better understanding of how DBS works in the brain. Improved MEG techniques will allow this to be accomplished.




[1] Kringelbach ML, Jenkinson N, Green AL, Owen SL, Hansen PC, Cornelissen PL, et al. Deep brain stimulation for chronic pain investigated with magnetoencephalography. Neuroreport. 2007;18:223-8.

[2] Ray NJ, Kringelbach ML, Jenkinson N, Owen SLF, Davies P, Wang S, et al. Using magnetoencephalography to investigate brain activity during high frequency deep brain stimulation in a cluster headache patient. Biomedical Imaging and Intervention Journal. 2007;3:e25.

[3] Bajwa J, Connolly A, Johnson M. Magnetoencephalography of Deep Brain Stimulation in a Patient with ET/PD Syndrome (P06.089). Neurology. 2013;80:P06.089.

[4] Ray N, Kringelbach ML, Jenkinson N, Owen S, Davies P, Wang S, et al. Using magnetoencephalography to investigate brain activity during high frequency deep brain stimulation in a cluster headache patient. Biomedical imaging and intervention journal. 2007;3.

[5] Barnes G, Hillebrand A, Hirata M. Magnetoencephalogram. Scholarpedia. 2010;5:3172.

[6] Cohen D. Magnetoencephalography: evidence of magnetic fields produced by alpha-rhythm currents. Science. 1968;161:784-6.

[7] Cohen D. Magnetoencephalography: detection of the brain’s electrical activity with a superconducting magnetometer. Science. 1972;175:664-6.

[8] Kringelbach ML, Jenkinson N, Owen SL, Aziz TZ. Translational principles of deep brain stimulation. Nature Reviews Neuroscience. 2007;8:623-35.


From the Voltage Clamp to the Patch-Clamp

By Tyler Shewbert


Alan L. Hodgkin and Andrew F. Huxley wrote a series of five papers in 1952 in which they developed an electrical model of the action potential in the membrane of the squid axon. This was the first quantitative model describing the electrical workings of nerve cells [1]. Their experimental technique was the voltage clamp, which Hodgkin improved by eliminating differences in membrane potential, allowing measurement of the ionic current flowing in and out of the cell [1, 2]. The success of the H-H model led to the development of the patch-clamp method by Bert Sakmann and Erwin Neher in the 1970s [1]. The patch-clamp method has revolutionized the study of ionic currents in cell membranes because it allows accurate measurements of small, excitable and nonexcitable cells, and of the currents in single ion channels. Sakmann and Neher's success, however, was built upon the H-H model and the voltage clamp method, showing the importance of research that lays the foundation for major breakthroughs [1, 3, 4].


The voltage clamp method is thought to have been first used by Kenneth Cole and George Marmont of Woods Hole as a method for measuring currents in squid axons [1]. However, the breakthrough use of the voltage clamp came from Hodgkin and Huxley. Previous experiments had suffered from electrode polarization, which they overcame by using two electrodes, creating the same potential across the squid membrane. Hodgkin and Huxley could then accurately measure the ionic currents flowing in and out of the membrane [1, 2]. This enabled them to develop a mathematical model for current flow through the membrane, which became the basis for future electrophysiological research.
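To make the model concrete, the following is a minimal numerical sketch of the Hodgkin-Huxley membrane equations in their standard textbook form (Euler integration with the commonly tabulated squid-axon parameters; this is an illustration, not the authors' original computation, which was done by hand and mechanical calculator):

```python
import math

# Standard Hodgkin-Huxley parameters for the squid giant axon
C_M = 1.0                              # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Voltage-dependent opening/closing rates (1/ms) of the gating variables
def a_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
def b_n(v): return 0.125 * math.exp(-(v + 65) / 80)
def a_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
def b_m(v): return 4.0 * math.exp(-(v + 65) / 18)
def a_h(v): return 0.07 * math.exp(-(v + 65) / 20)
def b_h(v): return 1.0 / (1 + math.exp(-(v + 35) / 10))

def simulate(i_stim=10.0, t_max=50.0, dt=0.01):
    """Euler-integrate the H-H equations; i_stim (uA/cm^2) applied from t = 5 ms."""
    v = -65.0
    # start the gating variables at their resting steady-state values
    n = a_n(v) / (a_n(v) + b_n(v))
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    trace = []
    for step in range(int(t_max / dt)):
        i_ext = i_stim if step * dt >= 5.0 else 0.0
        # ionic currents through sodium, potassium, and leak pathways
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")  # spikes well above 0 mV
```

With a sustained stimulus the membrane fires action potentials that rise from about -65 mV to past 0 mV, which is the behavior Hodgkin and Huxley reproduced quantitatively from their voltage clamp data.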

The voltage clamp method did not allow for the measurement of individual ion channels within the membrane, or of smaller cells. The patch-clamp method that Bert Sakmann and Erwin Neher developed in the 1970s allowed the currents of individual ion channels to be measured, even in small cells, including mammalian cells [1, 3, 4]. The patch-clamp technique has been improved since the 1970s, allowing researchers to improve the accuracy of their current measurements and examine single channels within most cell types [3, 4]. This technique has been a boon to electrophysiological research ever since.

Results and Discussion

            The key to Hodgkin and Huxley's success in the 1952 papers was the adjustment they made to the voltage clamp method that enabled the membrane of the squid axon to be kept at the same potential, so that accurate measurements of the current flowing through the membrane could be recorded [1, 2]. There were limitations to the voltage clamp technique. The currents of individual ion channels in the membrane could not be measured [1, 5]. The accuracy was affected by signal noise [1]. The method could only be used on nerve cells large enough to attach the measurement pipettes to, hence the use of the squid axon [1, 5]. Even with these limitations, Hodgkin and Huxley developed their mathematical model of the action potential in nerve membranes with remarkable accuracy, and it still serves as a basis for modern studies.

Bert Sakmann and Erwin Neher began developing the patch-clamp method in the 1970s [5]. This technique revolutionized the study of the action potential and of ionic currents. The main contributions of the patch-clamp method were its improved signal-to-noise ratio, its ability to measure the currents flowing through single ion channels, and its ability to measure the ion channels of smaller cells, including mammalian cells [1, 3, 4].

The patch-clamp method has its roots in the voltage clamp method used by Hodgkin and Huxley. Instead of using two electrodes to overcome the polarization of the membrane, Sakmann and Neher used small, heat-polished pipettes with tip openings of 0.5-1.0 μm, which were filled with a saline solution and electrically sealed to the membrane of the cell by applying slight suction to the pipette [4]. Sakmann and Neher had transistors available to amplify the measured current, while Hodgkin and Huxley had only vacuum tubes [4]. Sakmann and Neher found that with this technique they could achieve an electrical seal of around 50 MΩ, which allowed high-resolution current measurements of single ion channels [3, 4]. However, they found that while this enabled accurate measurements of the ion channels of mammalian and other smaller cells, there was noise from the saline bath and pipette, and the current measured at the pipette differed from the current through the membrane patch [3-5]. A basic overview of the patch-clamp method can be seen in Figure 1.

Figure 1: An overview of the basic concept of the patch-clamp technique (from [5]).

In a 1981 paper, Hamill, Neher, Sakmann, and Sigworth presented an "improved patch-clamp technique" [3]. They described a method that allowed the electrical seal between the pipette and membrane to reach resistances in the gigaohm range [3]. This was accomplished by taking extra precautions to keep the pipette surface clean and by applying suction to the pipette interior [3]. As the resistance of the electrical seal increases, the noise is reduced, allowing for improved resolution in recording the current [3-5]. They reported that they were able to obtain gigaohm seals on almost all of the cell types they tried [3].
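The link between seal resistance and noise can be illustrated with the thermal (Johnson) current noise of the seal resistance, which scales as 1/√R. A rough sketch, with an assumed recording bandwidth and room temperature (the numbers are illustrative, not taken from the papers):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_current_noise(r_seal, bandwidth=5e3, temp=295.0):
    """RMS Johnson current noise (A) contributed by a seal resistance."""
    return math.sqrt(4 * K_B * temp * bandwidth / r_seal)

# 50 MOhm seal (original technique) vs. a 10 GOhm "gigaseal"
for r in (50e6, 10e9):
    print(f"R = {r:.0e} Ohm -> {thermal_current_noise(r) * 1e12:.2f} pA rms")
```

Over a 5 kHz bandwidth a 50 MΩ seal contributes roughly 1.3 pA rms of noise, on the order of a single-channel current itself, while a 10 GΩ seal contributes under 0.1 pA, which is why the gigaseal made clean single-channel recordings practical.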

This order of magnitude improvement from the original technique has had profound impacts on the study of electrophysiology. The patch-clamp method has enabled researchers in neuroscience to examine the ion channels within nerve cells [5]. In the past twenty years, the patch-clamp method has been used in a “variety of excitable and nonexcitable cell types, ranging from neurons to lymphocytes”, therefore expanding its use outside of the realm of neuroscience [6].

Since Hodgkin and Huxley first measured the action potential in the squid axon, their mathematical model has held. This was revolutionary, since it finally proved the hypothesis Galvani had proposed 150 years before: that there was some sort of electricity within animals. Once Hodgkin and Huxley had developed a mathematical foundation, other methods such as the patch-clamp could be developed. Hodgkin and Huxley did the best they could with the resources they had. The current measured from the membrane needed to be amplified, but the transistor was not yet in common use, so they were working with vacuum tubes [1]. For Sakmann and Neher, the understanding of the voltage clamp method and the H-H model, coupled with advances in amplification technology, allowed them to break through the restrictions that Hodgkin and Huxley faced. By developing the patch-clamp method, Sakmann and Neher opened electrophysiology to cell types of all sizes, with improved resolution [5]. Hodgkin and Huxley laid the groundwork for Sakmann and Neher's breakthrough, which has contributed to electrophysiology research over the last forty years.


Outlook and relevance of work

            The research performed by both teams won the Nobel Prize in Physiology or Medicine: Hodgkin and Huxley in 1963, and Sakmann and Neher in 1991 [1]. This recognition is well deserved. The H-H model has stood up to testing in the six decades since its origination [1]. Hodgkin and Huxley finally formalized ideas that had been put forth by Galvani 150 years before. Their improvement of the voltage clamp method was essential to the development of the field of electrophysiology. If they had not been able to create an isopotential membrane, their experiments would not have been successful [1, 2, 6]. The reliability of the experimental methods they presented and of their mathematical model enabled later researchers to build on their discovery, culminating in the patch-clamp method, which has revolutionized electrophysiology research since the 1970s [5]. The ability to study the ion channels of nonexcitable cells, as well as individual channels within neurons and other excitable cells, has been a boon to researchers [5, 6].

The path from Galvani's famous frog leg experiments to modern research using the patch-clamp is a testament to the resolve of science as an institution. Over 200 years passed between Galvani's initial experiments, simply involving the electrical stimulation of frog nerves, and the ability to measure the individual ionic currents within those nerves. Research in an overarching field such as electrophysiology is not a fast process. The lesson to be learned from its success is that solid foundational work is necessary for future improvements and successes in the field. Without the work of researchers prior to Hodgkin and Huxley, such as Cole and Marmont, the revolutionary isopotential membrane created by a dual-electrode voltage clamp would not have happened. The patch-clamp was in turn built upon the earlier work of Hodgkin and Huxley, and this method has allowed electrophysiology researchers to expand into many different cell types.





[1] Schwiening CJ. A brief historical perspective: Hodgkin and Huxley. The Journal of Physiology. 2012;590:2571-5.

[2] Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of physiology. 1952;117:500.

[3] Hamill OP, Marty A, Neher E, Sakmann B, Sigworth F. Improved patch-clamp techniques for high-resolution current recording from cells and cell-free membrane patches. Pflügers Archiv European journal of physiology. 1981;391:85-100.

[4] Sakmann B, Neher E. Patch-clamp techniques for studying ionic channels in excitable membranes. Annual review of physiology. 1984;46:455-72.

[5] Veitinger S. The Patch-Clamp Technique: An Introduction. Science Lab by Leica Microsystems; 2011.

[6] Cuevas J. Electrophysiological Recording Techniques.  xPharm: The Comprehensive Pharmacology Reference. New York: Elsevier; 2007. p. 1-7.


The Ghosts of Hill 88

The top of Hill 88 in the Marin Headlands

By Tyler Shewbert

North of San Francisco is the Marin Headlands. Now part of the National Park Service, this area north of the Golden Gate once belonged to the U.S. military. As one travels around the area, the remnants of its military past can be seen. I have spent a good amount of time exploring this area, but my hike up Hill 88 on June 28th, 2016 had probably the most profound impact of any of my journeys here.

I started at Rodeo Beach, with the intention of just walking up a hill because it was there. I did not have any idea what I would find. As I climbed, I passed through the history of the area. First was Battery Townsley, which guarded the Golden Gate until the end of World War Two. This was a mere three-quarter-mile hike, and not that high above the beach. I continued hiking, spying a hill in the distance that appeared to be the highest in the immediate area, and therefore the one I would climb.

As I approached the midpoint between Battery Townsley and that hill, I found the concrete remnants of what appeared to be more fortifications. These still had a World War Two feel about them, and from this place I could see my eventual goal even better. It was surrounded by a fence, so I was not sure I would be able to make it all the way to the top, except that I had seen people coming down from there.

Hill 88 from a distance

I continued, finally arriving at the top. I was greeted by the guard house.

The guard house

I was intrigued by what I found. The top of this hill, 1053 feet above sea level, had been flattened and there was what appeared to be some sort of former military installation.

The stands for the radar domes

The site was covered in graffiti. The concrete design screamed Cold War to me. This was not a World War Two facility. Upon walking around, I found the old helicopter pad, which solidified my reasoning that this was a Cold War facility and not part of the batteries from WWII.

Helicopter pad

I continued to explore. I found some ravens who were enjoying the amazing weather.

Ravens enjoying the view

The view from the top was amazing. The wind was light. In the East Bay, the temperatures were nearing triple-digits, but at the top of this hill it was nice and cool, with a light breeze.

Facing San Francisco

Looking towards the exit of the Golden Gate

Looking towards the Financial District and the East Bay

The site had an ominous vibe to it. I found a place to sit and eat my lunch, and having great cell phone service, I proceeded to look up what this place was. I found this site. It said that Hill 88 had been the site of the radar control station for the Nike missile base that existed in the area during the Cold War.

Having visited a Nike missile site across the valley a few years before, I understood their purpose. The Nike site SF-88 in the Marin Headlands used Nike Hercules missiles. These had a range of 87 miles and could carry either a 20-kiloton nuclear warhead or a conventional payload. At sites in the United States, the payload was almost always nuclear. Fitted with such a payload, a missile could theoretically be launched to destroy several high-altitude bombers or missiles inbound to a target. There were over 145 such sites in the United States until they began to be phased out during the 1970s.

For me, this was a reminder of a frightful time before I was born that my parents had spoken of. This was the time of duck-and-cover drills and nuclear brinksmanship. I am grateful that the United States and Soviet Union somehow managed to wade through this tenuous time without destroying one another, and a good chunk of the planet with them. I hope that those lessons are not forgotten by my generation and that nuclear disarmament continues.

Hill 88 in operation

Nukes and Floods

By Tyler Shewbert

Since the end of the Second World War, international institutions such as the United Nations, the World Health Organization and the International Monetary Fund have proliferated. These institutions serve various purposes, but their greatest success has been facilitating diplomatic relations between global powers, which has prevented nuclear conflict. While there has been nonstop warfare since the end of World War Two, no nuclear weapons have been used since the United States dropped the bombs on Hiroshima and Nagasaki. Preventing a nuclear catastrophe has been the greatest success of increased international cooperation, even in times when proxy conflicts between the great powers have occurred.

The era of nuclear warfare has forever changed how wars are fought. Gone are the days when U.S. generals like LeMay and MacArthur advocated for the limited use of nuclear weapons against an enemy. Now, the belief among the military class in nuclear-armed states is that the use of nuclear weapons, particularly among the well-armed states of the U.S., Russia, and China, would spark a disaster that would propel the planet, including their own nations, into many years of misery and chaos. This is something no responsible leadership class would want to be remembered for.

In this era of anti-globalization, it is important to remember the value of the international bodies set up in the years that followed WWII. The U.S. and Soviet Union fought proxy wars, but never did these conflicts escalate into nuclear strikes. This can partly be attributed to reasonable diplomatic classes on both sides that understood the consequences if they failed. Institutions such as the U.N. have allowed these parties to resolve issues at the Security Council table rather than on the battlefield. This has not always succeeded, but these diplomatic solutions have prevented nuclear conflict, which is itself a great success.

Other global bodies have facilitated economic growth across the planet. Sometimes this has come in the form of direct aid, but often it has come through trade deals that have increased cross-border trade. This is essential. International trade ties countries together economically, which in turn makes them reliant on other nations for their success. When countries cooperate economically, they are less likely to go to war with each other.

It can be argued that President Nixon's visit to China was one of the definitive diplomatic overtures of the past fifty years. It enabled the Chinese leadership to eventually begin economic reforms under Deng Xiaoping in 1978; they knew the West would be open to conducting business with them. The increased trade between the U.S. and China is partly responsible for the lack of armed conflict between the two countries, along with consistent diplomatic relations.

However, for factory workers in the industrialized nations of Europe and the U.S., this increased trade has led to economic instability due to the transfer of production to China and other developing nations. This economic instability is not solely the result of off-shoring production. A decrease in union membership in the United States and the increase in automation have also played significant roles in creating economic hardships for production workers.

These hardships are a reality, and they must be properly dealt with. For many years this reality was ignored by politicians, and now it has manifested as resentment against globalization, which threatens the relative global peace that has existed for the past seventy years. The continued success and progress of the human race depends on this stability, facilitated by global institutions and trade. This means that politicians in the West must recognize the plight of those who feel threatened by globalization and believe that tearing down these institutions is the only solution. The Brexit vote and the election of Donald Trump are pleas for help from a class of citizens who feel disoriented in a globalized world, and if they are ignored it will mean continued attacks on the global order that has lifted great numbers of people out of the depths of poverty.

This same anti-globalization attitude also threatens the institutions that maintain the strong diplomatic relations preventing the disaster of nuclear warfare. These institutions must also be kept intact to mitigate climate change damages over the next century and to prevent wars caused by the population displacement that is likely to occur.

For the sake of the continued progress of humanity, the anti-globalists must have their grievances heard. If these people prosper economically, there will be one less reason for them to attack the idea of globalization. However, if these grievances are ignored and the international order begins to break down, we will all face an increased risk of nuclear conflict and will be unable to deal with the displacement and disasters that climate change will cause.

My Nuclear Fantasy

ITER: the world’s largest tokamak (courtesy ITER)
Tyler Shewbert

I have been a proponent of nuclear power, both fission and fusion, since I was very young. I became fascinated with nuclear energy’s potential around age nine, when I began to read about physics. Science fiction was the medium that piqued my interest in these subjects. My parents came of age in the 1950s and 60s, and therefore had a mixed view of nuclear energy. They had their concerns, as many people did and still do, about its potential. However, they always allowed me to explore topics independently and develop my own opinions. Within a few years, after reading many of the arguments for and against the use of fission power, my mind was set that this was the energy source that could change human civilization. I accepted that the technical problems with breakeven fusion energy might make it unattainable, but as an optimist I hoped that it would be successful and that it could revolutionize the world.

Through my teen years and early twenties this idea cemented, but was rarely discussed. I diverged into other interests and rarely looked again at nuclear energy. In the back of my mind, the necessity of providing many terawatt-hours of power for the planet’s energy needs was always there, and eventually, in my later twenties, this brought me back to nuclear energy.

After Fukushima, the growth that the nuclear energy sector had been seeing globally slowed dramatically. This disappointed me. I had thought the swing in opinion back in favor of nuclear energy was permanent, but it took only one incident to drastically alter those opinions. Plants were shut down in Japan, Germany and many other countries. The anti-nuclear movement caught its breath again against the rising tide of pro-nuclear environmentalists. Once again, my nuclear fantasy was put on hold.

I have envisioned a world where, by using nuclear energy, air pollution is greatly reduced. Without the need for fossil fuels as energy sources, the air would begin to clear. Nuclear sources mixed with solar, wind, hydro and other sources would create an energy boom that would lift the developing world out of poverty. As the air cleared and poverty was reduced, the Earth would become a calmer place.

I know this is a fantasy. Fission produces waste. This can be managed, and as new technologies such as the Waste Annihilating Molten Salt Reactor develop, the waste issue can be dealt with even more effectively. The cost of environmental damage from a meltdown can be catastrophic to a nation. Meltdowns are rare, but with each new reactor the probability of an incident increases. The most significant risk in developing nuclear energy infrastructure is the chance it will be used to develop weapons. And the economics of fission energy are not yet practical for developing nations.

I will still defend fission. I have come to terms with its downsides and understand that these are problems which can either be solved or mitigated. I know that it is necessary to include fission in the energy mix to reduce climate change. It is immoral to ask the people of developing countries not to use energy on the scale the developed countries do. Providing billions of people with carbon-free energy will allow economies to grow and people to come out of poverty and live richer lives. For this to happen, nuclear energy must grow.

Fusion is another topic altogether. It is always called the technology that is “twenty years away.” However, there is good news coming out of the organizations researching fusion. Even if we achieve the coveted breakeven power production, it will still take time to make fusion energy production economical, particularly for the impoverished nations around the world that are in dire need of energy. Yet this is a goal worth striving for, and I will gladly spend my lifetime working towards it, passing the baton to the next generation, which might finally usher in the era of fusion power. With that, I believe everything will change.

This is mostly speculative. I know there is no magic bullet for solving the world’s energy and climate issues. It will take a mixture of solutions and a level of international cooperation that has not been seen in human history. These are the great tasks of the next hundred years. With a damaged climate, civilization will rip apart. Without developing nations providing energy to their populations, global inequality in incomes and standards of living will tear the world apart. I am an optimist, though. I know that humanity is capable of both great terror and beautiful progress, but history seems to tell us that progress typically wins out over terror. I can only play my role in helping to find solutions to these problems.


The Future of Space is Nuclear

NEXIS ion thruster undergoing testing as part of Project Prometheus
Tyler Shewbert

Since the beginning of the Space Age, the relationship between space exploration and development on the one hand, and nuclear power as a source of propulsion, heating and electricity on the other, has been seen as symbiotic. Before Sputnik was launched in 1957, the development of nuclear thermal propulsion (NTP) in the NERVA/Rover programs had already been under way for two years. These programs continued until their cancellation in 1972. During nearly two decades of development, a solid foundation of knowledge was acquired about nuclear thermal rocket (NTR) technology. The program was cancelled for political, not technological, reasons.

Since the beginning of the United States space program, radioisotope thermoelectric generators (RTGs) have been used as sources of heat and power on missions ranging from Apollo to New Horizons. The farthest human-made object in space, Voyager 1, is powered by RTGs. The “Nuclear Power Assessment Study” released by the Johns Hopkins Applied Physics Laboratory in 2015 states that newer radioisotope power systems will continue to power humanity’s robotic exploration of the Solar System.
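RTGs draw their heat from the decay of plutonium-238, whose half-life is about 87.7 years; real electrical output falls somewhat faster because the thermocouples also degrade. A sketch of the decay contribution alone, with a normalized starting power (the starting value is illustrative, not a mission specification):

```python
# Radioactive decay of an RTG's Pu-238 heat source.
PU238_HALF_LIFE_YEARS = 87.7

def rtg_power(p0_watts, years):
    """Power remaining after `years` of flight, from exponential decay alone."""
    return p0_watts * 2 ** (-years / PU238_HALF_LIFE_YEARS)

# Fraction of the original power left after 40 years in space
print(f"{rtg_power(1.0, 40) * 100:.0f}% of launch power remains")
```

After four decades roughly three-quarters of the original decay heat is still available, which is how the Voyagers have kept transmitting since 1977.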

Inspection of Cassini spacecraft RTGs before launch

Nuclear systems provide more energy than either chemical or solar sources. Due to this increase in available power, many of the restrictions limiting the exploration and settlement of space can be overcome. The main advantages of space nuclear applications are smaller volume, reasonable mass, long operational lifetimes, independence from the Sun’s energy, the ability to deploy kilowatt- and megawatt-class power sources, and reliable operation.

Space is a harsh environment. For power needs closer to the Sun, solar power can provide much of the power needed for most current space applications. As we journey farther from the Earth, the available solar power declines. For most exploration beyond Earth, nuclear power sources become necessary. They provide the heat and electricity needed for instruments to function properly. For future missions, both human and robotic, to Mars and the outer planets, nuclear energy will be necessary to power and heat the science packages that will further human knowledge of our neighborhood in space.
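The decline of solar power with distance follows the inverse-square law. A short sketch, using the solar constant of roughly 1361 W/m² at Earth's distance:

```python
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU (Earth's mean distance)

def solar_flux(distance_au):
    """Solar irradiance at a given distance from the Sun, by the inverse-square law."""
    return SOLAR_CONSTANT / distance_au ** 2

for name, d in [("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2)]:
    print(f"{name:8s} {solar_flux(d):7.1f} W/m^2")
```

At Jupiter a solar panel receives under 4% of the sunlight available at Earth, which is why outer-planet missions have leaned on nuclear sources.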

For serious human exploration and the eventual economic development of space, both nuclear fission systems and nuclear propulsion will need to be developed. Nuclear fission plants will provide the electricity and heat needed to settle the Moon and Mars. Solar energy will complement both, but it is well documented that small nuclear reactors would give settlers an advantage that solar alone would not.

Nuclear energy sources would also be necessary for any large-scale local resource development. The power needs of any space mining operation could be met much more easily with nuclear energy. Such operations would rely almost entirely on nuclear energy, due to their heat requirements. In situ resource utilization (ISRU), the collection and processing of materials in space for human use, could be done with nuclear power on a large scale.

Sketch of nuclear thermal rocket

Nuclear propulsion methods, both nuclear thermal and nuclear electric, would allow for more efficient use of propellant. Nuclear thermal rockets, which have been studied at length by both the United States and the Soviet Union/Russia, heat a fluid, typically hydrogen, in a nuclear reactor and expand it out of a rocket nozzle to produce thrust. This leads to a higher specific impulse, almost double that of chemical propulsion. Specific impulse (usually abbreviated Isp) is a measure of the efficiency of a rocket. The result is reduced travel times, which would also allow any future explorers on Mars to stay longer on the surface. Many Mars mission designs have used nuclear thermal rockets as their preferred choice of propulsion. This was one of the main goals of the NERVA/Rover programs, and also one of the reasons they were cancelled. Solid-core nuclear thermal rockets have been well researched and ground tested. Liquid-core and gaseous-core engines would theoretically yield even higher specific impulses, opening the outer Solar System to human exploration and eventual settlement.
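The payoff of higher specific impulse can be made concrete with the Tsiolkovsky rocket equation, Δv = Isp · g₀ · ln(m₀/m_f): for the same propellant mass ratio, doubling Isp doubles the achievable Δv. A sketch using commonly quoted illustrative values (roughly 450 s for chemical hydrogen/oxygen engines, roughly 900 s for a solid-core NTR):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds, mass_ratio):
    """Tsiolkovsky rocket equation: delta-v from specific impulse and m0/mf."""
    return isp_seconds * G0 * math.log(mass_ratio)

# Same propellant fraction (mass ratio 3), chemical vs. solid-core nuclear thermal
for isp in (450, 900):
    print(f"Isp {isp} s -> {delta_v(isp, 3.0) / 1000:.1f} km/s")
```

For a fixed vehicle, that doubled Δv budget is what translates into shorter Mars transit times or heavier payloads.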

Where do we stand today? Since the cancellation of NERVA/Rover, there have been a few starts and stops in serious nuclear propulsion and fission power systems. Project Timberwind, part of the Strategic Defense Initiative, developed NTRs for defense purposes but was cancelled before ground testing began; even so, some advances in materials technology were made. Project Prometheus began in 2003 with the purpose of developing smaller fission reactors for space applications, as a team effort between NASA and the U.S. Navy. Rather than nuclear thermal propulsion, the reactors developed under Project Prometheus were to be used for nuclear electric propulsion (NEP), running ion engines. This was to culminate in the Jupiter Icy Moons Orbiter (JIMO), but both the project and the orbiter were cancelled in 2005. Recently, NASA’s Marshall Space Flight Center has been testing nuclear fuels for nuclear thermal propulsion for a human Mars mission.

For any significant human exploration and settlement of the Solar System to take place, fission power systems and both nuclear thermal and nuclear electric propulsion systems need to be researched, ground tested, space tested, and deployed into operation. These technologies need to be treated as a long-term space-infrastructure project.

NERVA/Rover engines were being developed not only for a possible Mars mission but also for a lunar shuttle. Some engines were designed to be turned on and off up to sixty times, allowing for such a shuttle. A similar set of goals needs to be established and studied today. Developing NTP designs with only the goal of getting to Mars is shortsighted; a more expansive set of goals should guide development. A cislunar nuclear shuttle would allow for the development of Moon settlements. Supplying any permanent Mars or Moon settlement would require large amounts of supplies to be sent until advanced ISRU was well established, and NTP could do this. With liquid- and gaseous-core engines, it would be possible to shorten travel times between Earth and Mars, and a functioning interplanetary economy could eventually develop. These engines would also open up the resources of the asteroid belt and the exploration and possible settlement of the outer planets’ moon systems. Without NTP, none of this is practical.

Fission power systems would allow settlements on Mars and the Moon to have more energy than solar alone could provide. This would lead to better resource development and utilization, and therefore the foundation of a self-sustaining space economy. Economies and settlements can only grow as much as their energy resources allow, and fission power would provide scalable energy systems with the excess energy needed for economic expansion. Just enough energy is not enough; there must be excess for any sort of successful economic development. Mining and processing asteroids would require large amounts of energy, particularly heat, which is much easier to supply using nuclear power systems.

There is already a large knowledge base for some of these technologies; however, it is spread across various Department of Energy and NASA programs. Research projects in these fields have unfortunately been cancelled time and time again, subject to the whims of politics. This has led to significant strides in technology development, only for programs to be shut down on the verge of the next step. Without this technology, no permanent human presence in space is possible. Until we take the development of nuclear space applications seriously, we will remain in low Earth orbit, and the only significant economic use of space will be satellites. Due to legal regulations, it is not practical at this time for private companies such as SpaceX and ULA to develop nuclear propulsion, so the onus is on government agencies. A framework similar to the ISS, ITER or CERN, spreading the cost among several developed nations, would make it cost-effective. It would also allow the project to continue if a backing country’s political climate changes and it no longer sees the effort as worthwhile.

The future of humanity’s presence in space depends on the long-term development of nuclear space systems for settlement and exploration. It is an undertaking that will not reap immediate rewards and needs to be treated as a long-term research and development project, similar to the quest for nuclear fusion, because the long-term benefits to humanity are immense. It is the destiny of humanity to explore and settle the Solar System, and that is only possible through nuclear technology.



Lessons from Galvani’s and Volta’s Competitive Spirit

Tyler Shewbert

Luigi Galvani’s experiments on frog legs, testing whether electricity was responsible for muscle contractions, were easily reproduced by scientists for decades after his initial work. This reliable reproducibility allowed other scientists, including Alessandro Volta, to derive their own hypotheses about what was causing the contractions. While the theoretical framework to explain the contractions had not yet been developed, the results sparked the development of electrophysiology and contributed to Volta’s invention of the battery. In recent decades the biomedical field has had a solid theoretical framework for designing experiments, yet those experiments have low rates of reproducibility. This causes economic damage by increasing failure rates in late-stage drug trials. Scientific progress relies on discarding failed hypotheses and on experiments that others can reproduce to confirm or refute them. Modern science can still learn from the competition and mistrust between Volta and Galvani.


In the late 18th century Luigi Galvani began experimenting with frog legs. His preparation used the lower half of a frog, severed from the body with the nerves exposed, to explore the effects of electricity on muscle movement. Initially he used external sources of electricity such as Leyden jars, which induced contractions in the leg muscles. He then investigated atmospheric electricity, found it had little effect, and concluded that the muscle contained some sort of intrinsic electricity [1]. A scientific contemporary, Alessandro Volta, contested this explanation, arguing that the contractions were instead caused by the metals used to connect the nerve to the muscle, and that the muscle was simply reacting to the electricity generated by the metals. Both men pursued further experiments in animal electricity to support their own theories [2]. Galvani produced a contraction by connecting the nerves from the two legs together [1]. Volta countered that he could produce electricity by joining silver and zinc, argued that metals were responsible for the contractions, and eventually developed the electric battery [1]. Two major breakthroughs came out of Galvani’s experiments: the field of electrophysiology and Volta’s battery [2]. Galvani’s method was simple enough to be reproduced by other scientists; Eusebio Valli, among others, obtained the same results [2]. The lack of acceptance of Galvani’s hypothesis of animal electricity was due to Volta’s success with the electric battery and the lack of a theoretical framework that could explain the results.

In recent decades, the inability to replicate results has begun to plague the biomedical field, particularly preclinical research [3]. The ability to attain “robust, reproducible results” is essential for directing further research [4, 5]. Often the original researcher of a published finding is unable to attain the same results a second time [4]. This has been attributed to a desire for “flashy results” that “ignore the lack of scientific rigor” [4]. Scientific results need to be efficient, and if the majority of preclinical research is not reproducible, the results are inefficient [4]. Galvani’s methods produced reproducible results that allowed science to progress without a strong theoretical framework. Today there is a strong theoretical foundation for biomedical research, but the experimental framework is failing to provide effective, efficient methods for designing experiments that yield reproducible results.

Results and Discussion

As reported by Begley and Ellis in Nature, clinical trials in oncology have the highest rate of failure compared with other areas [5]. They attribute the high failure rate not only to the difficulty of treating cancer but also to the “quality of published preclinical data”. The effectiveness of drug development relies heavily on the available literature [5]. The problem is that the results of preclinical studies are taken at face value, and this causes problems later in clinical trials. Amgen examined fifty-three papers considered “landmark” studies and found that the results could be scientifically confirmed in only 11% of cases [4, 5]. This has a negative economic as well as scientific impact. When preclinical studies with a reproducibility rate below 50% are used for drug development, clinical trials fail [3–5]. This contributed to a decrease in the success rate of Phase II clinical trials from 28% to 18% over the years 2008–2010 [3].
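A quick back-of-envelope pass over the figures cited above; the counts come from the cited studies, and the arithmetic here is only an illustration:

```python
# Amgen's survey: only ~11% of 53 "landmark" papers could be confirmed.
landmark_papers = 53
confirmed = round(landmark_papers * 0.11)
print(f"Confirmed: ~{confirmed} of {landmark_papers} landmark studies")

# Phase II success fell from 28% to 18% over 2008-2010.
before, after = 0.28, 0.18
print(f"Relative fall in Phase II success: {(before - after) / before:.0%}")
```

In other words, roughly six of the fifty-three landmark studies held up, and Phase II success dropped by about a third in relative terms.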

Contrast this with Galvani’s work. Volta was able to replicate Galvani’s experiment, and by doing so was able to develop the hypotheses that eventually led him to the battery [1, 2, 6]. If Volta had been forced to question Galvani’s methods because he could not derive the same result, he would have discarded Galvani’s experiments and might never have explored the electric relationship between zinc and silver that led to the battery [1]. Galvani himself was able to push his experiments further because of the consistency of his work, which eventually led him to fairly accurate conclusions about the conduction of electricity within animals [1, 2, 6]. He hypothesized that the electricity was conducted by means of a watery interior with an oily exterior, strikingly close to the model later developed by Hodgkin and Huxley [2]. If Galvani’s experiments had not produced consistent results, they would not have been taken seriously by him or his contemporaries. These consistent results allowed Volta to develop an end product, the electric battery, that had significant economic value, and allowed Galvani to suggest there was an “intrinsic” electricity in animals.

Biomedical research is driven in part by the ability to produce tangible economic results. As failure rates of clinical trials increase, the research community could learn from the competition and practices of Galvani and Volta. There was a fundamental mistrust between them that drove Volta to check both Galvani’s experiments and his theory of animal electricity. Begley and Ioannidis concluded that “science operates under the trust me model that is no longer considered appropriate in corporate life nor in government” [4]. They state that it would be erroneous to endorse the current state of research, which is “producing results the majority of which cannot be substantiated”. To rectify this, they suggest “rethink[ing] methods and standardization of research practices” so that the incentives no longer favor studies that have flash and gain headlines but offer little substance for further research or economic benefit [4].

The research community would benefit from standards and practices that produce results others can readily verify, encouraging researchers to build further hypotheses on a solid foundation of reliable data. From Galvani and Volta’s competition, two things can be learned that apply to today’s environment. The first is that a solid methodology rooted in reproducible results sparks further experimentation that itself yields solid results. The second is that a lack of trust among scientists has the benefit of sparking competition to design solid experiments to prove one’s own hypotheses.

Outlook and relevance of work

The reproducibility of Galvani’s research contributed to the development of electrophysiology and to Volta’s battery. The point of contention between Galvani and Volta was not whether Galvani’s methodology was sound, since Volta could achieve the same results; their problem was the lack of a sound theoretical framework to interpret those results, combined with a fundamentally healthy mistrust of each other. Because Galvani’s methods were sound and his results reproducible, science could develop further: the hypotheses the results generated demanded further experimentation and testing, eventually leading Volta to build the battery in defense of his hypothesis, and Galvani to connect the nerves of two frog legs in defense of his.

The biomedical field could benefit from a shift in thinking. Rather than releasing methods whose results researchers cannot reproduce even in their own labs, science would benefit from a reduction in results that cause public sensation and fit the theoretical framework but cannot be produced twice. The scientific community would benefit in the same way it did when Galvani and Volta were competing to defend their theories. If methods are sound and reproducible, other researchers have the opportunity to challenge the originator’s hypothesis and put forth their own explanations of the results. This would not slow progress; it would help the theoretical framework develop by ensuring that researchers’ claims have been properly validated.


[1] Piccolino M. Luigi Galvani and animal electricity: two centuries after the foundation of electrophysiology. Trends in neurosciences. 1997;20:443–8.

[2] Piccolino M. Animal electricity and the birth of electrophysiology: the legacy of Luigi Galvani. Brain Research Bulletin. 1998;46:381–407.

[3] Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712.

[4] Begley CG, Ioannidis JPA. Reproducibility in Science: Improving the Standard for Basic and Preclinical Research. Circulation Research. 2015;116:116–26.

[5] Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012;483:531–3.

[6] Piccolino M. Luigi Galvani’s path to animal electricity. Comptes rendus biologies. 2006;329:303–18.


In Defense of Radioisotope Powered Pacemakers

A Medtronic Pu-238 powered pacemaker
Tyler Shewbert

Starting in 1970, radioisotope-powered pacemakers were implanted in over 3,000 patients worldwide. These devices had longer-lived power supplies than battery-powered pacemakers, eliminating the need for battery-replacement surgeries [1, 2]. A thirty-one-year study of 139 patients performed by the Newark Beth Israel Medical Center showed that nuclear-powered pacemakers required fewer surgeries than a control group with lithium battery-powered devices [2]. The same study showed that cancer rates for patients with nuclear-powered pacemakers were similar to those of the control group [2]; the feared increase in cancer rates never materialized [1, 2]. With improvements in modern electronics and in semiconductor energy-conversion efficiency, radioisotope pacemakers could once again reliably pace patients whose life expectancies exceed twenty years, reducing the need for invasive battery-replacement surgeries. A new generation should be developed.


A radioisotope power source takes the heat from the decay of a radioactive substance and converts it to electricity using either the thermoelectric or the thermionic effect [1–4]. Pacemakers of the 1960s had short battery lives, ranging from twelve to eighteen months [1–4], and every battery replacement required surgery. Radioisotope power sources were proposed because their longer lifetimes would mean fewer operations. The Atomic Energy Commission set a guideline of 90% device reliability over ten years [2]. Several manufacturers developed nuclear-powered pacemakers using either thermoelectric or thermionic power-conversion systems [1–4]. Two isotopes, Pu-238 and Pm-147, were chosen as heat sources [1–4]; the amount of radioactive material in each device ranged from 0.105 to 0.40 grams [3]. Most of the devices used Pu-238 because of its 87.7-year half-life [3, 4]. In earlier experiments, Pu-238 capsules of 30–50 grams had been implanted in dogs to test for carcinogenic effects, and the dogs showed no significant difference from a control group [3, 4]. Some members of the medical community did not trust those experiments and believed prolonged exposure to implanted radioactive material would cause cancer [5]. It did not: in a long-term study comparing two groups, one with nuclear-powered pacemakers and one with battery-powered pacemakers, there was no statistically significant difference in cancer rates over thirty-one years [2]. The major drawbacks of nuclear-powered pacemakers were the availability of nuclear fuel, their excessive size compared with other pacemakers of the period, and the stringent FDA and NRC regulations relative to non-nuclear devices [1]. Devices built with modern electronics would be smaller. However, the strain on Pu-238 sourcing would be significant, since neither the United States nor Russia is currently producing it for commercial use; other isotopes, such as tritium, are widely available and would not require as much regulation [6].
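The case for a long-half-life isotope follows directly from exponential decay. The sketch below uses the half-lives quoted in this essay (87.7 years for Pu-238, about 12.5 years for tritium) and ignores conversion-efficiency losses:

```python
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioisotope's initial thermal power left after t years."""
    return 0.5 ** (t_years / half_life_years)

# Compare the two isotopes over a 15-year pacing lifetime.
for isotope, half_life in [("Pu-238", 87.7), ("tritium", 12.5)]:
    frac = remaining_fraction(15, half_life)
    print(f"{isotope}: {frac:.0%} of initial power after 15 years")
```

Pu-238 retains nearly 90% of its output after fifteen years, while tritium drops below half, which is why a tritium device needs either margin or a replacement strategy.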

Results and Discussion

The original motivation for nuclear-powered pacemakers was the need for a power source that lasted longer than twelve to eighteen months [1–4]. That is less of a concern today: with new lithium batteries, modern pacemakers last around ten years [7]. While this is an improvement, an otherwise healthy individual in their early forties might still face as many as four battery-replacement surgeries in their lifetime. This is where a new generation of nuclear-powered devices would be useful, enabling a long-term patient to undergo fewer surgeries.
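The surgery count above can be sketched with trivial arithmetic, assuming a ten-year battery and that the final generator need not be replaced:

```python
import math

def replacement_surgeries(years_paced: float, battery_life_years: float = 10.0) -> int:
    """Battery-replacement surgeries over a pacing lifetime (initial implant excluded)."""
    return max(0, math.ceil(years_paced / battery_life_years) - 1)

# A patient implanted in their early forties and paced for ~45 years
# faces four replacements, matching the estimate above.
print(replacement_surgeries(45))
```

A power source outlasting the patient's pacing lifetime would drop this count to zero.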

The results of three decades of patient tracking show that nuclear-powered devices worked well. The study by the team at Newark Beth Israel Medical Center is revealing. Over fourteen years they implanted devices in, and tracked the progress of, 132 patients [1, 2]. Twelve of these patients needed surgery because their devices required mode changes and could not be reprogrammed remotely [1, 2]. Power failure occurred in only one case [1, 2]. Fifteen units were removed because of component malfunctions and eight because the high pacing threshold had been exceeded [1, 2]. After fifteen years, the survival rate was 99% for the power systems and 82% for the entire pacing system [1, 2]. The malignancy rate was similar to that of the normal population, and tumors were not concentrated around the pacemaker as had been feared but were randomly distributed, as in a normal population [1, 2].

A few conclusions can be drawn from the Newark Beth Israel Medical Center study. First, radioisotope sources are a reliable way to power pacemakers: the failure rate over fifteen years was less than 1%, better than the regulatory guideline of no more than 10% failures over ten years. Second, the increased cancer rates feared by Hart, the FDA, and the NRC never materialized; exposure to low-level chronic radiation was not a concern. Third, patients’ radiation exposure was well within the limits the NRC sets for workers at nuclear sites [1–4]. According to E. W. Webster, as cited in Parsonnet’s 2006 paper, a fluoroscopically guided replacement of a battery-powered pacemaker would expose the patient to 1.6 times as much radiation as 15 years of pacing with a Pu-238 device [2]. As of May 2004, twelve of the 139 patients were still being followed, and one patient still had the original pacemaker thirty-one years later [2]. Two other major studies reached the same conclusions regarding safety and reliability, so the Newark Beth Israel Medical Center study can be considered a good survey of the field [2]. Nuclear-powered pacemakers were successful.

The major drawbacks of the technology were the larger size of nuclear-powered pacemakers compared with battery-powered devices of the era, the surgeries required to replace pacemakers because of pacing problems, and the regulations and risks involved in handling nuclear fuel [1]. Improvements in pacemaker technology would significantly reduce the first two drawbacks: pacemaker electronics are far more advanced than in the 1970s. The increased cancer rates that Hiram Hart and others feared never materialized and should not be a concern when considering whether to revisit this technology.

There is also a modern solution to the drawback of nuclear-fuel handling: betavoltaic devices, which have improved substantially over the past forty years and were originally considered as a pacemaker power source in the 1970s [3, 4]. Betavoltaic power sources generate current from β-particle decay, as opposed to the α decay used in Pu-238 devices. Early semiconductor devices were poorly suited to converting β-decay energy, which was a major reason Pu-238 was chosen. Semiconductor energy-conversion technology has improved since the 1970s, however, and there is renewed interest in using betavoltaics for long-term, low-power needs [8]. While early betavoltaic pacemakers used promethium as the isotope, more efficient energy conversion has enabled betavoltaic devices powered by tritium, an isotope of hydrogen, and other less radioactive substances [8]. Using less radioactive isotopes could allow a lighter regulatory framework than the NRC and FDA imposed on nuclear-powered pacemakers, reducing the costs of production and, ultimately, of disposal, both of which were major expenses because of the handling requirements of Pu-238 and promethium.

Outlook and relevance of work

Nuclear-powered pacemakers have a successful history of providing long-term, reliable power with side effects similar to those of battery-powered devices. With improvements in modern pacemaker electronics and semiconductor energy-conversion technology, the time has come to revisit radioisotope power sources for pacemakers, particularly for patients expected to survive for multiple decades.

While using Pu-238 would be a hassle given the lack of supply and the stringent regulations, developing betavoltaic-powered pacemakers would be a logical course. Tritium can be produced in quantity in nuclear reactors, so the main issue would be keeping the tritium package to the size of a pacemaker battery. Because tritium’s half-life is only about 12.5 years, two packages of tritium could be used in sequence to extend the device lifetime to twenty-five years [8]. If less radioactive isotopes can be used in modern nuclear-powered pacemakers, the regulations could be revisited, simplifying the process and reducing regulatory costs. Advances in pacemaker electronics also mean fewer devices would need replacement for the pacing and lead problems that afflicted those implanted in the 1970s and 1980s.
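The two-package scheme can be checked with the same decay arithmetic. This sketch assumes each package serves exactly one half-life (about 12.5 years), so its output never drops below half of its initial value while in service:

```python
HALF_LIFE_YEARS = 12.5           # tritium half-life, per the text
SERVICE_YEARS = HALF_LIFE_YEARS  # each package serves one half-life

# Power remaining at the end of a package's service window.
end_fraction = 0.5 ** (SERVICE_YEARS / HALF_LIFE_YEARS)
print(f"Output at end of service: {end_fraction:.0%} of initial")

# Two packages in sequence give the twenty-five-year lifetime cited above.
print(f"Combined lifetime: {2 * SERVICE_YEARS:.0f} years")
```

The device electronics would therefore need to operate down to half the initial power level, a margin the designer has to budget for.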

Developing long-term pacemaker power solutions using radioisotopes would once again allow patients, particularly those who may have a pacemaker for forty years or more, to undergo surgery less often. Over multiple decades this could reduce the overall cost of the pacemaker, since a replacement surgery would no longer be required every ten years or so. The fear that the radiation would cause cancer was proven unfounded by the first wave of nuclear-powered pacemakers, and a new generation of devices could use improvements in energy-conversion and pacemaker technology to be more efficient and reliable than the first.


[1] Parsonnet V, Berstein AD, Perry GY. The nuclear pacemaker: Is renewed interest warranted? The American journal of cardiology. 1990;66:837–42.

[2] Parsonnet V, Driller J, Cook D, Rizvi SA. Thirty‐One Years of Clinical Experience with “Nuclear‐Powered” Pacemakers. Pacing and clinical electrophysiology. 2006;29:195–200.

[3] Huffman FN, Norman JC. Nuclear-fueled cardiac pacemakers. Chest. 1974;65:667–72.

[4] Norman JC, Sandberg Jr GW, Huffman FN. Implantable nuclear-powered cardiac pacemakers. New England Journal of Medicine. 1970;283:1203–6.

[5] Hart H. Nuclear-Powered Pacemakers. Pacing and Clinical Electrophysiology. 1979;2:374–6.

[6] World Nuclear Association. Plutonium. 2017.

[7] Mallela VS, Ilankumaran V, Rao N. Trends in Cardiac Pacemaker Batteries. Indian Pacing and Electrophysiology Journal. 2004;4:201–12.

[8] Bourzac K. A 25-Year Battery. MIT Technology Review; 2009.