Arguments in Favor of Low Enriched Uranium as a Fuel Source for Space Reactors

The authors of this white paper make a compelling argument for switching the fuel source under development for space nuclear power reactors from Highly Enriched Uranium (HEU) to Low Enriched Uranium (LEU). The basis of their argument is political and economic. Due to proliferation concerns, an HEU-based space reactor may be hard to get off the ground, politically as much as literally. Unfortunately, the darker side of human nature is once again slowing progress: on mass requirements alone, an HEU space reactor system would be the logical choice.


Mass of LEU systems compared to a 1-kWe HEU Kilopower space system

Nanostructures as a Replacement for the Patch-Clamp Method

By Tyler Shewbert

Abstract

The first quantitative measurements of the action potential were made with the voltage-clamp method on the squid axon in the 1950s. The patch-clamp method was developed in the 1970s; by forming a gigaohm seal with the membrane, it made it possible to measure individual ion channels in mammalian neurons [1, 2]. The limitations of the patch-clamp method are a lack of scalability, which prevents simultaneous measurements of multiple neurons in vitro, both externally and internally, and its limited recording duration [3, 4]. Researchers have been studying nanoscale structures such as nanowires, nanopillars, and other designs to expand the ability to study an individual neuron's behavior while also observing the surrounding neurons. This paper reports on two methods under development that could replace or augment the patch-clamp method and help further the understanding of neuroelectric behavior.

Introduction

The patch-clamp method was developed by Bert Sakmann and Erwin Neher in the 1970s [1]. It uses small, heat-polished pipettes whose electrode openings are roughly 0.5-1.0 µm across [1]. To achieve a gigaohm seal with the cell membrane, which allows accurate measurement of the action potential, the pipette must be kept scrupulously clean and gentle suction applied to its interior [1]. Recording noise falls as the seal resistance rises, so the better the seal, the more accurate the ion channel recordings [1]. However, the patch-clamp method requires a skilled researcher and is limited in its ability to study the electrical behavior of networks of cells in vitro [3].
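
The relationship between seal resistance and recording quality can be illustrated with a rough thermal-noise estimate. This is a minimal sketch of the underlying physics, not a calculation from [1]; the bandwidth and temperature values are illustrative assumptions.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def seal_current_noise(resistance_ohms, bandwidth_hz=5e3, temp_k=295.0):
    """RMS thermal (Johnson) current noise contributed by the seal resistance.

    i_rms = sqrt(4 * k_B * T * B / R): the larger the seal resistance,
    the less current noise competes with picoampere channel currents.
    """
    return math.sqrt(4 * K_B * temp_k * bandwidth_hz / resistance_ohms)

for label, r in [("50 Mohm seal", 50e6), ("10 Gohm gigaseal", 10e9)]:
    print(f"{label}: {seal_current_noise(r) * 1e12:.2f} pA rms")
```

With these assumed values, a 50 MΩ seal contributes on the order of a picoampere of noise, comparable to a single-channel current, while a gigaohm seal pushes the thermal noise well below it.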

Nanoscale structures have been explored as a way of performing these kinds of experiments. Vertical nanowire electrode arrays (VNEAs), kinked nanowires, pillar-shaped nanowires with embedded pn-junctions, and other designs have been examined as candidates [3].

A team at Harvard led by Hongkun Park developed a VNEA device with sixteen recording/stimulation pads. Each pad consisted of a 3×3 array of silicon nanowires (NWs) approximately 150 nm in diameter and 3 µm in length [4]. Each wire had a silicon core and a metal tip to provide conductivity [4]. Each array occupied a square roughly 4 µm on a side [4], a size chosen because it is similar to that of a neuronal cell body, which was thought to increase the chance that only one cell would couple to each array [4]. The nanowires penetrated the cells' membranes and recordings were performed [4]. The seal was in the range of 100-500 MΩ [3]. The following figure shows a 3×3 pad:

Figure 1. A VNEA 3×3 pad. (from [4])

Charles Lieber's research group has experimented with kinked nanowires and nanotubes (NTs) with FETs fabricated within the nanostructure. The NW or NT penetrates the cell membrane and the FET is used to record intracellular signals [5]. The research discussed here uses SiO2 nanotubes with embedded FETs to penetrate cells and measure the fast action potentials (FAPs) within them [5]. In this design, referred to as a branched intracellular nanotube FET (BIT-FET), simulations indicated that FAPs could still be recorded with nanotubes as small as 3 nm, far smaller than other probes allow [5]. The nanotube connects the intracellular fluid to the FET, as shown in the following figure:
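
A rough RC estimate helps explain why such a narrow tube can still pass millisecond-scale action potentials. This is a back-of-the-envelope sketch, not a calculation from [5]; the cytosol resistivity, tube length, and gate capacitance are assumed values chosen only for illustration.

```python
import math

def nanotube_bandwidth_hz(diameter_m, length_m, resistivity_ohm_m=1.0,
                          gate_capacitance_f=1e-16):
    """Rough -3 dB bandwidth of a cytosol-filled nanotube driving a FET gate.

    The fluid column acts as a resistor R = rho * L / A in series with the
    FET gate capacitance C, so bandwidth ~ 1 / (2 * pi * R * C).
    All parameter values here are illustrative assumptions.
    """
    area = math.pi * (diameter_m / 2) ** 2
    resistance = resistivity_ohm_m * length_m / area
    return 1.0 / (2 * math.pi * resistance * gate_capacitance_f)

for d in (50e-9, 10e-9, 3e-9):
    bw = nanotube_bandwidth_hz(d, length_m=1.5e-6)
    print(f"{d * 1e9:.0f} nm tube: ~{bw / 1e3:.1f} kHz")
```

Even though the resistance of a 3 nm fluid column runs to hundreds of gigaohms, the estimated bandwidth stays in the kilohertz range needed for FAPs, because the FET gate draws essentially no current.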

Figure 2. Setup of the nanotube connecting the cytosol of the cell to the FET (from [5]).

Results and Discussion

The results of two papers, both published in 2012, are discussed here: the VNEA work of the Harvard team led by Park, and the nanotube research of Lieber's team. While the Harvard work shows promise, Lieber's nanotube approach has greater potential for overcoming the limitations of the patch-clamp method.

Park's team performed a series of experiments on cultured rat cortical cells [4]. The VNEA pads were brought into contact with the cells, and patch-clamping was used to monitor changes in membrane potential and thereby determine whether the nanowires had penetrated the membrane [4]. In over half of the attempts the nanowires penetrated the cell, allowing both recording and stimulation [4]. Once a nanowire was inside the cell, it could stimulate the cell and record the membrane potential electrochemically [4]. Stable recordings lasted about 10 minutes [3]. A main advantage over external microelectrode devices is that the VNEA device could record action potentials from multiple cells simultaneously [4].

Unfortunately, the VNEA devices had high impedance, and their intracellular recordings offered no significant advantage over external recordings made with mushroom-shaped, gold-tipped microelectrodes [3]. In theory, the impedance could be lowered by having more nanowires penetrate each cell [3]. In practice, however, other researchers have found that increasing the number of nanostructures on a pad reduces the fraction that actually penetrate the cell, producing a "bed of nails" effect [3].

The work using nano-FETs has proved more promising, because a recording made with a FET built into the nanostructure is not degraded by the probe's high impedance [5]. Intracellular recording with the BIT-FET was tested on embryonic chicken cardiomyocytes [5]. After about 45 seconds of the BIT-FET being in "gentle" contact with the cell membrane, the recorded signal changed in a way consistent with previously run simulations of the transition to intracellular recording [5]. Full-amplitude action potentials were recorded and the result was reproduced [5]. The BIT-FET devices provided roughly an hour of stable recording [5].

They concluded that penetration of the cell was spontaneous rather than forced, since no external pressure was being applied when the recordings began showing intracellular electrical behavior [5]. They also found that the BIT-FET devices were reusable [5]. The device was designed for multiplexed intracellular recording from many cells, and this capability was confirmed [5]. Because of their small size, BIT-FET devices should also be able to record electrical behavior from subcellular structures [3, 5]. At this point the devices are limited mainly by the noise levels of the nano-FETs themselves [3, 5]. The problem other nano-FET probes face, of having to push the cell onto the electrode, appears to be solved by the BIT-FET, since no external pressure was applied at the time of penetration [3, 5]. The spontaneous entry was attributed to lipid fusion, which also provides a tight seal and removes the need for circuitry to compensate for probe-membrane leakage [5].

 Outlook and relevance of work

The work of both teams contributed to the search for methods to replace or augment the patch-clamp technique for examining electrical behavior in cells. Of the two, the BIT-FET approach developed by Charles Lieber's team is the more promising. While the VNEA devices successfully recorded intracellular signals from multiple cells, the nanostructures did not always penetrate the cell, and in similar experiments the penetration rate actually dropped when the number of structures was increased in an attempt to lower impedance and improve the signal-to-noise ratio [3].

The BIT-FET devices appear to be the route to a major breakthrough in intracellular recording. Their ability to penetrate the cell membrane spontaneously solves a problem faced by kinked nanowires and other methods [3]. The BIT-FET's ability to record accurately from subcellular structures gives it the potential to replace the patch-clamp method. The ability to record action potentials from many cells simultaneously, something the VNEA devices could also do, is equally valuable. If efforts to reduce nano-FET noise levels succeed, these devices could become a complement to, and eventually a replacement for, the patch-clamp method.

References

[1] Shewbert T. From the Voltage Clamp to the Patch-clamp. Santa Cruz: University of California, Santa Cruz; 2017. p. 5.

[2] Cui Y, Wei Q, Park H, Lieber CM. Nanowire nanosensors for highly sensitive and selective detection of biological and chemical species. Science. 2001;293:1289-92.

[3] Spira ME, Hai A. Multi-electrode array technologies for neuroscience and cardiology. Nature Nanotechnology. 2013;8:83-94.

[4] Robinson JT, Jorgolli M, Shalek AK, Yoon M-H, Gertner RS, Park H. Vertical nanowire electrode arrays as a scalable platform for intracellular interfacing to neuronal circuits. Nature Nanotechnology. 2012;7:180-4.

[5] Duan X, Gao R, Xie P, Cohen-Karni T, Qing Q, Choe HS, et al. Intracellular recordings of action potentials by an extracellular nanoscale field-effect transistor. Nature Nanotechnology. 2012;7:174-9.

 

 

 

The Versatility of Vagus Nerve Stimulation

By Tyler Shewbert

Abstract

The idea of using electrical stimulation to treat medical disorders has been around since 1889. The first major device to be implemented successfully on a large scale was the pacemaker. In the past twenty years, there has been increased interest in electrical stimulation of specific nerves as a method of treating various conditions. As understanding grows of how the body's electrical pathways affect its functions, researchers can explore new ways of treating common problems with electroceutical devices that are much less invasive than predecessors such as deep brain stimulators and pacemakers. Vagus Nerve Stimulation (VNS) has been shown to be one of the most promising methods, allowing researchers to treat migraines, epilepsy, traumatic brain injury (TBI), inflammation, and other problems in humans and animals.

Introduction

Electroceuticals work by manipulating the action potentials that control the body's functions [1]. These action potentials act through characteristic patterns, and electroceutical devices are able to modify those patterns [1]. The main advantage of electroceuticals over deep brain stimulation is that they can target specific nerves, whereas deep brain stimulation can influence large groups of nerves unrelated to the disease being treated [1]. Recent progress in mapping the nervous system's role in conditions such as obesity gives researchers reason to believe that electroceuticals can be an effective way of treating such diseases [1].

Electroceuticals are gaining interest from corporations and research institutions. In recent years, GlaxoSmithKline (GSK) and the National Institutes of Health have begun funding research initiatives backed by grants reaching into the hundreds of millions of dollars. The reward for companies such as GSK and ElectroCore, which makes gammaCore, is that the regulatory pathway for medical devices is much less costly and faster than that for drugs. This enables products to reach the market more quickly than their pharmacological counterparts.

The vagus nerve plays a central role in the autonomic nervous system, which governs the function of the organs [2]. It is the longest nerve in the autonomic nervous system [2]. Because of its central role in autonomic processes, it was hypothesized that electrically stimulating parts of the vagus nerve could treat a range of diseases [2]. Controlled stimulation of the vagus nerve was first used to treat epilepsy in the 1990s [3]. This was traditionally done by implanting an electrode on the vagus nerve in the neck and connecting it to a stimulator implanted in the chest [4]. The gammaCore device has been shown to be successful in treating cluster headaches in human patients. Inflammation in rheumatoid arthritis patients has been treated using VNS [3]. In rats and rabbits, VNS was shown to reduce the damage caused by traumatic brain injury [3]. Professor Chris Toumazou of Imperial College has developed a device intended to help control hunger in patients suffering from obesity [2]. The clinical record of VNS in humans is mixed [5]. However, given the nerve's role in the nervous system, treating conditions with VNS remains a tempting and worthwhile pursuit. In this paper, research on vagus nerve stimulation as a treatment for cluster headaches, inflammation, and traumatic brain injury is surveyed.

 

 

Results and Discussion

The gammaCore device is an electroceutical placed externally on the neck. The Prevention and Acute Treatment of Chronic Cluster Headache (PREVA) trial compared 45 patients using the gammaCore device with 47 who were not [4]. After four weeks, the group routinely using the gammaCore device was suffering six fewer cluster headache attacks per week [4], three times the improvement seen in the control group, which had only two fewer attacks per week with traditional treatment [4].

Another study examined whether the gammaCore device would work during an acute cluster headache attack. This trial was also successful: 47% of the patients reported that the attack was over within eleven minutes [4]. There was no control group, but a study of pharmaceutical treatments for acute cluster headache attacks found that only 22% of patients were free of pain within two hours [4]. This suggests that electroceutical methods have the potential to treat cluster headaches more effectively.

Kevin Tracey and his research team performed a proof-of-concept experiment to see whether VNS could successfully treat inflammation in rheumatoid arthritis (RA) patients [4]. A VNS device was implanted in each patient's chest and stimulation was applied for 42 days. The device was then switched off for 14 days and turned on again for another 28 days [4]. The Disease Activity Score (DAS), a method for tracking RA activity, decreased over the first 42 days while the device was on, increased during the 14 days the device was off, and decreased again over the final 28 days [4]. This indicates that the VNS device reduced the inflammation caused by RA [4]. The timeline for the whole process is shown in the following figure:

Figure 1. Timeline for the RA study (from [6])

On Day 0, the patient received a 60 s stimulation of 250 µs pulses at a current of 0.25-2.5 mA, and then no further stimulation until Day 7 [6]. From Day 7 to Day 28, the current was set to the maximum tolerable value up to 2.0 mA and the 60 s stimulation of 250 µs pulses was delivered daily [6]. From Day 28 to Day 42, patients who had not responded to the treatment had their stimulation increased to four times a day [6].
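
The escalating schedule can be summarized as a simple data structure. The field names below are my own and the values only restate the protocol described in the paragraph above (see [6]); this is an illustrative sketch, not code from the study.

```python
# Hypothetical restatement of the stimulation schedule described in the text;
# field names are illustrative, values are taken from the paragraph above.
ra_vns_schedule = [
    {"days": (0, 7),   "trains_per_day": 1, "train_s": 60, "pulse_width_us": 250,
     "current_ma": "0.25-2.5", "note": "single train on day 0, then off until day 7"},
    {"days": (7, 28),  "trains_per_day": 1, "train_s": 60, "pulse_width_us": 250,
     "current_ma": "max tolerable, up to 2.0", "note": "daily stimulation"},
    {"days": (28, 42), "trains_per_day": 4, "train_s": 60, "pulse_width_us": 250,
     "current_ma": "max tolerable, up to 2.0", "note": "non-responders escalated"},
]
```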

The device successfully inhibited tumor necrosis factor production during the days on which it was turned on [6]; this was the critical factor in reducing the inflammation in these patients. The study was small, with only 17 patients, so large-scale studies are needed to determine how effective this method is at reducing RA inflammation [6].

Studying the use of VNS to treat traumatic brain injury in humans is much harder than the previous two studies described here, because some form of brain trauma has to occur first. Studies in rats and rabbits have shown promising results [6]. These studies typically had the animal learn a cognitive task, such as running a maze, then induced a traumatic brain injury and applied VNS treatment for two to four weeks. In these studies, VNS helped the animals recover their ability to perform the tasks they had learned before the injury [6]. Performing this kind of controlled study on humans would be unethical, and it would also be difficult to enroll patients within two hours of a TBI, the window in which researchers believe VNS must begin in order to treat the injury successfully [6].

Outlook and relevance of work

Vagus nerve stimulation has the potential to treat a wide range of diseases and injuries. Its power lies in the central role the vagus nerve plays in the autonomic nervous system. The three examples in this paper are only a few of the treatments being explored with VNS. Other studies suggest it may also be successful in treating obesity, which until now has required invasive surgery. If researchers continue to improve their knowledge of neural circuitry and of how neural signals influence bodily functions, the ability to treat a large number of problems will follow. Funding in this field, several hundred million dollars, is still limited compared to the billions of dollars spent on drug research each year. However, as the promise of VNS and other electroceuticals is demonstrated, funding can be expected to increase, enabling researchers to better understand how these devices work. A major breakthrough, and a daunting undertaking, would be the full mapping of the neural circuitry and signaling involved in several different disorders. Once this is accomplished, researchers will have a better understanding of the role electroceuticals play in alleviating symptoms and can use that fundamental understanding to develop new treatments.


 

References

[1]       K. Famm, B. Litt, K. J. Tracey, E. S. Boyden, and M. Slaoui, “Drug discovery: a jump-start for electroceuticals,” Nature, vol. 496, no. 7444, p. 159, 2013.

[2]       G. Finnegan. (2016) Could tweaking a nerve beat obesity? Horizon.

[3]       S. Miller and M. S. Matharu, “The Use of Electroceuticals and Neuromodulation in the Treatment of Migraine and Other Headaches,” in Electroceuticals: Advances in Electrostimulation Therapies, A. Majid, Ed. Cham: Springer International Publishing, 2017, pp. 1-33.

[4]       A. Majid, Electroceuticals: Advances in Electrostimulation Therapies. Springer, 2017.

[5]       S. K. Moore, “Follow the wandering nerve,” IEEE Spectrum, vol. 52, no. 6, pp. 78-82, 2015.

[6]       F. A. Koopman et al., “Vagus nerve stimulation inhibits cytokine production and attenuates disease severity in rheumatoid arthritis,” Proceedings of the National Academy of Sciences, vol. 113, no. 29, pp. 8284-8289, July 19, 2016 2016.

 

The Success of Polypyrrole Based Neural Sensors

By Tyler Shewbert

Abstract

The study of neural activity can be performed with implanted electrodes. One of the major drawbacks of typical flat, metal electrodes is that the impedance caused by the growth of scar tissue around the implant renders data collection impossible within weeks [1-4]. A proposed solution is to enhance electrodes organically with polymers and peptides so that the electrode and neurons form a more intimate, longer-lasting connection. A team at the University of Michigan added the polymer polypyrrole (Ppy) and various peptides to gold and iridium conductors and found that they improved the implanted electrode's ability to record neural activity [1-4]. The success of this research shows that organic electrodes for the study of neural activity are feasible and potentially better than their non-organic counterparts.

Introduction

An electrode is a basic electrical device used for conduction; when used as a neural sensor, it is implanted in tissue [4]. For neural applications, however, flat metallic electrodes become surrounded by scar tissue caused by inflammation, and the rising impedance from this scarring renders the device useless within a matter of weeks [1-3].

To improve and optimize such sensors, three things are needed: higher capacitance, convex surfaces, and better biocompatibility [3]. Low impedance is necessary when an electrode is used to measure neural signals [3]. The capacitance of the electrode-tissue interface is modelled as being in series with the impedance of the surrounding tissue, so increasing the electrode's capacitance improves the electrode's efficiency [3]. Convex surfaces would allow electrodes to form more intimate connections with the tissue around the implant [3]. Iridium and gold have both been used as electrode contacts for neural sensors because of their known biocompatibility [3]. Unfortunately, long-term recordings with these devices fail [3].
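
The series model just described can be made concrete with a toy calculation. This is an illustrative sketch only; the flat-site capacitance, tissue resistance, and the ~26x area factor below are assumptions for demonstration (the area factor echoes the Ppy/PSS result discussed later), not measured values from [3].

```python
import math

def electrode_impedance_magnitude(c_interface_f, r_tissue_ohm=10e3, freq_hz=1e3):
    """|Z| of a simple series model: interface capacitance plus tissue resistance.

    Z = R_tissue + 1 / (j * 2*pi*f * C); increasing the interface capacitance
    (for example by roughening the surface with a Ppy coating) shrinks the
    capacitive term.  All values here are illustrative assumptions.
    """
    x_c = 1.0 / (2 * math.pi * freq_hz * c_interface_f)
    return math.hypot(r_tissue_ohm, x_c)

flat_c = 0.25e-9        # assumed double-layer capacitance of a small flat site, F
coated_c = 26 * flat_c  # assumed ~26x effective-area increase from the coating
for label, c in [("flat site", flat_c), ("coated site", coated_c)]:
    print(f"{label}: |Z| at 1 kHz ~ {electrode_impedance_magnitude(c) / 1e3:.0f} kOhm")
```

With these assumed numbers, the impedance magnitude at 1 kHz drops by more than an order of magnitude once the interface capacitance dominates less, which is consistent in spirit with the reductions reported in the Michigan papers.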

Electroactive polymers and peptides have shown promising results in modifying electrodes to improve all three of these areas. Ppy is an organic, conducting polymer [4]. Ppy in combination with the synthetic peptide DCDPGYIGSR was found to improve in vivo neural recordings in guinea pigs [1]. Ppy was also used in conjunction with the nonapeptide CDPGYIGSR to modify the electrode surface and enhance its ability to connect with the surrounding tissue [2]. Combining Ppy with various other biological materials was shown to increase the area of contact between neuron and electrode, increasing capacitance and reducing impedance [1-4].

Results and Discussion

David Martin and his team at the University of Michigan published a series of papers on using organic materials to enhance the capabilities of implantable neural electrodes. They first explored how Ppy doped with polystyrene sulfonate (PSS) could be used to change the surface topography of the electrode [3]. Next, they examined how Ppy and peptides could be used to increase the attraction of neural filaments to the electrode, and found that this combination produced the desired convex shape that improves the connection between the electrode and the surrounding tissue [2]. Finally, they made electrodes coated with Ppy and the synthetic peptide DCDPGYIGSR, implanted them in guinea pigs to study whether the modified surfaces improved data recording compared to a control group implanted with flat-surfaced electrodes, and also tested environmental effects on the electrodes using deionized water [1, 4].

In the first paper, the combination of Ppy and PSS was grown onto neural electrodes made of either Au or Ir [3]. The structure of the Ppy/PSS on the electrode was controlled precisely and reproducibly by the charge passed through the system [3]. The topography of the film was complex enough that the effective surface area of a Ppy/PSS film was estimated to be 26 times greater than that of a flat gold electrode, and as this surface area increased, the capacitance increased [3]. Impedance spectroscopy showed that the coated electrode had impedance values one to two orders of magnitude lower than those of a flat Au electrode [3]. Film thickness was varied from 5 to 20 µm, and the best thickness was found to be about 13 µm [3]. Implantation of the electrodes in guinea pigs showed that a Ppy/PSS-coated electrode could record high-quality neural data [3]. The ability to reduce the impedance by as much as two orders of magnitude and to increase the surface area by 26 times proved that the efficiency of neural electrodes could be improved by the addition of polymers.

The team then examined adding biomaterials to the Ppy film in hopes of strengthening the connection between the tissue and the electrode [2]. The nonapeptide CDPGYIGSR and fibronectin fragments (SLPF) were added to the Ppy film [2]. Impedance spectroscopy once again showed that the impedance of the Ppy/SLPF material was an order of magnitude lower at the biologically important frequency of 1 kHz [2]. Next, rat glial cells and neuroblastoma cells were grown on electrodes with and without the biological coating [2]. The glial cells attached better to the Ppy/SLPF coating, and the neuroblastoma cells attached better to the Ppy/CDPGYIGSR coating, than to the uncoated control electrodes [2]. The results also supported the idea that a convex, highly complex morphology at the tissue-electrode interface is best for establishing a connection between the two [2]. The most important result of this paper was the demonstration that cell-binding biomaterials could be added to the polymer film to increase the chance of a well-developed connection between tissue and electrode.

The team's third paper, published in 2003, studied the long-term behavior of the film-enhanced electrode in its environment and its ability to record data over several weeks [1]. Ppy and the synthetic peptide DCDPGYIGSR were now used as the film deposited on Au [1]. First, the electrodes were soaked in deionized water for periods of up to seven weeks [1]. The peptides were found not to diffuse out of the film after seven weeks, which had been a major concern [1]. After the probes had been soaked for seven weeks, they were implanted in guinea pigs [1]. A control group of guinea pigs received non-coated electrodes [1]. Impedance was measured at 1 kHz at one, two, and three weeks [1], and data recordings were made periodically [1]. The electrodes were also stained for filaments to show how many connections remained between the neurons and the electrodes [1]. The following table summarizes the results:

Coated electrodes:
· Impedance: stable for the first week, then increased by 300% by the end of week three.
· Recording: 62.5% of electrodes still recording after the second week.
· Filaments: 83% at the end of week one; 67% at the end of week two.

Non-coated electrodes:
· Impedance: decreased for the first week, then jumped by 300% by the end of the third week.
· Recording: no data recorded by the end of week two.
· Filaments: 10% at the end of week one; 6% at the end of week two.

Table 1. Comparison of the results of the coated and non-coated electrodes implanted in guinea pigs (data from [1])

Table 1 makes clear how important the connected filaments are to the electrodes' ability to record data: an electrode's ability to maintain recordings is directly related to the number of filaments still connected [1]. The main advantage of biologically enhanced electrodes is in recording neural data. It would be interesting to see a study comparing the neural filament connections of a Ppy/PSS film with those of a Ppy/DCDPGYIGSR film, to isolate how much the peptide enhances the connection.

Outlook and relevance of work

The University of Michigan team has shown that, for neural sensing, biologically enhanced electrodes are more effective than their non-coated counterparts. Implanted neural sensors with longer lifetimes make long-term studies of neural activity possible and reduce the need for repeated implantation surgery. The lower impedance seen during the first two weeks, as in the third study, allows more accurate data collection. Further studies may reveal even better peptides that promote connectivity between neurons and electrodes, potentially for longer periods of time. Various other polymers are also being studied, such as polythiophene, poly(3,4-ethylenedioxythiophene) (PEDOT), and polyaniline [5]. Further research on the potential toxicity of such electrodes is needed before large-scale human studies can be performed; results from a 2009 study of PEDOT-based electrodes showed no toxic effects in rats [6]. While bioelectronic solutions may not solve every problem they are applied to, organically enhanced electrodes appear to be the right approach for neural sensing, though further refinement is necessary.

 

 

References

[1] Cui X, Wiler J, Dzaman M, Altschuler RA, Martin DC. In vivo studies of polypyrrole/peptide coated neural probes. Biomaterials. 2003;24:777-87.

[2] Cui X, Lee VA, Raphael Y, Wiler JA, Hetke JF, Anderson DJ, et al. Surface modification of neural recording electrodes with conducting polymer/biomolecule blends. Journal of biomedical materials research. 2001;56:261-72.

[3] Cui X, Hetke JF, Wiler JA, Anderson DJ, Martin DC. Electrochemical deposition and characterization of conducting polymer polypyrrole/PSS on multichannel neural probes. Sensors and Actuators A: Physical. 2001;93:8-18.

[4] Berggren M, Richter‐Dahlfors A. Organic bioelectronics. Advanced Materials. 2007;19:3201-13.

[5] Guimard NK, Gomez N, Schmidt CE. Conducting polymers in biomedical engineering. Progress in Polymer Science. 2007;32:876-921.

[6] Asplund M, Thaning E, Lundberg J, Sandberg-Nordqvist A, Kostyszyn B, Inganäs O, et al. Toxicity evaluation of PEDOT/biomolecular composites intended for neural communication electrodes. Biomedical Materials. 2009;4:045009.

 

Magnetoencephalography as a Method for Studying Deep Brain Stimulation

By Tyler Shewbert

Abstract

Magnetoencephalography (MEG) has been shown to be an effective method for studying the effects of deep brain stimulation (DBS) in patients with chronic pain, Parkinson's disease (PD), and essential tremor (ET). Its advantages over other neural imaging methods, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), are that MEG, unlike fMRI, does not subject the DBS hardware to intense magnetic fields that could damage it, and that it offers temporal resolution in the millisecond range, which neither PET nor fMRI can provide [1, 2]. The main disadvantage of MEG is that it does not allow accurate localization of deeper brain activity. Even so, MEG is the technology best suited for studying DBS because of its temporal and spatial resolution and its lack of negative effects on a functioning DBS device [1, 3, 4].

Introduction

Magnetoencephalography (MEG) is a neural imaging technique first developed by David Cohen at MIT in 1968. MEG records the brain's magnetic fields from outside the head [1, 4-6]. Cohen reported that these fields are on the order of 1 picotesla in strength [6]. MEG uses a superconducting quantum interference device (SQUID), which converts minute changes in magnetic flux into voltage changes [5]. The SQUID is connected to superconducting pickup coils placed as close as possible to the patient's head [5]. Because the magnetic fields produced by the brain's currents are so weak, the potential for outside interference is high. External interference is reduced by magnetically shielding the room and the MEG device with mu-metal, aluminum, and other materials with differing magnetic properties [5-7]. In addition, a second, oppositely wound coil forms a gradiometer with the pickup coil, reducing external magnetic noise because distant interference appears nearly equally at both coils and cancels out [5]. Software filters the signal further: reference sensors placed where they pick up mostly external noise provide a signal that can be subtracted from the pickup coil data [5]. The following image shows the basic setup of an MEG recording device:
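
The reference-sensor subtraction described above can be sketched with synthetic data. This is a toy illustration with made-up field amplitudes, not an account of any vendor's noise-cancellation software.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)

brain = 1e-13 * np.sin(2 * np.pi * 10 * t)         # ~0.1 pT "alpha-like" signal
interference = 5e-12 * np.sin(2 * np.pi * 50 * t)  # much larger line-frequency noise

# The measurement coil sees the brain signal plus interference; the reference
# sensor, placed away from the head, sees essentially only the interference.
measurement = brain + 0.9 * interference + 1e-14 * rng.standard_normal(t.size)
reference = interference + 1e-14 * rng.standard_normal(t.size)

# Least-squares fit of the reference channel, then subtract the fitted copy.
coeff = np.dot(measurement, reference) / np.dot(reference, reference)
cleaned = measurement - coeff * reference

before = np.std(measurement - brain)
after = np.std(cleaned - brain)
print(f"interference reduced by a factor of about {before / after:.0f}")
```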

Figure 1. The basic setup of an MEG showing the superconductor, gradiometer, input coil and SQUID. (from [5])

The major drawback is that MEG cannot accurately localize activity deep within the brain. Computational methods are used to localize deeper sources, but for those methods to work accurately, further study of how the deeper regions of the brain function is needed [5]. This is because the two main localization approaches, dipole fitting and minimum-norm spatial reconstruction, both rely on assumptions about how the brain works [5].
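
That dependence on modelling assumptions can be seen directly in the standard minimum-norm estimator, sketched here with a random toy leadfield; none of the numbers refer to a real sensor array.

```python
import numpy as np

def minimum_norm_estimate(leadfield, sensor_data, reg=1e-6):
    """Tikhonov-regularized minimum-norm source estimate.

    sources = L^T (L L^T + reg * I)^{-1} y.  The leadfield L encodes the
    assumed head and source model, which is why accuracy for deep sources
    depends on how well those regions are modelled.
    """
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + reg * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, sensor_data)

# Toy problem: 10 sensors, 20 candidate sources, one active source.
rng = np.random.default_rng(1)
L = rng.standard_normal((10, 20))
true_sources = np.zeros(20)
true_sources[3] = 1.0
estimate = minimum_norm_estimate(L, L @ true_sources)
print("estimate at the true source:", round(float(estimate[3]), 2))
print("largest estimate elsewhere:", round(float(np.max(np.abs(np.delete(estimate, 3)))), 2))
```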

Deep brain stimulation has proven successful in treating patients with various neurological disorders who have not responded to other types of treatment [8]. However, there is a lack of understanding of why DBS works [8]. Brain activity is difficult to record accurately, and implanted DBS hardware makes this more difficult [1, 4, 8]. The strong magnetic fields used by fMRI can cause overheating or movement of the DBS electrodes or of the pulse generator implanted in the patient [2]. PET scans have temporal resolution on the order of minutes, which does not allow researchers to observe the changes in brain activity that the DBS device causes. Therefore, research has been performed on the use of MEG to image brain activity while DBS implants are functioning.

Results and Discussion

Several studies have examined the effectiveness of MEG imaging for investigating why DBS succeeds in treating a range of neurological disorders, including chronic pain, PD, and ET [1, 3, 4]. Each study concluded that MEG gave researchers a valuable way to study the mechanisms behind successful DBS treatment [1, 3, 4]. MEG could be used with DBS at both high and low stimulation frequencies [1, 3, 4], and it provided temporal resolution on the order of milliseconds and spatial resolution of roughly 5 mm3, letting researchers see detailed neurological changes as the DBS devices were switched on and off [1, 3, 4]. The results of three studies are discussed here.

Because little was known about the neural mechanisms that alleviate pain in patients with chronic pain, a study was performed in 2006 to see whether MEG could image a patient's brain both while the DBS implant was operating and while it was off [1]. The selected patient had phantom limb pain that was being treated with low-frequency (7 Hz) stimulation [1]. The patient's brain was recorded with MEG over four cycles of ten minutes with the device on followed by ten minutes with it off. He reported the pain increasing in each off cycle and diminishing during on cycles [1]. The researchers found that the periods of DBS did not interfere with the MEG imaging [1]. They also compared the brain activity recorded while the DBS device was off with previously acquired fMRI data and found them to be similar, showing that MEG was accurate for studying the effects of DBS at low frequencies [1].

The same research team later tested their hypothesis that the success of MEG imaging was only possible because of the low, 7 Hz frequency used by the patient's DBS device in the previous experiment [1, 4]. They examined a patient who was using both 7 Hz and 180 Hz DBS stimulation to treat cluster headaches, expecting that the electromagnetic noise produced by high-frequency stimulation might interfere with the MEG imaging [2]. With MEG they were able to image the brain areas that earlier fMRI studies had reported as activating when DBS stimulation was switched on and off [2]. They also found activity in the periaqueductal grey (PAG), which lies deep within the brain; while this was consistent with fMRI studies of pain and pain relief, the researchers believed the PAG measurements were not as spatially accurate as fMRI because of MEG's limited localization at depth [2]. Even so, they concluded that MEG remained a reliable method for imaging the neurological effects of DBS during high-frequency stimulation [2].

The third study was published in 2013. The researchers examined whether MEG imaging would be useful for studying the motor tremors that Parkinson's patients suffer from and how DBS devices reduce them [3]. Prior to 2013, researchers had found that MEG imaging could be used to study patients receiving high-frequency DBS for PD [3]. In this study, the researchers successfully used MEG imaging to investigate the patient's motor tremors with the DBS device on and off [3].

All three of these studies, and several others not discussed here, reached the same conclusion: MEG imaging is an accurate way to study the functioning of the brain while a DBS device is on [3]. This is a powerful tool for researchers because it is not as electronically disruptive as fMRI, can be performed safely while the device is operating, and provides detailed temporal resolution along with spatial resolution of ~5 mm3. The use of MEG as a research tool for DBS is still in its early phase, so further studies and improvements to the methodology will be needed.

Outlook and relevance of work

MEG is essential to furthering the study of why deep brain stimulation works. Being able to resolve brain activity on the scale of milliseconds will give researchers insight into how the devices work. The spatial resolution of roughly 5 mm3 is comparable to that of an fMRI scan. There is currently a lack of information about how DBS works, and this complicates its use as a reliable treatment. The ability to study the brain's response while DBS is occurring is the main advantage of MEG imaging, and it will help expand knowledge of the device's impact on neurological conditions.

MEG has the potential to be the best way to study DBS in the future, but improvements are needed. The inability to localize activity deep within the brain can be improved as general knowledge of the deeper regions, gathered from PET and fMRI scans, is applied to MEG localization algorithms. Broader study is also needed: each of the studies discussed here involved only one patient and was performed as a proof of concept. To learn how to properly implement MEG imaging as a method of studying DBS, large studies with many participants will be required. This will give researchers a better foundation for knowing what to look for and what errors occur in their studies.

The necessary improvements to MEG imaging for DBS studies will be made; the potential for helping patients whose only option is DBS treatment is too great. To improve those treatments, however, doctors need a better understanding of how DBS works in the brain, and improved MEG techniques will allow this to be accomplished.

 

 

References

[1] Kringelbach ML, Jenkinson N, Green AL, Owen SL, Hansen PC, Cornelissen PL, et al. Deep brain stimulation for chronic pain investigated with magnetoencephalography. Neuroreport. 2007;18:223-8.

[2] Ray NJ, Kringelbach ML, Jenkinson N, Owen SLF, Davies P, Wang S, et al. Using magnetoencephalography to investigate brain activity during high frequency deep brain stimulation in a cluster headache patient. Biomedical Imaging and Intervention Journal. 2007;3:e25.

[3] Bajwa J, Connolly A, Johnson M. Magnetoencephalography of Deep Brain Stimulation in a Patient with ET/PD Syndrome (P06.089). Neurology. 2013;80:P06.089.

[4] Ray N, Kringelbach ML, Jenkinson N, Owen S, Davies P, Wang S, et al. Using magnetoencephalography to investigate brain activity during high frequency deep brain stimulation in a cluster headache patient. Biomedical imaging and intervention journal. 2007;3.

[5] Barnes G, Hillebrand A, Hirata M. Magnetoencephalogram. Scholarpedia. 2010;5:3172.

[6] Cohen D. Magnetoencephalography: evidence of magnetic fields produced by alpha-rhythm currents. Science. 1968;161:784-6.

[7] Cohen D. Magnetoencephalography: detection of the brain’s electrical activity with a superconducting magnetometer. Science. 1972;175:664-6.

[8] Kringelbach ML, Jenkinson N, Owen SL, Aziz TZ. Translational principles of deep brain stimulation. Nature Reviews Neuroscience. 2007;8:623-35.

 

From the Voltage Clamp to the Patch-Clamp

By Tyler Shewbert

Abstract

Alan L. Hodgkin and Andrew F. Huxley wrote a series of five papers in 1952 in which they developed an electrical model for the action potential in the membrane of the squid axon, the first quantitative model describing the electrical workings of nerve cells [1]. Their experimental technique was the voltage clamp method, which Hodgkin improved by eliminating differences in membrane potential along the axon, allowing the ion currents flowing in and out of the cell to be measured [1, 2]. The success of the H-H model led to the development of the patch-clamp method by Bert Sakmann and Erwin Neher in the 1970s [1]. The patch-clamp method has revolutionized the study of ionic currents in cell membranes because it allows accurate measurements of small cells, both excitable and nonexcitable, and of the currents through single ion channels. Sakmann and Neher's success, however, was built on the H-H model and the voltage clamp method, showing the importance of research that lays the foundation for major breakthroughs [1, 3, 4].

Introduction

The voltage clamp method is thought to have been first used by Kenneth Cole and George Marmont of Woods Hole as a way of measuring squid axons [1]. However, the breakthrough use of the voltage clamp came from Hodgkin and Huxley. Previous experiments had suffered from electrode polarization, which they overcame by using two electrodes and keeping the squid membrane at a uniform potential. Hodgkin and Huxley could then accurately measure the ionic currents flowing in and out of the membrane [1, 2]. This enabled them to develop a mathematical model of current flow through the membrane, which became the basis for future electrophysiological research.
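
For context, the core of the model they produced can be written in a few lines: the total ionic current is the sum of sodium, potassium, and leak terms, each scaled by voltage-dependent gating variables. The conductances and reversal potentials below are the standard modern textbook values (with rest near -65 mV), shown as an illustration rather than taken verbatim from the 1952 papers.

```python
def hh_membrane_current(v_mv, m, h, n,
                        g_na=120.0, g_k=36.0, g_l=0.3,     # max conductances, mS/cm^2
                        e_na=50.0, e_k=-77.0, e_l=-54.4):   # reversal potentials, mV
    """Total ionic current density (uA/cm^2) in the Hodgkin-Huxley model.

    Sodium conductance is gated by m^3 * h and potassium by n^4.  Under a
    voltage clamp dV/dt is held at zero, so the clamp amplifier must supply
    exactly this ionic current, which is what Hodgkin and Huxley measured.
    """
    i_na = g_na * m ** 3 * h * (v_mv - e_na)
    i_k = g_k * n ** 4 * (v_mv - e_k)
    i_leak = g_l * (v_mv - e_l)
    return i_na + i_k + i_leak

# Near rest (about -65 mV) with typical resting gate values the terms nearly cancel.
print(hh_membrane_current(-65.0, m=0.05, h=0.6, n=0.32), "uA/cm^2")
```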

The voltage clamp method did not allow the measurement of individual ion channels in the membrane, nor could it be applied to smaller cells. The patch-clamp method that Bert Sakmann and Erwin Neher developed in the 1970s allowed the currents of individual ion channels to be measured, even in small cells, including mammalian cells [1, 3, 4]. The patch-clamp technique has been improved since the 1970s, allowing researchers to increase the accuracy of their current measurements and to examine single channels in most cell types [3, 4]. The technique has been a boon to electrophysiological researchers ever since.

Results and Discussion

The key to Hodgkin and Huxley's success in the 1952 papers was the adjustments they made to the voltage clamp method, which kept the membrane of the squid axon at a uniform potential so that the current flowing through the membrane could be measured accurately [1, 2]. There were limitations to the voltage clamp technique. The currents of individual ion channels in the membrane could not be measured [1, 5]. The accuracy was affected by signal noise [1]. The method could only be used on nerve cells large enough to attach the pipettes needed for current measurement, hence the use of the squid axon [1, 5]. Even with these limitations, Hodgkin and Huxley developed their mathematical model of the action potential in nerve membranes with remarkable accuracy, and it still serves as a basis for modern studies.

Bert Sakmann and Erwin Neher began developing the patch-clamp method in the 1970s [5]. This technique revolutionized the study of the action potential and of ionic channel currents. The main contributions of the patch-clamp method were its improved signal-to-noise ratio, its ability to measure the currents flowing through single ion channels, and its ability to measure the ion channels of smaller cells, including mammalian cells [1, 3, 4].

The patch-clamp method has its roots in the voltage clamp method used by Hodgkin and Huxley. Instead of using two electrodes to overcome membrane polarization, Sakmann and Neher used small, heat-polished pipettes with electrode openings of roughly 0.5-1.0 µm, filled with a saline solution and electrically sealed to the cell membrane by applying slight suction to the pipette [4]. Sakmann and Neher also had transistors available to amplify the measured current, whereas Hodgkin and Huxley had only vacuum tubes [4]. Using this technique, Sakmann and Neher achieved an electrical seal of around 50 MΩ, which allowed high-resolution current measurements of single ion channels [3, 4]. However, while this enabled accurate measurements of ion channels in mammalian and other small cells, there was noise from the saline bath and pipette, and current leaked across the imperfect seal between pipette and membrane [3-5]. A basic overview of the patch-clamp method can be seen in Figure 1.

Figure 1: An overview of the basic concept of the patch-clamp technique (from [5]).

In a 1981 paper, Hamill, Marty, Neher, Sakmann, and Sigworth presented an "improved patch-clamp technique" [3]. They described a method that allowed the electrical seal between the pipette and the membrane to reach resistances in the gigaohm range [3]. This was accomplished by taking extra precautions to keep the pipette surface clean and by applying suction to the pipette interior [3]. As the resistance of the electrical seal increases, the noise decreases, allowing improved resolution in the recorded current [3-5]. They reported obtaining gigaohm seals with almost all of the cell types they tried [3].

This improvement of one to two orders of magnitude in seal resistance has had profound impacts on the study of electrophysiology. The patch-clamp method has enabled neuroscientists to examine the ion channels within nerve cells [5]. In the past twenty years, the patch-clamp method has been used in a "variety of excitable and nonexcitable cell types, ranging from neurons to lymphocytes", expanding its use well beyond neuroscience [6].

Since Hodgkin and Huxley first measured the action potential in the squid axon, their mathematical model has held up. It was revolutionary because it finally confirmed the hypothesis Galvani had proposed 150 years earlier, that there was some form of electricity within animals. Once Hodgkin and Huxley had provided a mathematical foundation, other methods such as the patch-clamp could be developed. Hodgkin and Huxley did the best they could with the resources they had: the current measured from the membrane needed to be amplified, but the transistor was not yet in common use, so they worked with vacuum tubes [1]. For Sakmann and Neher, an understanding of the voltage clamp method and the H-H model, coupled with advances in amplification technology, allowed them to break through the restrictions Hodgkin and Huxley had faced. By developing the patch-clamp method, Sakmann and Neher opened electrophysiology to new cell types of all sizes, with improved resolution [5]. Hodgkin and Huxley laid the groundwork for Sakmann and Neher's breakthrough, which has shaped electrophysiology research over the last forty years.

 

Outlook and relevance of work

The research performed by both teams won the Nobel Prize in Physiology or Medicine: Hodgkin and Huxley in 1963, and Sakmann and Neher in 1991 [1]. This recognition is well deserved. The H-H model has stood up to testing in the six decades since its origination [1]. Hodgkin and Huxley finally formalized ideas that had been put forth by Galvani 150 years before, and their improvement of the voltage clamp method was essential to the development of electrophysiology as a field. Had they not been able to create an isopotential membrane, their experiments would not have succeeded [1, 2, 6]. The reliability of their experimental methods and mathematical model enabled later researchers to build on their discovery, culminating in the patch-clamp method, which has revolutionized electrophysiology research since the 1970s [5]. The ability to study the ion channels of nonexcitable cells, as well as individual channels in neurons and other excitable cells, has been a boon to researchers ever since [5, 6].

The path from Galvani's famous frog-leg experiments to modern research using the patch-clamp is a testament to the persistence of science as an institution. More than 200 years passed between Galvani's initial experiments, which simply involved electrical stimulation of frog nerves, and the measurement of individual ionic currents within those nerves. Research in an overarching field such as electrophysiology is not a fast process. The lesson of its success is that solid foundational work is necessary for future improvements and breakthroughs. Without the work of researchers before Hodgkin and Huxley, such as Cole and Marmont, the isopotential membrane created by a dual-electrode voltage clamp would not have happened. The revolutionary patch-clamp was built upon the earlier work of Hodgkin and Huxley, and this method has allowed electrophysiology researchers to expand into many different cell types.

 

 

References

 

[1] Schwiening CJ. A brief historical perspective: Hodgkin and Huxley. The Journal of Physiology. 2012;590:2571-5.

[2] Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of physiology. 1952;117:500.

[3] Hamill OP, Marty A, Neher E, Sakmann B, Sigworth F. Improved patch-clamp techniques for high-resolution current recording from cells and cell-free membrane patches. Pflügers Archiv European journal of physiology. 1981;391:85-100.

[4] Sakmann B, Neher E. Patch-clamp techniques for studying ionic channels in excitable membranes. Annual review of physiology. 1984;46:455-72.

[5] Veitinger S. The Patch-Clamp Technique: An Introduction. Science Lab by Leica Microsystems; 2011.

[6] Cuevas J. Electrophysiological Recording Techniques.  xPharm: The Comprehensive Pharmacology Reference. New York: Elsevier; 2007. p. 1-7.

 

The Ghosts of Hill 88

The top of Hill 88 in the Marin Headlands

By Tyler Shewbert

North of San Francisco are the Marin Headlands. Now part of the National Park Service, all of this area north of the Golden Gate once belonged to the U.S. military. As one travels about the area, the remnants of its military past can be seen. I have spent a decent amount of time exploring this area, but my hike up Hill 88 on June 28th, 2016 had probably the most profound impact of any of my journeys here.

I started at Rodeo Beach, with the intention of just walking up a hill because it was there. I did not have any idea what I would find. As I climbed, I climbed past the history of the area. First was Battery Townsley, which guarded the Golden Gate until the end of World War Two. This was a mere three-quarters of a mile up the trail, and not that high above the beach. I continued hiking, spying a hill in the distance that appeared to be the highest in the immediate area, and therefore the one I would climb.

As I approached the midpoint between Battery Townsley and that hill, I found the concrete remnants of what appeared to be more fortifications. These still had a World War Two feel about them, and from this spot I could see my eventual goal even better. It was surrounded by a fence, so I was not even sure I would be able to make it all the way to the top, except that I had seen people coming down from there.

Hill 88 from a distance

I continued, finally arriving at the top. I was greeted by the guard house.

The guard house

I was intrigued by what I found. The top of this hill, 1053 feet above sea level, had been flattened and there was what appeared to be some sort of former military installation.

The stands for the radar domes

The site was covered in graffiti. The concrete design screamed Cold War to me. This was not a World War Two facility. Upon walking around, I found the old helicopter pad, which solidified my reasoning that this was a Cold War facility and not part of the batteries from WWII.

Helicopter pad

I continued to explore. I found some ravens who were enjoying the amazing weather.

Ravens enjoying the view

The view from the top was amazing. The wind was light. In the East Bay, the temperatures were nearing triple-digits, but at the top of this hill it was nice and cool, with a light breeze.

Facing San Francisco

Looking towards the exit of the Golden Gate

Looking towards the Financial District and the East Bay

The site had an ominous vibe to it. I found a place to sit and eat my lunch, and having great cell phone service, I proceeded to look up what this place was. I found this site. It said that Hill 88 had been the site of the radar control station for the Nike missile base that existed in the area during the Cold War.

Having visited a Nike missile site across the valley a few years before, I understood their purpose. The Nike site SF-88 in the Marin Headlands used Nike Hercules missiles. These had a range of 87 miles and could carry either a 20-kiloton nuclear warhead or a conventional payload. At sites in the United States, the warhead was almost always nuclear. Fitted with such a payload, a missile could theoretically be launched to destroy several high-altitude bombers or missiles inbound to a target. There were over 145 such sites in the United States until they began to be phased out during the 1970s.

For me, this was a reminder of a frightful time before I was born that my parents had spoken of. This was the time of duck-and-cover drills and nuclear brinksmanship. I am grateful that somehow the United States and the Soviet Union managed to wade through this tenuous time without destroying one another, and a good chunk of the planet with them. I hope that those lessons are not forgotten by my generation and that nuclear disarmament continues.

Hill 88 in operation

Nukes and Floods

By Tyler Shewbert

Since the end of the Second World War, international institutions such as the United Nations, World Health Organization and International Monetary Fund have proliferated. These institutions serve various purposes, but their greatest success has been facilitating diplomatic relations between global powers that has successfully prevented a nuclear conflict. While there has been nonstop warfare since the end of World War Two, no nuclear weapons have been used since the United States dropped the bombs on Hiroshima and Nagasaki. Preventing a nuclear catastrophe has been the greatest success of increased international cooperation, even in times when proxy conflicts between the great powers happens.

The era of nuclear warfare has forever changed how wars are fought. Gone are the days when U.S. generals like LeMay and MacArthur advocated the limited use of nuclear weapons against an enemy. Now the belief among the military class in nuclear-armed states is that the use of nuclear weapons, particularly among the heavily armed United States, Russia, and China, would spark a disaster propelling the planet into many years of misery and chaos, their own nations included. This is something no responsible leadership class would want to be remembered for.

In this era of anti-globalization, it is important to remember the value of the international bodies set up in the years that followed WWII. The U.S. and Soviet Union fought proxy wars, but never did these conflicts escalate into nuclear strikes. This can partly be attributed to having reasonable diplomatic classes on both sides that understood the results if they failed. Institutions such as the U.N. have allowed these parties to resolve issues at the Security Council table rather than the battlefield. This has not always succeeded, but these diplomatic solutions have prevented nuclear conflicts, which itself is a great success.

Other global bodies have facilitated economic growth across the planet. Sometimes this has come in the form of direct aid, but often it has come through trade deals that have increased cross-border commerce. This is essential: international trade binds countries together economically, making them reliant on one another for their success. When countries cooperate economically, they are less likely to go to war with each other.

It can be argued that President Nixon's visit to China was one of the defining diplomatic overtures of the past fifty years. It enabled the Chinese leadership to eventually begin economic reforms under Deng Xiaoping in 1978, knowing the West would be open to doing business with them. The increased trade between the U.S. and China, along with consistent diplomatic relations, is partly responsible for the lack of armed conflict between the two countries.

However, for factory workers in the industrialized nations of Europe and the U.S., this increased trade has led to economic instability due to the transfer of production to China and other developing nations. This instability is not solely the result of offshoring production; a decrease in union membership in the United States and the rise of automation have also played significant roles in creating economic hardship for production workers.

These hardships are real, and they must be properly dealt with. For many years this reality was ignored by politicians, and now it has manifested as resentment against globalization, which threatens the relative global peace that has existed for the past seventy years. The continued success and progress of the human race depends on this stability, facilitated by global institutions and trade. This means that politicians in the West must recognize the plight of those who feel threatened by globalization and who believe that tearing down these institutions is the only solution. The Brexit vote and the election of Donald Trump are pleas for help from a class of citizens who feel disoriented in a globalized world; if they are ignored, there will be continued attacks on the global order that has lifted great numbers of people out of the depths of poverty.

This same anti-globalization attitude also threatens the institutions that sustain the strong diplomatic relations that prevent the disaster of nuclear warfare. These institutions must also be kept intact to mitigate climate change damage over the next century and to prevent wars caused by the population displacement that is likely to occur.

For the sake of the continued progress of humanity, the anti-globalists must have their grievances heard. If these people prosper economically, there will be one less reason for them to attack the idea of globalization. However, if these grievances are ignored and the international order begins to break down, we will all face an increased risk of nuclear conflict and be unable to deal with the displacement and disasters that climate change will cause.

My Nuclear Fantasy

ITER: the world's largest tokamak (courtesy ITER)
Tyler Shewbert

I have been a proponent of nuclear power, both fission and fusion, since I was very young. I became fascinated with nuclear energy's potential around age nine, when I began to read about physics. Science fiction was the medium that piqued my interest in these subjects. My parents came of age in the 1950s and '60s and therefore had a mixed view of nuclear energy. They had their concerns, as many people did and still do, about its potential. However, they always allowed me to explore topics independently and develop my own opinions. Within a few years, after reading many of the arguments for and against the use of fission power, my mind was set that this was the energy source that could change human civilization. I accepted that the technical problems with breakeven fusion energy might make it unattainable, but as an optimist I hoped that it would be successful and that it could revolutionize the world.

Through my teen years and early twenties this conviction cemented, but I rarely discussed it. I drifted into other interests and rarely looked at nuclear energy again. In the back of my mind, though, the necessity of providing many terawatt-hours of power for the planet's energy needs was always there, and in my later twenties it brought me back to nuclear energy.

After Fukushima, the growth that the nuclear energy sector had been seeing globally slowed dramatically. This disappointed me. I had thought that opinions swinging back in favor of nuclear energy were permanent, but it took only one incident to drastically alter them. Plants were shut down in Japan, Germany and many other countries. The anti-nuclear movement caught its breath again against the rising tide of pro-nuclear environmentalists. Once again, my nuclear fantasy was put on hold.

I have envisioned a world where nuclear energy greatly reduces air pollution. Without the need for fossil fuels as energy sources, the air would begin to clear. Nuclear sources mixed with solar, wind, hydro and other sources would create an energy boom that would lift the developing world out of poverty. As the air cleared and poverty was reduced, the Earth would become a calmer place.

I know this is a fantasy. Fission produces waste. This can be dealt with to some extent, and as new technologies such as the Waste-Annihilating Molten Salt Reactor mature, the waste issue can be handled even more effectively. The environmental damage from a meltdown can be catastrophic to a nation. Meltdowns are rare, but with each new reactor the probability of an incident increases. The most significant risk in developing nuclear energy infrastructure is the chance it will be used to develop weapons. And the economics of fission energy are not practical for developing nations.

I will still defend fission. I have come to terms with its downsides and understand that these are problems which can either be solved or mitigated. I know that it is necessary to include fission energy in the energy mix to reduce climate change. It is immoral to ask people in developing countries not to use energy on the scale the developed countries do. Providing billions of people with carbon-free energy will allow economies to grow and people to come out of poverty and live richer lives. To do this, nuclear energy must grow.

Fusion is another topic altogether. It is always called the technology that is "twenty years away". However, there is good news coming out of the organizations researching fusion. Even if we achieve the coveted breakeven power production, it will still take time to make fusion energy production economical, particularly for the impoverished nations around the world that are in dire need of energy. Yet this is a goal worth striving for, and I will gladly spend my lifetime working towards it and pass the baton to the next generation, which might finally usher in the era of fusion power. With that, I believe everything will change.

This is mostly speculative. I know there is no magic bullet for solving the world's energy and climate issues. It will take a mixture of solutions and international cooperation on a scale that has not been seen in human history. These are the great tasks for the next hundred years. With a damaged climate, civilization will rip apart. Without developing nations providing energy to their populations, global inequality in incomes and standards of living will tear the world apart. I am an optimist, though. I know that humanity is capable of both great terror and beautiful progress, but history seems to tell us that progress typically wins out over terror. I can only play my role in helping to find solutions to these problems.


The Future of Space is Nuclear

NEXIS ion thruster undergoing testing as part of Project Prometheus
Tyler Shewbert

Since the beginning of the Space Age, the relationship between space exploration and development on the one hand, and nuclear power as a source of propulsion, heating and electricity on the other, has been seen as symbiotic. Before Sputnik was launched in 1957, the development of nuclear thermal propulsion (NTP) under the NERVA/Rover programs had already been going on for two years. These programs continued until their cancellation in 1972. During nearly two decades of development, a solid foundation of knowledge was acquired about nuclear thermal rocket (NTR) technology. The program was cancelled for political, not technological, reasons.

Since the beginning of the United States space program, radioisotope thermoelectric generators (RTGs) have been used as sources of heat and power on missions ranging from Apollo to New Horizons. The farthest human-made object in space, Voyager 1, is powered by an RTG. The "Nuclear Power Assessment Study" released by the Johns Hopkins Applied Physics Laboratory in 2015 states that newer radioisotope power systems will continue to power humanity's robotic exploration of the Solar System.

Inspection of Cassini spacecraft RTGs before launch

Nuclear systems provide more energy than either chemical or solar sources. Due to this increase in available power, many of the restrictions limiting the exploration and settlement of space can be overcome. The main advantages of space nuclear applications are smaller volume, reasonable mass, long-lasting operational times, independence from the Sun's energy, the ability to deploy kilowatt- and megawatt-class power sources, and reliable operation.

Space is a harsh environment. Closer to the Sun, solar power can meet the needs of most current space applications. Farther out, however, the available solar flux falls off with the square of the distance from the Sun, and nuclear power sources become necessary for most exploration beyond Earth's orbit. They provide the heat and electricity that instruments need to function properly. For future missions, both human and robotic, to Mars and the outer planets, nuclear energy will be necessary to power and heat the science packages that will further human knowledge of our neighborhood in space.
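To make that fall-off concrete, here is a minimal sketch of my own (not taken from any referenced study) of the inverse-square relationship, using the standard ~1361 W/m² solar constant at 1 AU and approximate planetary distances:

```python
# Illustrative sketch: solar flux scales with 1/r^2 from the Sun.
# The 1361 W/m^2 value at 1 AU is a standard reference; distances are approximate.

SOLAR_CONSTANT_1AU = 1361.0  # W/m^2 at Earth's distance from the Sun

def solar_flux(distance_au: float) -> float:
    """Available solar flux in W/m^2 at a given distance from the Sun (in AU)."""
    return SOLAR_CONSTANT_1AU / distance_au ** 2

for body, r_au in [("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2), ("Saturn", 9.5)]:
    print(f"{body:8s} {solar_flux(r_au):8.1f} W/m^2")
# Earth     1361.0 W/m^2
# Mars       589.1 W/m^2
# Jupiter     50.3 W/m^2
# Saturn      15.1 W/m^2
```

By Jupiter, a solar array collects only a few percent of what it would at Earth, which is why outer-planet missions have relied on RTGs.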

For serious human exploration and eventual economic development of space, both nuclear fission power systems and nuclear propulsion will need to be developed. Nuclear fission plants will provide the electricity and heat needed to settle the Moon and Mars. Solar energy will complement both, but it is well documented that small nuclear reactors would give settlers an advantage that solar alone would not.

Nuclear energy sources would also be necessary for any large-scale use of local resources. The power needs of a space mining operation could be met much more easily with nuclear energy, and any such operation would rely almost entirely on it because of the heat required. In situ resource utilization (ISRU), the collection and processing of materials in space for human use, could be done with nuclear power on a large scale.

Sketch of nuclear thermal rocket

Nuclear propulsion methods, both nuclear thermal and nuclear electric, would allow for more efficient use of propellant. Nuclear thermal rockets, which have been studied at length by both the United States and the Soviet Union/Russia, heat a fluid, typically hydrogen, in a nuclear reactor and expand it out of a rocket nozzle to produce thrust. This yields a specific impulse almost double that of chemical propulsion. Specific impulse (usually abbreviated Isp) is a measure of the efficiency of a rocket engine. Higher specific impulse allows for reduced travel times, which would let future explorers stay longer on the Martian surface, and many Mars mission designs have used nuclear thermal rockets as their preferred propulsion. This was one of the main goals of the NERVA/Rover programs, and also one of the reasons they were cancelled. Solid-core nuclear thermal rockets have been well researched and ground tested. Liquid-core and gaseous-core engines would theoretically yield even higher specific impulses, opening up the outer Solar System to human exploration and eventual settlement.
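To show what a roughly doubled specific impulse buys, here is a minimal sketch of my own applying the Tsiolkovsky rocket equation; the ~450 s chemical and ~900 s solid-core NTR figures, the 4 km/s burn, and the 50-tonne dry mass are illustrative assumptions, not numbers from any specific mission design:

```python
# Illustrative sketch: propellant needed for a fixed delta-v and dry mass,
# comparing a ballpark chemical Isp with a ballpark solid-core NTR Isp.
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg: float, delta_v_ms: float, isp_s: float) -> float:
    """Propellant required, from the Tsiolkovsky rocket equation."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * G0))  # initial mass / final mass
    return dry_mass_kg * (mass_ratio - 1.0)

dv = 4000.0      # m/s, notional trans-Mars injection burn (assumption)
dry = 50_000.0   # kg, notional spacecraft dry mass (assumption)
for label, isp in [("chemical (~450 s)", 450.0), ("solid-core NTR (~900 s)", 900.0)]:
    print(f"{label:24s} {propellant_mass(dry, dv, isp) / 1000:6.1f} t of propellant")
# chemical (~450 s)          73.8 t of propellant
# solid-core NTR (~900 s)    28.7 t of propellant
```

Under these assumptions the NTR stage needs well under half the propellant, which is the mass margin that can be spent on faster trajectories or larger payloads.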

Where do we stand today? Since the cancellation of NERVA/Rover, there have been a few starts and stops to serious nuclear propulsion and fission power system development. Project Timberwind, part of the Strategic Defense Initiative, developed NTR for defense purposes but was cancelled before ground testing began; even so, some advances in materials technology were made. Project Prometheus began in 2003 with the purpose of developing smaller fission reactors for space applications as a joint effort between NASA and the U.S. Navy. Rather than nuclear thermal propulsion, the fission reactors developed in Project Prometheus were to be used for nuclear electric propulsion (NEP), using a reactor to run ion engines. This effort was to culminate in the Jupiter Icy Moons Orbiter (JIMO); both the project and the mission were cancelled in 2005. More recently, NASA's Marshall Space Flight Center has been testing nuclear fuels for nuclear thermal propulsion for a human Mars mission.

For any significant human exploration and settlement of the Solar System to take place, fission power systems and nuclear thermal and nuclear electric propulsion systems need to be researched, ground tested, space tested, and deployed into operation. These technologies need to be treated as a long-term space-infrastructure project.

NERVA/Rover engines were being developed not only for a possible Mars mission but also for a lunar shuttle. Some engines were designed to be turned on and off up to sixty times, allowing for such a shuttle. A similar set of goals needs to be established and studied today. Developing NTP designs with the sole goal of getting us to Mars is shortsighted; a more expansive set of goals should guide the development. A cislunar nuclear shuttle would allow for the development of Moon settlements. Supplying any permanent Mars or Moon settlement would require large amounts of supplies to be sent until advanced ISRU was well established, and NTP could do this. With liquid- and gaseous-core engines, travel times between Earth and Mars could be shortened, and a functioning interplanetary economy could eventually develop. These cores would also open up the resources of the asteroid belt and make possible the exploration and eventual settlement of the outer planets' moon systems. Without NTP, none of this is practical.

Fission power systems would allow settlements on Mars and the Moon to have more energy than solar alone could provide. This would lead to better resource development and utilization, and therefore the foundation of a self-sustaining space economy. Economies and settlements can only grow as much as their energy resources allow, and fission power would provide scalable energy systems with the excess capacity required for economic expansion. Just enough energy is not enough; there must be excess for any sort of successful economic development. Mining and processing asteroids would require large amounts of energy, particularly heat, which is much easier to supply with nuclear power systems.

There is already a large knowledge base for some of these technologies; however, it is spread mostly across various Department of Energy and NASA programs. Research projects in these fields have unfortunately been cancelled time and time again, subject to the whims of politics, making significant strides in technology development only to be shut down on the verge of taking the next step. Without this technology, no permanent human presence in space is possible. Until we take the development of nuclear space applications seriously, we will remain in low Earth orbit, and the only significant economic use of space will be satellites. Due to legal and regulatory constraints, it is not practical at this time for private companies such as SpaceX and ULA to develop nuclear-based propulsion, so the onus is on government agencies. A framework similar to the ISS, ITER or CERN that spreads the cost among several developed nations would make the effort cost-effective. It would also allow the project to continue if a backing country's political climate changes and it no longer sees the work as worthwhile.

The future of humanity's presence in space depends on the long-term development of nuclear space systems for settlement and exploration. It is an undertaking that will not reap immediate rewards but needs to be treated as a long-term research and development project, similar to the quest for nuclear fusion, because the long-term benefits to humanity are immense. It is the destiny of humanity to explore and settle the Solar System, and this is only possible through nuclear technology.

Originally on: http://www.adastranuclear.com
