Saturday 19 March 2011

History of research

Brief overview

The idea of using human-initiated fusion reactions was first made practical for military purposes, in nuclear weapons. In a hydrogen bomb, the energy released by a fission weapon is used to compress and heat fusion fuel, beginning a fusion reaction that releases a large number of neutrons that in turn increase the rate of fission. The first fission-fusion-fission weapons released some 500 times more energy than early fission weapons.

Attempts at controlling fusion had already started by this point. The first patent related to a fusion reactor[6] was registered in 1946 by the United Kingdom Atomic Energy Authority; the inventors were Sir George Paget Thomson and Moses Blackman. This was the first detailed examination of the pinch concept, and small efforts to experiment with it started at several sites in the UK.

Around the same time, an expatriate German proposed the Huemul Project in Argentina, announcing positive results in 1951. Although these results turned out to be false, they sparked intense interest around the world. The UK pinch programs were greatly expanded, culminating in the ZETA and Sceptre devices. In the US, pinch experiments like those in the UK started at the Los Alamos National Laboratory. Similar devices were built in the USSR after data on the UK program was passed to them by Klaus Fuchs. At Princeton University a new approach, the stellarator, was developed, and the research establishment formed there continues to this day as the Princeton Plasma Physics Laboratory. Not to be outdone, Lawrence Livermore National Laboratory entered the field with their own variation, the magnetic mirror. These three groups have remained the primary developers of fusion research in the US to this day.

In the time since these early experiments, two new approaches developed that have since come to dominate fusion research. The first was the tokamak approach developed in the Soviet Union, which combined features of the stellarator and pinch to produce a device that dramatically outperformed either. The majority of magnetic fusion research to this day has followed the tokamak approach. In the late 1960s the concept of "mechanical" fusion through the use of lasers was developed in the US, and Lawrence Livermore switched their attention from mirrors to lasers over time.

Civilian applications are still being developed. Although it took less than ten years for fission to go from military applications to civilian fission energy production,[7] it has been very different in the fusion energy field; more than fifty years have already passed[8] without any commercial fusion energy production plant coming into operation.
Pinch devices
A "wire array" used in Z-pinch confinement, shown during construction

A major area of study in early fusion power research was the "pinch" concept. Pinch is based on the fact that plasmas are electrically conducting. Running a current through the plasma generates a magnetic field around it; this field acts back on the current itself (the Lorentz force), creating an inward-directed force that causes the plasma to collapse inward, raising its density. A denser column carrying the same current has a stronger field at its surface, increasing the inward force and driving a runaway compression. If the conditions are correct, this can lead to the densities and temperatures needed for fusion. The trick is getting the current into the plasma; this is solved by inducing the current from an external magnet, which also produces the external field the internal field acts against.
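The current-field relationship described above can be sketched with two textbook formulas: Ampère's law for the field at the surface of the column, and the magnetic pressure squeezing it inward. The 100 kA discharge and 1 cm column radius below are purely illustrative numbers, not figures from any particular pinch device:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def azimuthal_field(current_a: float, radius_m: float) -> float:
    """Field (T) at the surface of a plasma column carrying an axial
    current, from Ampere's law for a long straight conductor."""
    return MU_0 * current_a / (2 * math.pi * radius_m)

def pinch_pressure(current_a: float, radius_m: float) -> float:
    """Magnetic pressure B^2 / (2*mu0), in Pa, compressing the column."""
    b = azimuthal_field(current_a, radius_m)
    return b ** 2 / (2 * MU_0)

# Illustrative: a 100 kA discharge through a 1 cm radius column
# gives a 2 T surface field and ~1.6 MPa of inward pressure.
b = azimuthal_field(1e5, 0.01)
p = pinch_pressure(1e5, 0.01)
```

Note how the pressure grows as the square of the field: shrinking the column radius at fixed current raises the surface field, which is the runaway compression the text describes.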

Pinch was first developed in the UK in the immediate post-war era. Starting in 1947 small experiments were carried out and plans were laid to build a much larger machine. When the Huemul results hit the news, James L. Tuck, a UK physicist working at Los Alamos, introduced the pinch concept in the US and produced a series of machines known as the Perhapsatron. In the Soviet Union, a series of similar machines were being built, unknown in the west. All of these devices quickly demonstrated a series of instabilities in the plasma when the pinch was applied, which broke up the plasma column long before it reached the densities and temperatures needed for fusion. In 1953 Tuck and others suggested a number of solutions to these problems.

The largest "classic" pinch device was the ZETA, which included all of these upgrades and started operations in the UK in 1957. In early 1958 John Cockcroft announced that fusion had been achieved in the ZETA, an announcement that made headlines around the world. When physicists in the US expressed concerns about the claims, they were initially dismissed. However, US experiments soon produced the same neutrons, although measurements suggested these could not be from fusion reactions. The neutrons seen in the UK were later shown to come from different versions of the same instability processes that had plagued earlier machines. Cockcroft was forced to retract the fusion claims, which tainted the entire field for years. ZETA ended its experiments in 1968, and most other pinch experiments ended shortly after.

In 1974 a study of the ZETA results demonstrated an interesting side-effect: after an experimental run ended, the plasma would enter a short period of stability. This led to the reversed field pinch concept, which has seen some level of development ever since. Recent work on the basic concept started as a result of the appearance of the "wire array" concept in the 1980s, which allowed a more efficient use of this technique. Sandia National Laboratories runs a continuing wire-array research program with its Z machine. In addition, the University of Washington's ZaP Lab has shown quiescent periods of stability hundreds of times longer than expected for plasma in a Z-pinch configuration, giving promise to the confinement technique.

In 1995, the staged Z-pinch concept was introduced by a team of scientists from the University of California, Irvine (UCI). This scheme can control one of the most dangerous instabilities, which normally disintegrates a conventional Z-pinch before the final implosion. The concept is based on a complex load of radiative liner plasma embedded with a target plasma. During implosion the outer surface of the liner plasma becomes unstable, but the target plasma remains remarkably stable up until the final implosion, generating a very high energy density stable target plasma. The heating mechanisms are shock heating, adiabatic compression, and trapping of charged particles produced in fusion reactions by the very strong magnetic field that develops between the liner and the target. Details of this concept are given in various publications available on the web page of MIFTI [1].
Early magnetic approaches

The U.S. fusion program began in 1951 when Lyman Spitzer began work on a stellarator under the code name Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory, where magnetically confined plasmas are still studied. Spitzer planned an aggressive development project of four machines, A, B, C, and D. A and B were small research devices, C would be the prototype of a power-producing machine, and D would be the prototype of a commercial device. A worked without issue, but even by the time B was being used it was clear the stellarator was also suffering from instabilities and plasma leakage. Progress on C slowed as attempts were made to correct for these problems.

At Lawrence Livermore, the magnetic mirror was the preferred approach. The mirror consisted of two large magnets arranged so they had strong fields within them, and a weaker, but connected, field between them. Plasma introduced in the area between the two magnets would "bounce back" from the stronger fields at either end. Although the design would leak plasma through the mirrors, the rate of leakage would be low enough that a useful fusion rate could be maintained. The simplicity of the design was supposed to make up for its lower performance. In practice the mirror also suffered from mysterious leakage problems, and never reached the expected performance.
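The leakage mentioned above has a simple textbook description: particles moving too nearly along the axis fall inside a "loss cone" and escape out the ends. A minimal sketch of that relation, using an illustrative mirror ratio of 4 (not a parameter of any actual Livermore machine):

```python
import math

def loss_cone_angle(mirror_ratio: float) -> float:
    """Half-angle (radians) of the loss cone for a magnetic mirror with
    mirror ratio R = B_max / B_min, from sin^2(theta) = 1/R."""
    return math.asin(math.sqrt(1.0 / mirror_ratio))

def lost_fraction(mirror_ratio: float) -> float:
    """Fraction of an isotropic particle population that starts inside
    the two loss cones (solid-angle fraction 1 - cos(theta))."""
    return 1.0 - math.cos(loss_cone_angle(mirror_ratio))

# Illustrative: mirror ratio 4 gives a 30 degree loss-cone half-angle,
# so roughly 13% of an isotropic population escapes immediately.
theta_deg = math.degrees(loss_cone_angle(4.0))
f = lost_fraction(4.0)
```

Collisions continually scatter particles back into the loss cone, which is why even a well-built mirror leaks continuously rather than just once.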
Gun Club, MHD, instability; progress slows

By the mid-1950s it was clear that the simple theoretical tools being used to calculate the performance of all fusion machines were simply not predicting their actual behaviour. Machines invariably leaked their plasma from their confinement area at rates far higher than predicted.

In 1954, Edward Teller held a gathering of fusion researchers at the Princeton Gun Club, near the Project Matterhorn (now known as Project Sherwood) grounds. Teller started by pointing out the problems that everyone was having, and suggested that any system where the plasma was confined within concave fields was doomed to fail. Attendees remember him saying something to the effect that the fields were like rubber bands, and they would attempt to snap back to a straight configuration whenever the power was increased, ejecting the plasma. He went on to say that it appeared the only way to confine the plasma in a stable configuration would be to use convex fields, a "cusp" configuration.[9]

When the meeting concluded, most of the researchers quickly turned out papers saying why Teller's concerns did not apply to their particular device. The pinch machines did not use magnetic fields in this way at all, while the mirror and stellarator seemed to have various ways out. However, this was soon followed by a paper by Martin David Kruskal and Martin Schwarzschild discussing pinch machines, which demonstrated that instabilities in those devices were inherent to the design. A series of similar studies followed, abandoning the simplistic theories previously used and introducing a full consideration of magnetohydrodynamics with a partially-resistive plasma. These concepts developed quickly, and by the early 1960s it was clear that small devices simply would not work. A series of much larger and more complex devices followed as researchers attempted to add field upon field in order to provide the required field strength without reaching the unstable regimes. As cost and complexity climbed, the initial optimism of the fusion field faded.
The tokamak is announced

A new approach was outlined in theoretical work carried out in 1950–1951 by I.E. Tamm and A.D. Sakharov in the Soviet Union, which first discussed a tokamak-like approach. Experimental research on these designs began in 1956 at the Kurchatov Institute in Moscow by a group of Soviet scientists led by Lev Artsimovich. The tokamak essentially combined a low-power pinch device with a low-power simple stellarator. The key was to combine the fields in such a way that the particles wound around the reactor a particular number of times, a ratio today known as the "safety factor". The combination of these fields dramatically improved confinement times and densities, resulting in huge improvements over existing devices.
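In the simplest large-aspect-ratio picture, the safety factor described above is the number of toroidal turns a field line makes per poloidal turn, set by the geometry and the two field components. A minimal sketch with made-up radii and field strengths (not the parameters of any real tokamak):

```python
def safety_factor(minor_radius_m: float, major_radius_m: float,
                  b_toroidal_t: float, b_poloidal_t: float) -> float:
    """Large-aspect-ratio estimate of the tokamak safety factor:
    q ~ (r * B_t) / (R * B_p)."""
    return (minor_radius_m * b_toroidal_t) / (major_radius_m * b_poloidal_t)

# Illustrative numbers only: r = 0.5 m, R = 2 m, B_t = 4 T, B_p = 0.5 T
# gives q = 2, i.e. a field line circles the torus twice the long way
# for each turn the short way.
q = safety_factor(0.5, 2.0, 4.0, 0.5)
```

Keeping q above 1 everywhere is what suppresses the kink instability that destroyed the pure pinch machines; this is the sense in which the tokamak's combined fields outperformed either parent concept.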

The group constructed the first tokamaks, the most successful being the T-3 and its larger version T-4. T-4 was tested in 1968 in Novosibirsk, producing the first quasistationary thermonuclear fusion reaction ever.[10] The tokamak was dramatically more efficient than the other approaches of that era, on the order of 10 to 100 times. When the results were first announced, the international community was highly skeptical. However, a British team was invited to see T-3, and after measuring it in depth they released results confirming the Soviet claims. A burst of activity followed as many planned devices were abandoned and new tokamaks were introduced in their place - the C model stellarator, then under construction after many redesigns, was quickly converted to the Symmetrical Tokamak and the stellarator was abandoned.

Through the 1970s and 80s great strides in understanding the tokamak system were made. A number of improvements to the design are now part of the "advanced tokamak" concept, which includes non-circular plasmas, internal divertors and limiters, often superconducting magnets, and operation in the so-called "H-mode" island of increased stability. Two other designs have also become fairly well studied; the compact tokamak places the magnets on the inside of the vacuum chamber, while the spherical tokamak reduces its cross section as much as possible.

The tokamak dominates modern research, where very large devices like ITER are expected to pass several milestones toward commercial power production, including a burning plasma with long burn times, high power output, and online fueling. There are no guarantees that the project will be successful; previous generations of tokamak machines have uncovered new problems many times. But the entire field of high temperature plasmas is much better understood now than formerly, and there is considerable optimism that ITER will meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar in purpose to the very earliest power-producing fission reactors built in the era before wide-scale commercial deployment of larger machines started in the 1960s and 1970s.

Even with these goals met, there are a number of major engineering problems remaining, notably finding suitable "low activity" materials for reactor construction, demonstrating secondary systems including practical tritium extraction, and building reactor designs that allow their reactor core to be removed when its materials become embrittled due to the neutron flux. Practical commercial generators based on the tokamak concept are far in the future. The public at large has been disappointed, as the initial outlook for practical fusion power plants was much rosier; a pamphlet from the 1970s printed by General Atomic stated that "Several commercial fusion reactors are expected to be online by the year 2000."
Laser inertial devices

The technique of imploding a microcapsule irradiated by laser beams, the basis of laser inertial confinement, was first suggested in 1962 by scientists at Lawrence Livermore National Laboratory, shortly after the invention of the laser itself in 1960. Lasers of the era were very low powered, but low-level research using them nevertheless started as early as 1965. A great advance in the field was John Nuckolls' 1972 paper, which predicted that ignition would require lasers of about 1 kJ, and efficient burn around 1 MJ. Kilojoule-class lasers were just beyond the state of the art at the time, and his paper sparked off a tremendous development effort to produce devices of the needed power.

Early machines used a variety of approaches to attack one of two problems - some focused on fast delivery of energy, while others were more interested in beam smoothness. Both were attempts to ensure the energy delivery would be smooth enough to cause an even implosion. However, these experiments demonstrated a serious problem; laser wavelengths in the infrared area lost a tremendous amount of energy before compressing the fuel. Important breakthroughs in this laser technology were made at the Laboratory for Laser Energetics at the University of Rochester, where scientists used frequency-tripling crystals to transform the infrared laser beams into ultraviolet beams. By the late 1970s great strides had been made in laser power, but with each increase new problems were found in the implosion technique that suggested even more power would be required. By the 1980s these increases were so large that using the concept for generating net energy seemed remote. Most research in this field turned to weapons research, always a second line of research, as the implosion concept is somewhat similar to hydrogen bomb operation. Work on very large versions continued as a result, with the very large National Ignition Facility in the US and Laser Mégajoule in France supporting these research programs.

More recent work has demonstrated that significant savings in the required laser energy are possible using a technique known as "fast ignition". The savings are so dramatic that the concept appears to be a useful technique for energy production again, so much so that it is a serious contender for pre-commercial development. There are proposals to build an experimental facility dedicated to the fast ignition approach, known as HiPER. At the same time, advances in solid state lasers appear to improve the "driver" systems' efficiency by about ten times (to 10–20%), savings that make even the large "traditional" machines almost practical, and might make the fast ignition concept outpace the magnetic approaches in further development.

The laser-based concept has other advantages. The reactor core is mostly exposed, as opposed to being wrapped in a huge magnet as in the tokamak. This makes the problem of removing energy from the system somewhat simpler, and should mean that a laser-based device would be much easier to perform maintenance on, such as core replacement. Additionally, the lack of strong magnetic fields allows for a wider variety of low-activation materials, including carbon fiber, which would reduce both the degree of neutron activation and the rate of irradiation damage to the core. In other ways the program has many of the same problems as the tokamak; practical methods of energy removal and tritium recycling need to be demonstrated.
Other inertial devices

Philo T. Farnsworth, the inventor of the first all-electronic television system in 1927, patented his first Fusor design in 1968, a device that uses inertial electrostatic confinement. This system consists largely of two concentric spherical electrical grids inside a vacuum chamber into which a small amount of fusion fuel is introduced. Voltage across the grids causes the fuel to ionize around them, and positively charged ions are accelerated towards the center of the chamber. Those ions may collide and fuse with ions coming from the other direction, may scatter without fusing, or may pass directly through. In the latter two cases, the ions will tend to be stopped by the electric field and re-accelerated toward the center. Fusors can also use ion guns rather than electric grids.

Towards the end of the 1960s, Robert Hirsch designed a variant of the Farnsworth Fusor known as the Hirsch-Meeks fusor. This variant is a considerable improvement over the Farnsworth design, and is able to generate neutron flux on the order of one billion neutrons per second. Although the efficiency was very low at first, there were hopes the device could be scaled up, but continued development demonstrated that this approach would be impractical for large machines. Nevertheless, fusion could be achieved using a "lab bench top" type set up for the first time, at minimal cost. This type of fusor found its first application as a portable neutron generator in the late 1990s. An automated sealed reaction chamber version of this device, commercially named Fusionstar, was developed by EADS but abandoned in 2001. Its successor is the NSD-Fusion neutron generator.
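The fusor's operating principle reduces to a simple energy relation: a singly charged ion falling through the full grid potential gains the voltage in electron-volts of kinetic energy. A minimal sketch using an illustrative 30 kV grid voltage (a typical hobbyist-scale figure, not taken from the text above):

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
DEUTERON_MASS = 3.3435838e-27  # deuteron mass, kg

def ion_energy_ev(grid_voltage_v: float) -> float:
    """Kinetic energy (eV) gained by a singly charged ion falling
    through the full grid potential: numerically equal to the voltage."""
    return grid_voltage_v

def deuteron_speed(grid_voltage_v: float) -> float:
    """Non-relativistic speed (m/s) of a deuteron after acceleration."""
    energy_j = E_CHARGE * grid_voltage_v
    return math.sqrt(2.0 * energy_j / DEUTERON_MASS)

# Illustrative: a 30 kV grid yields 30 keV deuterons moving at
# roughly 1.7e6 m/s when they converge at the center.
speed = deuteron_speed(30e3)
```

Tens of keV is enough for a measurable D-D fusion rate when two such ions meet head-on, which is why a bench-top fusor produces real neutrons despite consuming far more power than it releases.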

Robert W. Bussard's Polywell concept is roughly similar to that of the Fusor, but replaces the problematic grid with a magnetically contained electron cloud, which holds the ions in position and provides an accelerating potential. The polywell consists of electromagnet coils arranged in a polyhedral configuration and positively charged to between several tens and low hundreds of kilovolts. This charged magnetic polyhedron is called a MaGrid (Magnetic Grid). Electrons are introduced outside the "quasi-spherical" MaGrid and are accelerated into it by the electric field. Within the MaGrid, magnetic fields confine most of the electrons, and those that escape are retained by the electric field. This configuration traps the electrons in the middle of the device, focusing them near the center to produce a virtual cathode (negative electric potential). The virtual cathode accelerates and confines the ions to be fused, which, except for minimal losses, never reach the physical structure of the MaGrid. Bussard reported a fusion rate of 10⁹ reactions per second running D-D fusion at only 12.5 kV (based on detecting a total of nine neutrons in five tests). Bussard claimed that a scaled-up version, 2.5–3 m in diameter, would operate at over 100 MW net power (fusion power scales as the fourth power of the B field and the cube of the size).[11]
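The scaling law quoted at the end of the paragraph (fusion power ∝ B⁴ × size³) is easy to express directly. The doubling and tripling factors below are hypothetical, chosen only to show how steep the claimed scaling is, and the law itself is Bussard's claim rather than an established result:

```python
def scaled_fusion_power(p_ref: float, b_ref: float, l_ref: float,
                        b_new: float, l_new: float) -> float:
    """Extrapolate fusion power under the claimed Polywell scaling:
    P ~ B^4 * L^3, relative to a reference device."""
    return p_ref * (b_new / b_ref) ** 4 * (l_new / l_ref) ** 3

# Hypothetical: doubling the magnetic field and tripling the size
# multiplies power by 2^4 * 3^3 = 432.
factor = scaled_fusion_power(1.0, 1.0, 1.0, 2.0, 3.0)
```

It is this very steep scaling that underlies the jump from a nine-neutron test result to a claimed 100 MW machine; whether the scaling actually holds at large size was never demonstrated.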

A recent area of study is the magneto-inertial fusion (MIF) concept, which combines some form of external inertial compression (like lasers) with further compression through an external magnet (like pinch devices). The magnetic field traps heat within the inertial core, causing a variety of effects that improve fusion rates. These improvements are relatively minor; however, the magnetic drivers themselves are inexpensive compared to lasers or other systems. There is hope for a sweet spot that allows the combination of features from these devices to create low-density but also low-cost fusion devices. A similar concept is the magnetized target fusion device, which uses a magnetic field in an external metal shell to achieve the same basic goals.
Other systems

Over the years there have been a wide variety of fusion concepts. In general they fall into three groups - those that attempt to reach high temperature and density for brief times (pinch, inertial confinement), those that operate at a steady state (magnetic confinement), and those that try neither and instead attempt to produce small quantities of fusion at extremely low cost. The latter group has largely disappeared, as the difficulties of achieving fusion have demonstrated that any low-energy device is unlikely to produce net gain. This leaves the two major approaches, magnetic and laser inertial, as the leading systems for development funding. However, alternate approaches continue to be developed, and alternate non-power fusion devices have been successfully developed as well.

Focus fusion takes place in a device called a dense plasma focus, which typically consists of two coaxial cylindrical electrodes made from copper or beryllium, housed in a vacuum chamber containing a low-pressure gas that serves as the reactor fuel. An electrical pulse is applied across the electrodes, producing heating and a magnetic field. The current forms the hot gas into many minuscule vortices perpendicular to the surfaces of the electrodes, which then migrate to the end of the inner electrode and pinch-and-twist off as tiny balls of plasma called plasmoids. An electron beam collides with the plasmoid, heating it to fusion temperatures; in principle the resulting beams could yield more energy than was input to form them.

In April 2005, a team from UCLA announced it had devised a way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to smash deuterium atoms together. However, the process does not generate net power. See Pyroelectric fusion. Such a device would be useful in the same sort of roles as the fusor.
