In this post, I’m going to consider the power density issue for fusion energy. The focus for now is on the technical obstacles, rather than the economic motivation for increasing power density. I’ll be referring back to the previous post summarizing the Jassby & Lidsky critiques.
The deuterium-tritium (D-T) fusion reaction involves significant disadvantages due to tritium & high-energy neutrons. If we eliminated radioactivity and tritium, that would nullify all the political/societal issues, two of the three economic issues, and one of the purely technical issues, while simplifying another. Lidsky proposed changing research policy to look for reactor concepts that could use alternative fuels. So, what’s the catch?
The maximum fusion reaction power density (at a given reactor pressure) is ~50x more for D-T than for any other fuel choice. This is not only bad for the reactor economics, but may prevent the reactor from functioning at all. Lowering the power density at given conditions also makes it that much harder to achieve net energy gain. For instance, if a reactor using D-T had a gain of 25, switching to aneutronic fuel would make the gain <1, so the reactor would not produce power.
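To make the stakes concrete, here's the arithmetic as a tiny sketch (my own simplification: it assumes gain scales linearly with fusion power density at fixed plasma conditions, which is optimistic for the aneutronic case):

```python
# If gain scaled linearly with fusion power density at fixed conditions
# (a simplification), the ~50x power-density penalty of aneutronic fuels
# would push a gain-25 D-T reactor below breakeven.
dt_gain = 25
power_density_penalty = 50  # D-T vs. best alternative fuel, at given pressure

aneutronic_gain = dt_gain / power_density_penalty
print(aneutronic_gain)      # 0.5
print(aneutronic_gain < 1)  # True -> net consumer of power
```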
The next best fuel choice, deuterium – helium-3, requires mining the moon or the gas giants – this rules it out for the foreseeable future. It also yields only a ~10x reduction in neutron power (possibly more, but at a further cost to the power density). This reaction would eliminate concerns about tritium leakage and breeding. It might relieve some of the difficulty of materials selection, replacement rates, and radioactive waste, but the reactor would still be too radioactive to repair easily, and could still produce weapons material easily.
The deuterium-deuterium reaction has the 3rd highest reaction rate. Like deuterium – helium-3, this choice also avoids tritium breeding and handling. Better yet, it doesn’t require exotic space mining. However, neutron production is only reduced by a factor of ~2 compared to D-T. (Possibly the neutron production could be lowered somewhat further, but again this would come at the expense of power density.) Pure deuterium reactors might be useful in hundreds/thousands of years if lithium (used for breeding tritium) becomes scarce – the oceans have enough deuterium for millions of years of consumption.
The 4th reaction is hydrogen-boron. The peak power density is about 500x lower than D-T. In fact, it’s just barely above the power losses due to X-ray radiation – leaving very little room for any other losses to be allowed. However, this reaction produces very little neutron radiation & activation, and does not involve tritium. Boron & hydrogen are both plentiful. This would be the ideal fusion fuel – if massive breakthroughs in plasma confinement could be made.
The upshot is that D-T is the best choice for maximizing power density. The next questions are: how much power density can we get, and how much do we need?
Power density: getting it
The thermal output power density of a light water reactor is around 50-100 MW/m3, considering the volume of the pressure vessel — can fusion match this? The answer is ‘yes’ — at least in principle. This doesn’t imply that fusion can compete economically with fission, nor that pushing the power density this high optimizes the economics of fusion. Nonetheless, it’s an area where Lidsky’s critique no longer holds up.
Fusion power density scales as the plasma pressure squared: at the optimum temperature, the DT reaction yields 0.34 MW/(m3 bar2). For a magnetically-confined plasma, the plasma pressure is less than or equal to the magnetic field ‘pressure’ (energy density) — the ratio of the two pressures is called ‘beta.’ The magnetic pressure scales as the magnetic field squared, with a coefficient of about 4 bar/T2.
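These two coefficients are all we need to turn a field strength and a ‘beta’ into a peak power density. A minimal sketch, using only the numbers quoted above:

```python
def fusion_power_density(b_field_tesla, beta):
    """Peak D-T fusion power density [MW/m^3] at the optimum temperature.

    Uses the coefficients quoted above: 0.34 MW/(m^3 bar^2) for the
    reaction, and a magnetic 'pressure' of ~4 bar/T^2 times B^2.
    """
    plasma_pressure_bar = beta * 4.0 * b_field_tesla**2
    return 0.34 * plasma_pressure_bar**2

print(fusion_power_density(5.0, 1.0))  # 3400 MW/m^3 at 5 T and 100% beta
```

Note the steep scaling: power density goes as beta squared times the field to the fourth power, which is why both knobs matter so much below.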
The ‘beta’ ratio needs to be as high as possible — this favors concepts like the FRC (100%), Z-pinch (100%), magnetic mirror (40-60%), spheromak (~40%), or reversed-field pinch (~25%), compared to tokamaks, which top out around 10%, and stellarators (1-5%). (The Z-pinch is unique because it doesn’t have external magnetic coils – a current flowing through the plasma supplies the magnetic field. Its maximum achievable field strength is therefore not limited by the capability of superconductors.)
To put some numbers on the power density: Assuming the ‘beta’ ratio is 100%, then for a 5 T magnetic field, the maximum possible fusion power density is around 3400 MW/m3. (5 T is about the limit with existing ‘low temperature’ superconductors.) However, when averaging over the plasma volume, the achievable power density is perhaps 20% of this number, because the pressure rises gradually from the plasma edge to the center. Still, that’s around 680 MW/m3. Suppose the plasma is cylindrical with radius about 1 meter, and it has a shield of about 1.5 meters thickness surrounding it (to breed tritium, extract heat, and protect the magnets). The power averaged over the volume of the outer cylinder would be around 100 MW/m3.
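The chain of dilution factors in that paragraph is easy to lose track of, so here it is spelled out step by step (same numbers as above; the 20% profile factor and the 1 m / 1.5 m geometry are the assumptions stated in the text):

```python
peak = 3400.0          # MW/m^3: 5 T at 100% beta, from the scaling above
profile_factor = 0.2   # pressure falls off from center to edge of plasma
plasma_avg = peak * profile_factor  # ~680 MW/m^3 averaged over the plasma

r_plasma = 1.0   # m, plasma column radius
t_shield = 1.5   # m, surrounding blanket/shield thickness
# Per unit length of the cylinder, dilute by the ratio of cross-sections:
dilution = (r_plasma / (r_plasma + t_shield)) ** 2
system_avg = plasma_avg * dilution
print(round(system_avg))  # ~109 MW/m^3, i.e. roughly 100
```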
Thus, it’s possible in principle to have power density comparable to a fission reactor, even using magnetically-confined fusion. Lidsky assumed a tokamak with ~5 T magnetic field (the limit given the superconductors of the time), which only has ~10% ‘beta.’ Thus, the power density would be 100x lower, around 1 MW/m3.
The high-beta approach is one way to attack the power density problem. The higher magnetic fields made possible by new REBCO superconductors are another avenue. If 16 T is possible, as seems to be the case, then the power density of a 10%-beta tokamak would be the same as that of a linear device with a 5 T field and 100% beta – around the 100 MW/m3 mark. Combining high-field superconductors with a 100%-beta reactor could potentially allow advanced fuels to reach power density near 100 MW/m3 as well.
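Plugging both scenarios into the same scaling shows why they land in nearly the same place — 10% beta at 16 T gives almost exactly the same plasma pressure (and hence power density) as 100% beta at 5 T:

```python
def peak_power_density(b, beta):
    # Peak D-T power density [MW/m^3], using the scaling quoted earlier:
    # pressure = beta * 4 bar/T^2 * B^2, power density = 0.34 * pressure^2.
    return 0.34 * (beta * 4.0 * b**2) ** 2

print(peak_power_density(5.0, 1.00))   # 3400 MW/m^3 (100%-beta linear device)
print(peak_power_density(16.0, 0.10))  # ~3565 MW/m^3 (10%-beta REBCO tokamak)
```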
Power density: dealing with it
For most of this section, I’m assuming we stick with D-T fuel; I’ll address hydrogen-boron at the end of this section. For the D-T reaction, 20% of the fusion power is released as charged particles (helium nuclei), which heat the plasma. For our hypothetical 1-meter radius cylindrical plasma column, the ratio of the charged-particle power to wall surface area is about 20 MW/m2. This is comparable to the heat load on re-entry, and only 1/3 of the heat flux at the surface of the sun! Beam dumps and divertors for tokamaks are required to withstand 10-20 MW/m2. This appears to be close to the limits of what is achievable with known materials. Also, the thermal conductivity of materials tends to degrade under neutron radiation, as the crystal structures become disorganized. Thus, even if we can produce power density comparable to a fission reactor, we may not be able to cope with the resulting heat flux. Is there a work-around?
For toroidal devices (tokamak, stellarator, RFP), the plasma is topologically trapped inside the coils, so the charged-particle portion of the power must exit through the wall. Linear systems like the FRC and mirror get a free pass — the magnetic field lines can extend out of the cylindrical vessel and flare out, so that the heat is deposited over a larger surface area. Some of the power will still be radiated onto the vessel wall, but it might be as little as 10% of the total heat flux for D-T plasmas. (Note that in tokamak designs, the heat flux is concentrated at the divertor, so the problem is even worse than if the heat were distributed uniformly. It may be possible to spread the heat uniformly by intentionally introducing impurities to increase the X-ray radiation from the plasma, however.)
However, there’s another problem beyond heat flux. For our hypothetical D-T reactor, the neutron flux escaping the plasma is 80 MW/m2. This translates to about 900 displacements per atom (dpa) per year at the first surface, for steel. Steel is likely to only survive about 100-200 dpa before needing replacement. Replacing the first wall several times per year is probably a show-stopper, as it: (1) eats into the capacity factor, (2) increases operations & maintenance costs, and (3) results in a large volume of (low-level) radioactive waste.
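A quick back-of-envelope using the figures above shows just how punishing that replacement schedule would be:

```python
# How often would a steel first wall need replacement at 900 dpa/yr,
# given an assumed tolerance of 100-200 dpa before replacement?
dpa_per_year = 900
dpa_limits = (100, 200)

months_between_swaps = [12 * limit / dpa_per_year for limit in dpa_limits]
for limit, months in zip(dpa_limits, months_between_swaps):
    print(f"{limit} dpa limit -> replace every {months:.1f} months")
# 100 dpa -> every 1.3 months; 200 dpa -> every 2.7 months
```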
Stellarator & tokamak designs typically call for ship-in-a-bottle robotic assembly of the blanket & plasma-facing components inside the cage formed by the magnet coils. The estimated time to repair/replace the first wall is in the range of months — clearly this cannot be done every month! Hence, existing tokamak designs are driven to low power density in order to prolong the life of the first wall. The ARC study proposed disassembling the magnets & lifting the inner components out in one piece — still far from simple & quick.
Linear devices have an advantage from a maintenance perspective, compared to toroidal designs. However, even if replacement is quick & simple, it’s better to maximize the lifetime of the wall components, to reduce radioactive waste. Some materials may survive longer than others, but there are not many elements to choose from if producing high-level waste is to be avoided.
An obvious way forward is to replace the solid material walls with flowing liquid metal or liquid salt. The liquid should contain lithium for breeding tritium. A layer of 50 cm of liquid FLiBe salt would reduce the flux from 900 to around 10 dpa/yr, allowing a 20-year lifetime for the first solid surface. Another option is lead/lithium alloy. Liquid first walls solve both the neutron damage problem and the heat flux problem, if the flow is fast enough. There are several drawbacks, however:
- Splashing of droplets into the plasma must be prevented — splashes could extinguish the plasma unexpectedly
- Plasma sensors and actuators (RF antennas, particle beam or pellet injection, etc) would be hard to accommodate
- If the liquid is metal, the magnetic field can increase drag and result in large energy consumption for pumping the liquid
- There may be corrosion problems, especially for liquid metals, but also for salts
- The coolant temperature must be kept low enough not to poison the plasma due to heightened vapor pressure — this restricts the thermodynamic efficiency of the turbines used to produce electricity.
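Incidentally, the FLiBe numbers quoted above (900 down to 10 dpa/yr through 50 cm of liquid) let us back out an effective attenuation length, if we assume the damage rate falls off roughly exponentially through the blanket — my assumption, not something stated in the post:

```python
import math

dpa_bare = 900.0      # dpa/yr at a bare first wall
dpa_shielded = 10.0   # dpa/yr behind 50 cm of FLiBe
thickness_cm = 50.0

# Assuming exponential attenuation: dpa_shielded = dpa_bare * exp(-t/L)
efolding_cm = thickness_cm / math.log(dpa_bare / dpa_shielded)
print(round(efolding_cm, 1))  # ~11.1 cm e-folding length
```

That ~11 cm scale is why the blanket must be tens of centimeters thick, and why every extra decade of protection costs roughly the same additional thickness.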
Power handling: with hydrogen-boron
For the hydrogen-boron reaction, essentially all the charged-particle heat flux would emerge as X-rays hitting the wall. At 100 MW/m2, a liquid first surface is probably a necessity for this fuel as well, to handle the heat flux. The choice of liquid is more flexible, since tritium breeding is not required.
Fusion ‘Fuel Rods’
Fission reactors reduce the heat flux challenges they face by splitting the fuel up into many long, thin rods, lowering the volume-to-surface-area ratio. In principle, this could be done for fusion as well. The problem with splitting up the fusion plasma is that fusion gain depends on good thermal insulation of the plasma – and reducing the volume-to-surface-area ratio reduces the insulation value, so to speak. For fission reactors, this effect is actually beneficial, as it keeps the temperature at the center of the fuel rods below the melting point of the fuel, whereas fusion plasmas need to be kept hot at the center. I don’t want to dive into the physics of plasma transport at this point in the series, but for now I’ll say that it seems unlikely that the ‘fuel rod’ approach would work.
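The geometry behind that trade-off is simple to check. My own illustration: split one cylindrical plasma into n thinner ‘rods’ of the same total cross-section, and the surface area per unit volume grows as the square root of n — good for spreading heat flux, bad for thermal insulation:

```python
import math

def area_to_volume_ratio(n_rods, big_radius=1.0):
    """Surface-to-volume ratio (per unit length) after splitting one
    cylinder of radius big_radius into n_rods of equal total cross-section."""
    r = big_radius / math.sqrt(n_rods)   # each rod's radius
    area = n_rods * 2 * math.pi * r      # total surface area per unit length
    volume = n_rods * math.pi * r**2     # total volume per unit length
    return area / volume                 # = 2*sqrt(n)/big_radius

print(area_to_volume_ratio(1))    # 2.0
print(area_to_volume_ratio(100))  # 20.0 -- 10x more surface per unit volume
```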
All of engineering is trade-offs. Optimizing individual components of a system in isolation doesn’t generally lead to the optimal system. The optimal fusion reactor might not involve pushing the power density all the way to the maximum. Nonetheless, it appears to be possible in theory to achieve power density comparable to a fission reactor, contrary to the assertions of critics. The trade-off is that liquid first walls would probably be required to cope with the extreme neutron &/or heat fluxes produced, and liquid first walls have their own disadvantages. With conventional solid walls, power densities around 10 MW/m3 or less would be more feasible. It remains to be seen whether this is sufficient to make fusion economical.
 (See Fig. 4 of “Fusion reactivity of the p-B11 plasma revisited” by S.V. Putvinski et al., Nuclear Fusion 59, 076018 (2019).)
 It might be possible to convert most of the plasma heat exhaust to some form of directed energy (the Carnot efficiency of a heat engine operating at thermonuclear temperature is > 99%), but it’s not been demonstrated for a thermal plasma.