I work in inertial confinement (e.g. laser) fusion research, and I second the comment that we don't yet know what the best approach to implementing fusion is. There are radically different approaches within laser fusion, within inertial confinement more broadly, and within magnetic fusion (e.g. tokamaks) - and we won't know which are the real winners until a) we ignite some burning plasmas and study how they really behave in practice, and b) several of the different approaches get enough funding to be properly evaluated. There are plenty of researchers and engineers who work on these problems: the issue is bankrolls.
I suspect you mean: systems that
1. are capable of autonomous behavior in an arbitrary domain (i.e. not restricted to finite-scale problems like driving/chess/Jeopardy),
2. are capable of identifying and taking actions that pursue a particular goal,
3. include a concept of self, alongside other concepts, and
4. have a goal of self-preservation and/or self-improvement.
Item 3 may not be necessary (we may think of animals as being sentient at some level, yet lacking a concept of self). Similarly, item 4 may not be necessary (e.g. Asimov's robots might still be considered sentient if the Third Law were removed).
In any event, I think that without robustly defining sentience, it's impossible to make a good estimate of what it would take to build a sentient system.