Appendix A. Methodological Issues in Theoretical Applied Science

A.1. The role of theoretical applied science

This appendix discusses issues at a higher (but less mathematical) level of abstraction than the rest of this volume. It considers methodological issues associated with efforts to understand technological possibilities, examining how trade-offs in goals and intellectual effort can make possible surprisingly robust and fault-tolerant reasoning. It offers a perspective from which to judge studies of molecular nanotechnology; this perspective may be useful to researchers exploring technological possibilities in other fields.

Part II of this volume is an exercise in theoretical applied science,1 a mode of research which aims to describe technological possibilities as constrained not by present-day laboratory and factory techniques, but by physical law. (To provide a definite basis for analysis, this is taken to be physical law as presently understood.) Within conventional (experimental) applied science, theoretical studies are used to design realizable instruments and experiments, and then to interpret their results. This use of theory thus centers around the experimental demonstrations of useful new phenomena. Theoretical applied science, however, like theoretical physics, produces no experimental results: a typical product is instead a theoretical analysis demonstrating the possibility of a class of as-yet unrealizable devices, including estimated lower bounds on their performance. Theoretical applied science is likewise distinct from engineering, which pursues the economical, near-term production of physical devices. Theoretical applied science fills a gap in the matrix resulting from the familiar distinctions of theoretical vs. experimental and pure vs. applied (Figure A.1).

There are several reasons for undertaking research of this kind: First, theoretical applied science can be viewed as a branch of theoretical physics, studying certain time-independent consequences of physical law; it is thus of basic scientific interest. Second, theoretical applied science can expose otherwise unexpected rewards from pursuing particular research directions in laboratories today; it can thus improve the allocation of scientific resources. Finally, a better understanding of physical possibilities, when linked to feasible development programs (as in Part III), can yield a better-informed picture of future capabilities; this is of broad importance if one wishes to make realistic plans.

Figure A.1. Theoretical applied science in a matrix of familiar distinctions.

The study of technological possibilities is closely allied with, yet distinct from, more familiar forms of research in science and engineering. Its technical content (drawing extensively from physical theory and experimental results) and the nature of its product (knowledge, rather than hardware) link it closely to scientific research. Yet it is also closely akin to engineering: studying technological possibilities poses problems of design and analysis. The products of theoretical applied science can be termed exploratory designs, although some take the form of a rather abstract analysis.

In some instances, the systems under study may be so many steps removed from present fabrication technologies (or their capabilities may be of so little practical value) that the research is as theoretical and noncommercial as the purest science. In other instances, the systems under study may be so useful and accessible that exploratory studies can promptly transition into development. Part III concentrates on more accessible systems; Part II concentrates on useful but less accessible systems. This appendix examines methodological issues common to both. It proceeds largely by examining similarities and differences among engineering, pure science, and theoretical applied science (some of the remarks regarding engineering could, with some adaptation, be applied to experimental applied science).

As illustrated in this volume, familiar principles of science and engineering can be used to formulate and evaluate technological concepts that are beyond the reach of implementation by present fabrication techniques. In addition to the concrete technical issues raised by molecular nanotechnology, studies of this sort raise basic issues of objective and methodology. To understand the requirements for successful reasoning in theoretical applied science, one must understand what it does and does not attempt to achieve, and how the sacrifice of one set of goals can facilitate the achievement of another. Theoretical applied science operates under stringent constraints stemming from the inability to make and test the objects it studies: It is hardly economical to develop a complex, detailed design if it is known beforehand that a system cannot be built (who would fund the work, and why?). More basically, if a system cannot be built, it cannot be tested. In the context of standard engineering, either of these constraints would be fatal to manufacturing and to marketplace success. The following sections show how these and related constraints can be accepted, and even exploited, where the objective is knowledge rather than competitive manufactured products.

A.2. Basic issues

A.2.1. Establishing upper vs. lower bounds

Theoretical applied science, as pursued here, sets lower bounds on the maximum possible performance of devices by establishing the possibility of certain capabilities. Every existing device sets a lower bound on what is possible: it constitutes a proof by example. To go beyond this direct form of proof, one must apply physical theory. A brief examination of some issues associated with the study of upper and lower bounds on possible device performance may clarify the nature of theoretical applied science.

The performance of any device for some well-specified purpose is given by some mathematical function of its physical properties (the existence of such a function can be taken to define a "well-specified purpose"). The maximum possible value of this function is determined by physical law (Section A.2.2), and one can attempt to estimate this value by bracketing it between upper and lower bounds. Upper bounds (which are beyond the scope of the present work) are typically sought by attempting to determine whether performance at or beyond some specified level would or would not contradict a given set of physical laws. Studies of the theoretical physical limits of computation (Toffoli, 1981; Bennett, 1982; Fredkin and Toffoli, 1982; Landauer, 1982; Likharev, 1982; Feynman, 1985; Landauer, 1987) provide notable examples of this kind of work (which yields surprisingly many null results). When such studies analyze a hypothetical device, they resemble theoretical applied science; they frequently differ, however, in their neglect of certain properties of real materials and systems. Studies of computation have assumed (for example) perfect initial conditions (Fredkin and Toffoli, 1982), idealized quantum Hamiltonians (Feynman, 1985), or mechanisms made from physics-textbook solids and components [in which, for example, rigid volumes exclude other rigid volumes (Bennett, 1982; Toffoli, 1981)]. These idealized models have shown, for example, that neither the basic principles of thermodynamics nor those of classical or quantum mechanics prohibit the construction of devices that perform arbitrarily complex combinational logic operations without dissipation of free energy, so long as the input can be determined from the output: logical reversibility permits (but does not guarantee) thermodynamic reversibility. (This is illustrated in Section 12.3.8b.)
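The reversibility condition can be illustrated with a toy model. The three-bit Toffoli gate (a standard example from the reversible-computing literature, introduced here for illustration rather than drawn from this appendix) computes a nontrivial logic function yet maps distinct inputs to distinct outputs, so its input can always be determined from its output:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips c iff a and b are both 1.
    With c = 0 on input, the third output bit is a AND b."""
    return (a, b, c ^ (a & b))

# The gate is a bijection on the 8 three-bit states: distinct inputs
# give distinct outputs, so the input is recoverable from the output.
states = list(product((0, 1), repeat=3))
outputs = [toffoli(*s) for s in states]
assert len(set(outputs)) == len(states)  # bijective: logically reversible

# Applying the gate twice restores the input: it is its own inverse.
assert all(toffoli(*toffoli(*s)) == s for s in states)
```

Because the mapping is bijective, no information is erased, and (per the cited studies) no free-energy dissipation is demanded by thermodynamics; an irreversible gate such as plain AND, which merges distinct inputs, lacks this property.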

Successful research in theoretical applied science sets lower bounds to possible capabilities. This demands that capabilities be consistent with all relevant physical constraints, not just those needed to illustrate basic consequences of physical law. To show this, one typically engages in a process of design and analysis paralleling that of engineering. Indeed, the tested products of engineering make good points of departure for theoretical applied science: by holding dimensionless numbers constant in a design based on an existing product, a complex design process can sometimes be reduced (in part) to a simple scaling analysis (Chapter 2 discusses scaling laws and their limitations).
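A minimal sketch of such a scaling analysis, with illustrative numbers not taken from this appendix: shrinking every length of an existing design by a factor s at constant material and constant stress scales mass as s^3 and stiffness as s, so characteristic frequencies scale as 1/s.

```python
import math

def scaled_properties(mass, stiffness, s):
    """Scale a design's lengths by factor s at constant material and
    stress: mass ~ s**3 (volume), stiffness ~ s (geometric similarity)."""
    return mass * s**3, stiffness * s

def resonant_frequency(mass, stiffness):
    """Natural frequency of a mass-spring system, f = sqrt(k/m)/(2*pi)."""
    return math.sqrt(stiffness / mass) / (2 * math.pi)

# Shrinking a 1 kg, 1e6 N/m mechanism tenfold raises its characteristic
# frequency tenfold: omega ~ sqrt(k/m) ~ sqrt(s / s**3) = 1/s.
m0, k0 = 1.0, 1e6
m1, k1 = scaled_properties(m0, k0, 0.1)
assert math.isclose(resonant_frequency(m1, k1),
                    10 * resonant_frequency(m0, k0))
```

Holding the relevant dimensionless numbers (here, strain) constant is what allows the scaled design to inherit the tested behavior of the original; Chapter 2 discusses where such reasoning breaks down.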

A.2.2. Are there objective, physical limits to device performance?

The concept of upper and lower bounds to maximal achievable device performance presumes the existence of a limiting, maximal value. Does this make sense?

According to the present understanding of physical law, a finite spherical volume with a bounded total energy, containing a finite number of particles chosen from a finite number of kinds, can exhibit only a finite number of quantum states; its possible contents are therefore selected from a finite set. This implies that the set of devices of finite volume (and so forth) is itself finite. (One usually regards multiple quantum states as corresponding to a single device, but each such many-to-one mapping reduces the size of the set of devices).

Given a function on a domain consisting of a finite set of elements, there exists a single maximum value (treating infinity as a value, for present purposes) that corresponds to one or more elements. Thus, for every clearly defined objective function (and bounding volume), there must exist one or more devices that maximize the function (unlike the set of integers, in which for every integer N there is a larger integer N + 1). The capabilities of the function-maximizing device mark the limit of the possible, relative to the given measure of performance. To assert that a limit exists implies knowledge neither of its numerical value nor of the nature of the device that exhibits it.

The only constraints on possibility assumed in the preceding paragraph are those of physical law, excluding limitations on design and fabrication. These have a different character. Design limitations, though real, are never absolute: given a formal device-specification language, the output of a random number generator may correspond to the solution of even the hardest design problem, though typically with a negligible probability. Likewise with fabrication limitations: in the approximation that physical law is time reversible, any structure that could be destroyed can be created, though perhaps with vanishingly small probability (Drexler, 1986a). (More rigorously, given the charge-parity-time invariance of present physical theory, any device can be created with some finite probability so long as a geometrical mirror-image made of antiparticles could be destroyed.) The observation that thermal fluctuations have a finite probability of creating silicon wafers from sand, however, is of no use to semiconductor technology. To describe practical possibilities, theoretical applied science must take account of limitations on design and fabrication, even though these are not absolute in a purely physical sense.

A.2.3. Certainties, probabilities, and possibilities

Research in theoretical applied science can produce results of differing confidence levels. Statements regarding simple systems can be virtually certain: "A flawless ring of diamond can be spun with a rim speed in excess of two kilometers per second without rupturing" (this follows from elementary mechanics and from the density and tensile strength of diamond). Statements regarding ill-defined complex systems can likewise be virtually certain: "A physically possible system of molecular machines can build a copy of itself from simple chemical compounds" (this follows from the existence of bacteria). Less certain propositions can also be of interest; indeed, the theory of rational decisions (Raiffa, 1968) indicates that information of the form "One cannot have confidence > .95 that capability X is impossible" may in some circumstances be valuable information indeed.2
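The diamond-ring statement follows from a one-line calculation: for a thin spinning ring, hoop stress equals ρv², so the limiting rim speed is v = sqrt(σ/ρ). The sketch below uses a deliberately conservative tensile strength for flawless diamond; the specific numbers are illustrative assumptions, not values taken from this appendix.

```python
import math

def max_rim_speed(tensile_strength_pa, density_kg_m3):
    """Limiting rim speed of a thin spinning ring: hoop stress is
    rho * v**2, so rupture requires v > sqrt(strength / density)."""
    return math.sqrt(tensile_strength_pa / density_kg_m3)

# Conservative illustrative values for flawless diamond:
strength = 20e9    # Pa; well below theoretical-strength estimates
density = 3500.0   # kg/m^3
v = max_rim_speed(strength, density)
assert v > 2000.0  # comfortably exceeds two kilometers per second
```

Even with a strength figure chosen low for safety, the two-kilometer-per-second claim survives with margin to spare, illustrating how near-certain results can rest on crude inputs.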

In practice, the nature of theoretical applied science places a premium on establishing results with near certainty, because these results frequently serve as foundations for further design and analysis. In building a complex analysis, one must take care to ensure that each required assumption is reliable, lest the overall analysis prove weak. Relative to engineering, theoretical applied science favors designs with increased safety margins and decreased commitment to a single detailed specification (i.e., retaining redundant options); both differences can make conclusions more reliable (Sections A.4.4 and A.4.6). Likewise, the standards of criticism should be stringent: if weak evidence suggests a failure mechanism, and no strong evidence counters it, then failure should be assumed to occur. Engineering complex, reliable analyses has much in common with engineering complex, reliable systems.

A.3. Science, engineering, and theoretical applied science

A.3.1. Science and engineering

Science and engineering differ in their goals. Science strives to understand how things work; engineering strives to make things work. Science takes an object as given and studies its behavior; engineering takes a behavior as given and studies how to make an object that will exhibit that behavior. Science modifies objects to probe them; engineering modifies objects to improve them. ("Science" in sentences like these can be read as shorthand for a phrase like "scientists, when acting in direct pursuit of purely scientific goals"; "engineering" and other impersonal abstractions treated as grammatical subjects can be read similarly.) This sharp conceptual distinction between science and engineering does not imply a sharp distinction among researchers or among research programs: when designing experimental apparatus, scientists do engineering; when studying novel physical systems, engineers do science. Knowledge guides action, and action yields knowledge.

Science and engineering differ radically in their ability to describe their own future accomplishments. Knowledge of the content of a future scientific discovery is logically impossible: if one were to know today what one will "discover" tomorrow, it would not be a discovery. Knowledge of feasible engineering achievements is a different matter: since engineering aims to do rather than to discover, no logical problem arises. No contradiction arose in the early 1960s when engineers reiterated the feasibility of landing a person on the Moon. When scientists predict future knowledge, they describe not what they will learn, but what they will learn about (e.g., that science, aided by engineering, would learn about the composition of the lunar surface).

In the following comparisons, the chief model of science is theoretical physics. This has two motivations: (1) physical theory is the foundation for much engineering work and (2) theoretical physics has been a popular model for philosophers of science. Indeed, it has been objected that much "philosophy of science" might better be termed "philosophy of theoretical physics" and that other fields of science often, quite properly, have distinct aims and methodologies (Bartley, 1987a).

Ideally, theoretical physics produces a set of foundational principles supporting precise models of well-specified physical systems. These models are subject to rigorous experimental testing and (again, ideally) pass all tests because they provide a uniquely correct description of observable physical reality.

Engineering produces devices, not theories. Again, experimental testing and thorough specification of physical systems play key roles. Engineering, however, faces requirements of manufacturability and competitive performance that have no parallel in physics. Further, no rigid standard of correctness demands unique answers in engineering, although competitive pressures sometimes force convergence toward a unique optimum.

A.3.2. Engineering vs. theoretical applied science

Like physics, theoretical applied science produces theoretical results, but these describe the capabilities of specific classes of devices rather than universal laws of nature. The following section explores the requirements for sound reasoning in this domain, where experiments are, for the moment, impossible. A major theme will be advantages that can be had during the design process by sacrificing one or more of the usual objectives of engineering, such as precise specification, present-day manufacturability, and competitive performance. Theoretical applied science offers no free lunch, no way to achieve physical engineering objectives without applying standard engineering methods. Instead, it exploits trade-offs among objectives to achieve a general understanding of what can be made to work. Table A.1 provides a compact overview of how various issues interact in different domains.

A.4. Issues in theoretical applied science

A.4.1. Product manufacturability

Engineering attempts to produce functional products, usually within a one-to-ten-year planning horizon. Engineers must accordingly adapt their designs to the limitations of current or near-term manufacturing technologies. If a design cannot be translated into hardware (perhaps after some adjustment), it is a failure.

Successful theoretical applied science produces reliable analyses that establish the possibility of a class of devices. In theoretical applied science, planning horizons and near-term manufacturing capabilities are of no fundamental significance. For a class of devices to be truly possible, however, it must be possible to construct instances of the class with possible tools. Molecular nanotechnology falls within the scope of theoretical applied science because it will require (and provide) manufacturing capabilities that do not yet exist.

Table A.1. Engineering, theoretical applied science, and theoretical physics.

Issue                     Engineering              Theoretical              Theoretical
                                                   applied science          physics

Theoretical analysis      Not necessary:           Necessary: is itself     Necessary: is itself
                          testing can demonstrate  the product              the product

Accurate modeling         Margin for error         Margin for error         Ideally, complete
                          often can substitute     often can substitute     and precise

Physical specification    Thorough, to enable      Partial often            Ideally, complete
                          fabrication              can suffice              and precise

Manufacturability         Must be possible         Must be possible,        Not relevant
                          in the present           but no time limit
                          or near future

Competitive performance   Must compete with        Must (easily) exceed     Not relevant
                          current systems          current systems

Uniqueness of result      Unimportant              Unimportant              Goal is to formulate
                                                                            a uniquely true theory

A.4.2. Product performance

The products of engineering often face competitive tests in markets or battlefields. Competition forces engineers to pursue small advantages, though this may incur considerable expense and development risk. Competitive pressures affect engineering in far-reaching ways.

The requirements of theoretical applied science, in contrast, give little reason to seek small advantages. The calculated performance of an exploratory design may far exceed that of analogous existing technologies, but this is typically a direct consequence of a powerful new technology, not of cleverness in stretching an existing technology past its present limits. Sacrificing a factor of two in this high performance (sometimes a factor of a thousand) may well be worthwhile if it makes an analysis simpler or more robust. (Accordingly, Chapter 12 examines relatively simple, mechanical systems for logic and computation, despite good reasons for expecting electronic nanocomputers to be far faster.)

In engineering, one can seldom afford to make an enormous sacrifice in performance merely to simplify analysis; the potential gains from doing so are therefore largely unfamiliar. An architectural example shows the principle. It may be difficult to design an economically viable five-kilometer-tall office complex, but it is easy to show that, with lavish use of steel, one could build a habitable, five-kilometer-tall structure. Indeed, by relaxing concerns regarding cost, aspect ratio, and the volume fraction of habitable space by a factor of a thousand, the basic question of feasibility can be answered with little more than a reference to the strength-to-density ratio of steel. The cost of analysis thus drops from that of a major design project to that of a few keystrokes on a calculator.
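Those few keystrokes can be made explicit. For a column of constant cross-section the stress at the base is ρgh; letting the cross-section taper exponentially toward the top keeps the stress uniform, with a base-to-top area ratio of exp(ρgh/σ). The values below are illustrative textbook figures for structural steel, not numbers from this appendix.

```python
import math

rho = 7850.0         # kg/m^3, structural steel
g = 9.81             # m/s^2
h = 5000.0           # m, five kilometers
sigma_allow = 250e6  # Pa, a modest working stress for steel

# Uniform column: base stress grows linearly with height.
base_stress = rho * g * h  # ~385 MPa, above our working stress

# Tapered column: area grows toward the base so that stress stays at
# sigma_allow everywhere. Required base-to-top area ratio:
taper_ratio = math.exp(rho * g * h / sigma_allow)  # ~4.7

assert base_stress > sigma_allow  # the naive uniform design fails
assert taper_ratio < 10           # a lavish but clearly buildable taper
```

A prism of steel cannot quite hold itself up at five kilometers, but a roughly fivefold taper settles feasibility at once; the habitability and cost questions the text sets aside are precisely what this calculation does not address.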

In theoretical applied science, the threat is not that a marginally superior competitor will dominate the market, but that a flawed analysis will bring down a large (and costly) intellectual structure. Sacrificing performance can help by permitting simpler and hence more reliable analyses. By attempting to leave room for advance, it can reduce the likelihood of being forced to retreat.

These considerations also decrease the value of finding and solving hard problems. By solving hard problems, an engineer is apt to gain an edge in performance, leaving competitors behind precisely because of the problem's difficulty. In theoretical applied science, this advantage disappears: the simplest analysis may still entail solving hard problems, but there is no such systematic reward for seeking them.

A.4.3. Direct experimentation

In theoretical applied science, one cannot build systems and hence cannot experiment with them. This presents a key challenge.

The absence of direct experimentation does not isolate theoretical applied science and exploratory designs from contact with empirical data. Lacking the ability to build a system, one may still be able to build (or find) examples of its components, and test those. In studying molecular nanotechnology, experimental data on model compounds, materials, and surfaces answers many questions. Exploratory designs can also be built on (and judged in terms of) tested scientific theories. Testing proposals against theories does not constitute direct experimentation, yet it ties proposals to the world of experimental reality (albeit via possibly inaccurate models). Computational experiments have become widespread in science and are equally applicable here.

The inability to experiment with systems nonetheless has a profound effect on the practical scope of theoretical applied science. Again, engineering provides a natural comparison.

Competitive pressures drive engineers to seek and exploit any genuine, reliable technical advantage, regardless of whether its physical basis is fully understood. For example, engineers must minimize manufacturing costs, confronting them with the complexities of manufacturing systems. They must seek the best materials, confronting them with the complexities of metallurgy, ceramics, and polymer chemistry. They sometimes must push the limits of precision, cleanliness, purity, and complexity, as in state-of-the-art microprocessor production. Almost any production process is likely to exploit many tested, reproducible, but inexplicable tricks: cleaning a surface with one brand of detergent yields a good adhesive bond in the next step, while cleaning with another brand does not; no one knows why, and no one has any good reason to find out. Success in manufacturing demands experience more than theory, and experience means experimentation.

Almost any attempt to plan and execute a major innovation in engineering will require experimentation. Usually, one must either (1) exploit phenomena that are poorly understood and characterized, or (2) sacrifice performance and lose to the competition (or both). But relying on such phenomena without testing one's assumptions through experiment risks outright failure, and taking several such gambles in a single system would virtually assure failure. Accordingly, experimentation is universally recognized as necessary.

This case for the necessity of experimentation does not apply to theoretical applied science: if competitive pressures do not force reliance on phenomena that are poorly understood and characterized, the preceding considerations do not make experimentation a condition of success. In theoretical applied science, it is legitimate and often practical to choose well-understood components that can be combined into well-understood systems. If a device or process is too poorly understood, then it cannot be used in exploratory designs. (A related argument for experimentation, based on the likelihood of bugs in a design built from well-understood components, is addressed in Section A.4.5.)

A.4.4. Accurate modeling

Theoretical applied science often relies on mathematical models of physical systems. As suggested previously, the evolution of competitive systems usually requires more than just modeling, because modeling alone is less powerful than modeling combined with experimentation. Competitive pressures forbid engineers the intellectual luxury of staying on well-understood ground.

What constitutes "well-understood ground" depends in large measure on the required accuracy of the model, and accuracy varies widely. Shortcomings are of several kinds: Fundamental physical theory has known shortcomings (e.g., at high energies). Where physical theory is adequate, practical computational models may still have shortcomings (e.g., in solid-state physics). Where computational models are in principle adequate, they may in practice be too slow and expensive for a problem of interest (e.g., in ab initio quantum chemistry). Finally, a model may represent a substantial (or dramatic) simplification of physical reality, made for the sake of clarity and convenience.

Since mathematical models of complex physical systems are always inaccurate to some degree, it is important to consider tolerance for error. This, too, varies widely, depending on the objective. (Variable tolerance for error helps explain the use of different models to describe the same phenomena.)

In theoretical physics, approximations are valued in practice, yet the ideal of a complete, accurate description of nature leaves, at least in principle, no tolerance for error. Physics deals in equations, that is, in precise equalities.

Engineering requirements, in contrast, are expressed by inequalities. For example, to know that an aircraft is safe against wing failure, one need know neither the precise strength of its wings, nor the precise stresses they will encounter: the requirement is that strength exceed stress (or, more generally, that capacities exceed requirements). Consequently, engineers can tolerate inaccurate models if the extent of their inaccuracy is (approximately) known. If stresses and strengths are both known within 10%, then it suffices to make components having 1.3 times the strength demanded by the model. If stresses are uncertain by a factor of 5, and strengths by a factor of 2, then it suffices to make components having 15 times the strength demanded by the model. In practice, however, competitive pressures seldom allow engineers the luxury of 15-fold safety margins. It usually makes more sense to pay for a better analysis (or for a series of experiments) than to accept the cost and performance penalties of enormous safety margins.
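The arithmetic behind these margin factors can be sketched directly (the rounding up to 1.3 and 15 follows the text): if the true stress may exceed the modeled value by a factor f_s, and the true strength may fall short of the modeled value by a factor f_r, the worst case is covered by designing to f_s x f_r times the modeled demand.

```python
def required_margin(stress_factor, strength_factor):
    """Worst-case design factor: stress may be understated by
    stress_factor, strength overstated by strength_factor."""
    return stress_factor * strength_factor

# Both quantities known within 10%: worst case 1.1 / 0.9 ~ 1.22,
# comfortably covered by the 1.3x factor quoted in the text.
assert required_margin(1.1, 1 / 0.9) < 1.3

# Stresses uncertain by a factor of 5, strengths by a factor of 2:
# worst case 10, covered by the quoted 15x margin.
assert required_margin(5, 2) <= 15
```

The quoted factors are thus mild round-ups of the worst-case products, which is why inaccurate models remain usable so long as their inaccuracy is approximately bounded.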

In theoretical applied science the incentives differ: analysis is the main cost, experiments are unavailable, and safety margins are, in a sense, free of cost. Accordingly, simple models and large safety margins often make sense, even where engineering would demand a more efficient design based on more accurate data. In a specific instance, of course, physical constraints may preclude designs having ample safety margins to cover errors in the model; this is one of many ways in which a design may become unreliable. (If constraints impose a negative safety margin, then the analysis suggests an impossibility.)

In an unknown environment, uncertainties themselves may be unknown. Accordingly, theoretical applied science is easier when one can assume a simple, well-defined environment. The assumption of a sealed environment containing no loose molecules, for example, simplifies the design and analysis of nanomechanisms. This assumption is made throughout most of Part II and is supported by the analysis of walls, seals, and pumps in Chapter 11.

As H. Simon discusses in "The Architecture of Complexity" (Simon, 1981), structuring a system as a hierarchy of relatively independent subsystems simplifies design and analysis. One reason is that this architecture places each subsystem in a simpler, better-understood environment, reducing the difficulty of modeling its interactions with adequate accuracy.

A.4.5. Physical specification

Theoretical applied science deals with the feasibility of classes of devices rather than with the implementation of specific designs, hence it need not provide a full and detailed specification for every proposal. In a superficial paradox, sacrificing commitment to a detailed specification can make for a more reliable conclusion.

Returning momentarily to the issue of tolerance for modeling errors, it is important to remember that not all constraints are of kinds to which large safety margins can be applied. Engineers sometimes require not a simple inequality (e.g., B > A), which in principle permits an arbitrarily large safety margin, but satisfaction of a tolerance (e.g., C > B > A). If C ≈ A, then B ≈ A, a constraint which approximates that of equality. It may be that the best available model relating designs to values of B leaves an uncertainty in this parameter greater than C - A. If so, then showing that some designs yield large values of B no longer suffices to show that the constraint can be satisfied.

One can instead show that choosing from a range of designs allows B to be adjusted within a range of values from D > C to E < A, with a distribution of choices that assures the existence of a design with C > B > A. Thus, modeling inaccuracies may both permit the conclusion that one or more choices are guaranteed to satisfy the constraint, yet preclude the conclusion that any particular design will do so. For example, a design constraint might require that a rotor be almost perfectly balanced, and yet be made of two parts of differing materials and uncertain density, located on opposite sides of the shaft. Any specified size and shape for the two parts would, for most values of their densities, yield an unbalanced rotor. Yet so long as the designer can adjust the dimensions of at least one part to bring the assembly into balance, the unknown densities permit confidence in satisfying the constraint. The required range of choice in dimensions depends directly on the range of uncertainty in the ratio of the densities, and the adequacy of a given range of choice can be verified or rejected. The result of such an analysis would be an exploratory design in which neither the densities nor the dimensions of the parts are specified, but in which satisfaction of the balance criterion can be guaranteed. A design exercise incorporating such a rotor (in a housing that, inefficiently, provides room for the full range of possible dimensions) could then proceed with confidence.

A particular design for a complex system may have a bug, that is, an unanticipated interaction that causes failure. Engineers commonly detect bugs by testing (that is, by experimentation), then remove them by redesign. If the bugs in systems of some class can consistently be removed by redesign, then a description of such systems at a level of abstraction that omits features of the sort changed during routine bug removal can be regarded as bug free (it requires no change). Although it may be impossible to design a working system in full detail without testing, one may nonetheless have confidence that the available range of choice includes designs that work.

In considering nanotechnology, there is a temptation to demand atom-by-atom specifications of structures. This is necessary for sufficiently small structures, but unnecessary for many others (Section 9.5). In engineering macroscale objects today, one never specifies the positions of all the atoms, even though these objects are subject to the full constraints of engineering practice, including manufacturability and success in a competitive marketplace. In molecular nanotechnology (at least during the theoretical applied science stage), the requirements for detailed specification need not always be more strict.

A.4.6. Confidence despite reduced detail

Several forms of reasoning illustrate how a partial (even nonexistent) physical specification can establish the physical feasibility of certain classes of devices. For example:

a. Biological analogy. Biological systems can demonstrate capabilities without revealing their underlying physical structures and mechanisms. The original case for programmable molecular assemblers (Drexler, 1981) exploited analogies with enzymes (which guide chemical reactions) and with ribosomes (which execute a programmed series of operations). Likewise, the simplest case for the possibility of systems of molecular machinery able to build similar systems of molecular machinery is the existence and replication of bacteria.

b. Continuum models. In studying nanotechnology, useful design calculations can sometimes be performed based on a model of components made of homogeneous materials of a certain density, stiffness, surface stiffness (Sections 9.3 and 9.4), and so forth. The chief hazard is the use of such an approximation on the wrong scale (Chapter 2). A continuum model constitutes a partial specification that often can give an adequate description of the engineering possibilities.

c. Encompassing options. In this approach, one attempts to establish that a range of options is broad enough and rich enough to include a solution, without specifying which option will actually satisfy the requirements (Section 10.3 provides an example). For a nanomechanical example, any of over $10^{75}$ different structures can be built within a single cubic nanometer (Section 9.5.2c); this range of options should suffice to join two slim rods of diamond with a bend of any desired angle and twist, satisfying tight tolerances on both. The chief hazard in this sort of argument is that a combination of constraints (angle and twist and offset and modulus and surface structure), each easily satisfied in isolation, may each eliminate a large fraction of the apparent options, perhaps leaving none that satisfy all the conditions simultaneously.

d. Engineering analogy. The possibility of parts that are analogous to those composing a system $X$ will often make possible a system analogous to $X$ itself. For example, given mechanical parts analogous to conduction paths, transistors, and so forth, mechanical computers will be possible (Chapter 12). Given parts equivalent to conventional gears, bearings, motors, and so forth (Chapters 10 and 11), robotic arms will be possible (Chapter 13). To reach these conclusions, one need not specify every detail of a particular computer or robotic arm; accordingly, one can postpone enormous efforts in design, analysis, and debugging to the engineering phase. In this mode of reasoning, the chief hazard is an inadequately close analogy between the sets of parts. For example, if one proposes devices intended to be analogous to transistors, then one must consider not only the basic logic operations that each gate performs, but other characteristics important in building systems: noise tolerance, signal fan-out, and logic-level restoration. Shortcomings in these areas have caused the demise of many proposed computer devices (Keyes, 1985; Keyes, 1989). To be reliable, analogies must either be sufficiently exact, or be buttressed by an analysis of the functionally significant differences.
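The hazard noted under option (c), constraints that are individually mild but jointly fatal, can be illustrated with toy numbers (all hypothetical): if each of $k$ independent constraints passes a given option with probability $p$, the surviving fraction falls as $p^k$, and for large $k$ there can be an appreciable chance that no option survives at all.

```python
# Hypothetical illustration of combined constraints eliminating options.
# Each option independently satisfies any single constraint with
# probability p; k constraints imposed together pass a fraction p**k.

n_options = 1000   # candidate designs sampled from a vast space
p = 0.5            # assumed chance an option satisfies one constraint

for k in (1, 3, 5, 10):
    survivors = n_options * p**k          # expected options meeting all k
    p_none = (1 - p**k) ** n_options      # chance that NO option meets all k
    print(f"k={k:2d}: ~{survivors:7.1f} expected survivors, "
          f"P(none works) = {p_none:.3g}")
```

With these assumed numbers, one constraint leaves hundreds of workable options, while ten constraints leave roughly one expected survivor and a substantial probability that none survives, which is why encompassing-options arguments must consider the constraints jointly rather than one at a time.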

A.4.7. Unique answers (and confidence from "uncertainty")

As outlined in Section A.3, theoretical applied science is less concerned with uniquely correct answers than is theoretical physics or even engineering. In physics, only one set of theoretical predictions can be completely true; discrepancies between two theories imply that at least one must be false. In engineering, many designs may work, hence there is no uniquely correct answer in any strict sense. If each design differs in performance, however, then in a hypothetical world of idealized competition and unbounded experimentation, engineering development would converge on uniquely optimal designs.

In engineering, different designs that serve the same purpose compete for the same market; in theoretical applied science, they cooperate in supporting the same conclusion. If a capability can seemingly be realized in multiple ways, then no single error is likely to invalidate the conclusion that this capability is indeed possible.

If a design is incompletely specified, this leaves uncertainty regarding how it is to be completed. Since "uncertainty" is an antonym of "confidence," it might naively seem that uncertainty regarding how a design will be completed must erode confidence in whether the design (when completed) will work. Yet we have seen (Section A.4.5) that uncertainty in one area (e.g., the accuracy of a parameter in a mathematical model) can be neutralized by freedom of choice elsewhere (e.g., choice of a parameter in a design). Uncertainty in the model leads to uncertainty in the design, but the two uncertainties more nearly cancel than add. This kind of uncertainty reflects not a risky gamble but an opportunity for a deferred problem-solving choice. Uncertainty (of the right kind) can thus increase confidence (in a distinct but related conclusion).

a. Uncertainties in different areas. Uncertainties play different roles in science, engineering, and theoretical applied science. The intuitive rule regarding uncertainty in large sets of ideas or proposals is simple: if a conclusion or design rests on layer upon layer of shaky premises, it will surely fall. But this intuition sometimes misleads. To see where it works and where it fails, consider an imaginary proposition $P$ in science ("Theory $Y$ is true"), and a proposition $P^{\prime}$ in theoretical applied science ("A machine can do $Z$"). Each proposition is assumed to have $N$ essential constituents $P_a$ or $P_a^{\prime}$ $\left(a = a_1, a_2, \ldots, a_N\right)$, and each constituent is assumed to be selected from a set of $M = 10$ equally plausible possibilities $P_{ab}$ or $P_{ab}^{\prime}$ $\left(b = b_1, b_2, \ldots, b_M\right)$. For a wide range of parameters in which both $N$ and $M$ are large, the result would be an absurdly speculative theory in science, but a robust proposal in theoretical applied science.

In the imaginary theory, each of the five constituents is a hypothesis regarding a distinct question. For example, a theory regarding a geological structure might comprise $N = 5$ hypotheses regarding (1) the interpretation of a seismogram, (2) the cause of bands in certain mineral grains, (3) the species identification of associated microfossils, (4) the origin of carbonaceous material, and (5) the source of a particular trace element. For each question, only one hypothesis can be true, and the stated assumption of $M$ equally plausible possibilities implies that any given hypothesis will have (at best) a $1/M$ probability. If $M = 10$, the probability of making 5 correct choices is $.1^5 = .00001$ (assuming independent probabilities). For a theory to be true, each part must be true, and this becomes unlikely as low-confidence assumptions multiply. (Figure A.2 provides a graphical representation of this situation.)

Figure A.2. One path through an array of many choices: if only one choice is correct at each step, then only one of the $10^5$ possible paths can be correct. If none of the known choices for some step is correct, then no path can be correct.

Consider a superficially similar problem in theoretical applied science: analyzing the possibility of a nanomechanical system that consists of 5 essential subsystems, each serving a particular function: (1) a motor, (2) a power supply, (3) a vacuum pump, (4) a pressure sensor, and (5) a gas-tight wall. Again we assume that for each there are 10 equally plausible possibilities, but unlike alternative scientific hypotheses, design options in theoretical applied science are not mutually exclusive. Thus, we can assume (for concreteness) that each of the $M = 10$ options for a subsystem has an arbitrarily chosen .5 probability of being a workable design (rather than a $1/M = .1$ probability of being the "one true design"). Further, the problem is to determine the possibility of a mechanism with a particular function, not to specify a single detailed design. (Figure A.3 shows a graphical representation.)

Assuming (for the moment) independent probabilities, the probability that all ten options will fail, leaving no workable choice for a particular subsystem, is $.5^{10} \approx .001$. Taking account of the risk of unworkability associated with each of the 5 necessary subsystems, the overall probability that a successful combination exists is $\left(1 - .5^{10}\right)^5 \approx .995$.

In this example, a near certainty emerges from a combination of possibilities, each of which is as likely to fail as to succeed. Real examples can yield still more confidence for at least two reasons: First, one or more options may be essentially sure bets. Second, the probabilities of the various options may not be independent. For example, a set of options taken together may encompass a range that is guaranteed to contain a workable solution, even though any individual option, taken alone, is improbable. (A lack of independence caused by all options sharing a dubious assumption would have an adverse effect.)

Thus, uncertainties in theoretical applied science need not combine adversely, as do the superficially similar uncertainties of science. Engineering, however, has a somewhat closer resemblance to science: one must propose a single, specific design, build it, and live with the consequences. Time and budgets are limited, and the failure of a large system may leave no resources for another try. In an adaptation of the preceding model, this would mean making five choices with a .5 probability of success in each, yielding an overall probability of success $\sim .03$. Concerns of this sort motivate engineers to analyze and test components with care before building complex, expensive systems.

Figure A.3. Many paths through an array of many choices: if many choices are likely to be correct at each step, then many (though perhaps a small fraction) of the $10^5$ possible paths are likely to be correct.
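The three toy calculations above (the speculative theory, the robust capability claim, and the single-shot engineering design) can be checked directly. The probabilities are the text's illustrative assumptions, not estimates for any real system:

```python
# The text's toy numbers, computed directly: N = 5 essential parts,
# M = 10 candidate ideas per part (illustrative assumptions only).

N, M = 5, 10

# Science: exactly one of the M hypotheses per question is true, so a
# 5-part theory assembled from equally plausible guesses is almost
# surely false.
p_theory_true = (1 / M) ** N

# Theoretical applied science: each of the M design options
# independently works with probability .5; the capability exists if ANY
# option works for EVERY subsystem.
p_capability = (1 - 0.5 ** M) ** N

# Engineering: one specific design must be chosen and built; with a .5
# chance of success per subsystem choice, the single shot usually fails.
p_single_design = 0.5 ** N

print(f"P(theory true)       = {p_theory_true:.5f}")   # 0.00001
print(f"P(capability exists) = {p_capability:.3f}")    # 0.995
print(f"P(one design works)  = {p_single_design:.5f}") # 0.03125
```

Under these assumptions, near-certainty (.995) that the capability exists coexists with near-certain failure of any one fully specified theory or design, which is the asymmetry the text describes.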

A.4.8. Reliable reasoning

Different kinds of errors have different effects on an analysis. Errors of optimism regarding performance have no obvious upper bound, but errors of conservatism plainly have a lower bound: present capabilities. Further, errors of optimism place costly intellectual structures at risk, since work built on such assumptions is apt to be undermined by their failure. Errors of conservatism, in contrast, strengthen intellectual structures because they can often provide a margin of safety able to compensate for inadvertently optimistic assumptions made elsewhere. A systematic bias toward errors of conservatism can make analyses more robust.

Several considerations combine to make theoretical applied science more feasible than it might seem, chiefly by relaxing certain constraints of standard engineering practice. Exploratory designs need not be manufacturable using available technologies: they need only be physically possible. They need not compete with other, similar designs: they need only be workable (P. Morrison notes that this has parallels in the design of scientific instruments and elsewhere in experimental applied science). They can be grossly overdesigned to compensate for uncertainties. Since their purpose is to establish a possibility, not to guide the setup of a manufacturing process, they can omit details and include room for corrections. Finally, forms of uncertainty whose closest analogues in science and engineering would be intolerable prove to be perfectly acceptable in theoretical applied science. All these considerations aid in constructing reliable chains (and networks) of reasoning.

It may be objected that relaxing these constraints sacrifices most of the value of standard engineering, but human activities occur in a web of trade-offs that routinely forces the sacrifice of one goal to further another. This is true within an engineering design space and is equally true in intellectual work, such as design itself. The sacrifice of standard engineering goals is precisely what makes far-ranging exploratory design in theoretical applied science a feasible enterprise.

Theoretical applied science cannot substitute for experimental applied science or for engineering. Even the most reliable reasoning about a system is inferior to a physical example: real products both prove their own feasibility and enable physical accomplishments. The virtue of disciplined research in theoretical applied science is its ability to provide a partial survey of a field before experimentation and engineering become possible, offering some measure of knowledge when the alternative is ignorance.

A.5. A sketch of some epistemological issues

A.5.1. Philosophy of science (i.e., of physics)

Considerable attention has been given to the problem of knowledge in science; a few notes on the problem of knowledge in engineering and theoretical applied science may be useful here. These amount to no more than a sketch, taking the views of certain philosophers of science as a point of departure.

The philosophical view that an exact, general physical theory could be proved by experiment was dealt a mortal wound by the displacement (after long success) of Newtonian mechanics. As K. Popper points out, experiments cannot verify such theories; they can at best (provisionally) falsify them (Popper, 1963). More generally, concord between theory and experiment cannot show either to be correct, but discord between them shows a defect somewhere in the system of ideas (Lakatos, 1978).

W. W. Bartley III, Popper's student and biographer, has described a generalization of the Popperian position, termed pancritical rationalism (Bartley, 1987b). This holds that views cannot be proved in any ultimate sense, but that they can be criticized in terms of background assumptions that are for the moment considered nonproblematic. This seems to reflect actual practice in the scientific community, in which theories are themselves criticized in terms of their consistency with experiments and other theories, while experimental results are criticized in the same fashion. Ideally, nothing is taken as dogma, everything is open to criticism, and ideas are winnowed in the resulting Darwinian competition. Science thus is viewed as an evolutionary process.

A.5.2. Philosophy of engineering

Epistemological issues in engineering appear to have received little attention (though the practicalities of gathering and using knowledge in engineering have received massive attention). These epistemological issues appear to differ from those raised by physical theories.

A general physical theory characteristically makes precise statements about all forms of matter everywhere, asserting the truth of an equation. Engineering, in contrast, characteristically makes a statement about the behavior of a specific device, expressible as the satisfaction of a set of inequalities and tolerances. Although no finite set of measurements can show that a general theory holds in all instances, and no real physical measurement can show that an exact theory holds true even in a single instance, a single observation can show that a particular device sometimes works. Further, a series of observations can provide good evidence that devices of a particular type will work with high reliability, given certain conditions.

This comparison (1) neglects the problem, common to physics and engineering, of defining devices and conditions, including the ceteris paribus problem, (2) treats the notion of experimental observations as nonproblematic, (3) neglects the logical possibility that the laws of the universe might change in any manner at any time, and so forth. Nonetheless, it shows that typical propositions in engineering (e.g., "Device $D$ can accomplish goal $G$") can, in an important sense, be better supported by experiment than can typical propositions in physics. This is a direct consequence of their lower precision and lesser generality.

A.5.3. Philosophy of theoretical applied science

Theoretical applied science, like engineering, makes assertions of lesser generality than does physics. A typical assertion might be of the form "Some device $D$ can accomplish goal $G$." In engineering, such assertions can be verified by demonstration (in the approximation that the experimental results are nonproblematic), but the conditions of theoretical applied science do not permit this (when demonstrations become possible, theoretical applied science is over). Further, statements of this form cannot be experimentally falsified under any circumstances, since no finite set of experiments can test all possible devices.

How can propositions in theoretical applied science be tested against reality, if they cannot be tested experimentally? A close parallel occurs in the design phase of an engineering project. Here, too, experimentation is deferred, but this does not leave engineers adrift in fantasy. They test designs against generally accepted facts and theories about physical systems, which have themselves been tested (though not proved) by experimentation. Further, because engineering works with inequalities and tolerances, its conclusions can be more reliable than are its theoretical premises. For example, most engineering calculations are based on Newtonian mechanics, and can yield reliable engineering conclusions even though Newtonian mechanics is false. The accuracy of these conclusions is well insulated from most plausible revolutions in theoretical physics. Even in the quantum domain, current theories regarding the behavior of electrons on a molecular scale seem well insulated from uncertainties regarding systems involving (for example) neutrinos, subnuclear dimensions, or extreme energies.

To summarize, theoretical applied science takes the body of generally accepted fact and theory amassed by science and engineering as nonproblematic background knowledge. By testing propositions in theoretical applied science for consistency with this body of knowledge, they can be criticized and their likelihood judged, even when they are formally unfalsifiable by direct experimentation.3 Propositions in theoretical applied science are falsifiable if one adopts the rule that any conflict with established scientific knowledge constitutes falsification. (Note that ambiguities like those in "established scientific knowledge" also appear in "experimental results": either can be clear or disputed, and both depend on theory and interpretation.) For example, all classes of device that would violate the second law of thermodynamics can immediately be rejected. A more stringent rule, adopted in the present work, rejects propositions if they are inadequately substantiated, for example, rejecting all devices that would require materials stronger than those known or described by accepted physical models. By adopting these rules for falsification and rejection, work in theoretical applied science can be grounded in our best scientific understanding of the physical world.

A.6. Theoretical applied science as intellectual scaffolding

Theoretical applied science can provide intellectual scaffolding for further study of a field. A scaffold serves the goals of architecture and must be structurally sound, yet it is judged by criteria different from those used to judge the ultimate architectural product. Like scaffolding, theoretical applied science analyses are adapted for rapid construction, and their parts may all be removed and replaced as work progresses.

A.6.1. Scaffolding for molecular manufacturing

The case for molecular manufacturing is today an example of theoretical applied science. Like a scaffold, it consists of parts that join to form a structure. These parts support the physical feasibility of various capabilities. They include reasons for expecting that we can, given suitable tools and effort:

  • Engineer complex molecular objects
  • Assemble more complex systems from these molecular objects
  • Build and control molecular machine systems
  • Use molecular machine systems to perform molecular manufacturing
  • Use molecular manufacturing to build nanocomputers
  • Use nanocomputers to control molecular manufacturing
  • Use manufacturing systems to build more manufacturing systems
  • With the preceding, achieve thorough control of the structure of matter

These arguments interlock: establishing the feasibility of the whole requires support for the pieces, and the apparent feasibility of the whole then motivates greater scrutiny of each piece, either to criticize it or to improve it. Given molecular manufacturing, improved nanocomputer designs become more interesting. Given nanocomputers, improved molecular manipulator designs become more interesting. Given the feasibility of the whole, implementation strategies become more interesting.

As exploratory designs grow more detailed, they come to resemble descriptions of experiments in applied science, or of engineering prototypes. As the tools required for fabrication become available (perhaps speeded by a better understanding of what they can build), engineering practice will encroach on theoretical applied science, and theoretical studies will give way to experiment, production, and use. The scaffolding will then have been replaced with brick.

A.7. Conclusions

Theoretical applied science draws on the enormous body of knowledge amassed by science and engineering, but exploits that knowledge for different purposes using different methodologies. Its aim is neither to describe nature nor to build devices, but to describe lower bounds to the performance achievable with physically possible classes of devices.

Theoretical applied science can achieve its goals only by sacrificing many of the goals of experimental science and of engineering. It produces analyses, not devices, and thus avoids the stringent requirement that its designs be fully specified, manufacturable, and competitive. This latitude can be exploited to mitigate the problems posed by inaccurate models and the infeasibility of direct experimentation. Research in theoretical applied science typically makes no pretense of designing systems that can be built today, or that will be built tomorrow. Today we lack the tools; tomorrow we will have better designs.

In an ideal world, theoretical applied science would consume only a tiny fraction of the effort devoted to pure theoretical science, to experimentation, or to engineering. The resulting picture of technological prospects can nonetheless be of considerable value: it can indicate areas of science and technology that are likely to prove especially fruitful, and it can help us understand the opportunities and challenges that technological development is likely to bring.


  1. This has previously been called exploratory engineering (Drexler, 1991a), a term that does not adequately convey the theoretical nature of the studies. The term theoretical applied science could be applied to theoretical studies performed in direct support of experimental applied science, but applied theoretical science seems a better term. No clearer alternative name for the present topic has yet been suggested.

  2. For example, if a war were in progress, and $X$ could be used to destroy the country, and a defense against $X$ would cost one billion dollars, then the expected value of this information could be on the order of 5% of the value of the country, minus one billion dollars. If determining the possibility or impossibility of $X$ with near certainty can be achieved promptly at a cost of only one million dollars, so much the better.

  3. The idea that experimentally unfalsifiable statements can be tested against established theory has been discussed in the philosophy of science literature by Wisdom (1963).