Chapter 4

Why Nanotechnology?

Abstract

Industrialists are often unsure whether to introduce nanotechnology: As well as finding out what it can do for their business, they want to know how much it will cost. The principal drivers for the introduction of nanotechnology are examined, including novelty through miniaturization, increased efficiency of fabrication, and improved performance. Nanotechnology generally implies massive parallelization of fabrication procedures. The most spectacular progress in miniaturization has been in information storage and processing. The process cannot, however, be continued without limit before the atomic scale is reached; a fundamental statistical problem of ultraminiature manufacturing is outlined. The convergence of nanotechnology, biotechnology, information technology, and technologies based on the cognitive sciences is considered; each is associated with an irreducible unit that can be manipulated, respectively the atom or nanoblock, the gene or operon, the bit or byte, and the neuron. Scaling of performance with size is considered. It has been demonstrated that digitization causes economic growth; there may be a similar effect of nanification. The basic approach to cost–benefit analysis is outlined.

Keywords

Drivers; Novelty; Miniaturization; Nanification; Vastification; Fabrication efficiency; Agility; Performance; Cost

With almost every manufactured product, if the same performance can be achieved by using less material, there will be a cost advantage in doing so. A well-known example is the metal beverage can. Improvements in design—including the formulation of the alloy from which it is made—have led to significantly less material being used for the same function (containing a beverage, with certain protective requirements). In this example, there are concomitant, secondary advantages of miniaturization (e.g., because the can is also lighter in weight, it costs less to move around).

Note that in this case the locus of miniaturization is the thickness of the wall of the can. The basic functional specifications of the can include the volume of beverage that must be contained, which cannot be miniaturized. On the other hand, if the wall could be made of a nanoplate, and still fulfill all requirements for strength and impermeability, the can would have become a nanoproduct.

In the case of engineering products fulfilling a structural or mechanical purpose, the fundamental scale of size and strength is set by the human being. The standard volume of beverage in a can is presumably based on what a human being likes to drink when quenching his thirst. Perhaps the innovator carried out experiments, much as George Stephenson determined the gauge standard for his railways by measuring the distance between the wheels of a hundred or so farm carts in the neighborhood of Stockton and Darlington and taking the mean, which happened to be 4 ft 8½ in [1].

The length and mechanical strength of a walking stick must be able to support the person using it. Miniaturization of such products therefore generally implies the use of thinner, stronger materials, which might well be nanocomposites, but nevertheless the length of the stick and the dimensions of the hand grip cannot be miniaturized.

Another major class of product deals with processing and displaying information. The venerable example is that of the clock, which computes (in a sense) and displays the time of day. Human eyesight places a lower limit on the useful size of the display and other man/machine interfaces for input and output. In the case of mechanical clocks there is a fabrication issue: although a tiny wristwatch uses less material than a standard domestic interior clock, it is more expensive to make, both because the parts must be finished with higher precision and because it is more troublesome to assemble them.

But what is the intrinsic lower limit of the physical embodiment of one bit of information (presence or absence)? Single electron electronics and Berezin's proposal for isotopic data storage [2] suggest that it is, respectively, one electron or one neutron; in other words one quantum, considered as the irreducible minimum size of matter. But a quantum is absolutely small, in the sense that observing its state will alter it [3]—which seems to suggest that it is useless for the intended purpose. Only in the quantum computer is the possibility that a quantum object can exist in a superposition of states exploited (observation generally forces elimination of the superposition). Quantum logic therefore implies virtually unlimited parallelism (superposition) in computation, hence an enormous increase in power compared with conventional linear (sequential) logic—provided entanglement with the external environment can be avoided. Although intensive research work is currently being undertaken to develop quantum computers, it has yet to bear fruit in the shape of a working device and, therefore, falls outside the scope of this book, which is focused on actual products.

Conventional logic, in which something is either present or absent and in which the superposition of both presence and absence does not exist, must be embodied in objects larger than individual quanta. The lower size limit of this physical embodiment seems to be a single atom. In principle, therefore, it seems that information storage (memory) could be based on cells capable of containing a single atom (provided what is being observed is not a quantum state) without any loss of functionality. The most dramatic progress in miniaturization has, therefore, occurred in information storage and processing [4]. In this case, the fabrication technology has undergone a qualitative change since Jack Kilby's first integrated circuit. Making large-scale integrated circuitry in the same way that the first integrated groups of components were made—the mode of the watchmaker—would be prohibitively expensive for a mass-market commodity. Semiconductor processing technology, however, combines miniaturization with parallelization. Not only have the individual components become smaller, but the area processed simultaneously has dramatically increased (measured by the standard diameter of the silicon wafers, which has increased from 3 inches up to 12 inches, with 18 inches anticipated).

Within the processor, miniaturization means not only having to use a smaller quantity of costly material, but also shorter distances between components. Since information processing speed is limited by the time taken by the information carriers—electrons—to traverse a component, processing has become significantly faster as a result [5]. The miniaturization has therefore gone beyond maintaining the same performance using less material, but has actually enhanced performance.
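As a crude back-of-the-envelope illustration (not taken from the source, and ignoring interconnect delays and other complications), the sketch below shows how the carrier transit time shrinks with component size; the drift velocity used is an assumed order-of-magnitude figure, not a value given in the text:

```python
# Crude illustration: shorter components mean shorter carrier transit times.
# The drift velocity (~1e5 m/s, roughly the saturation velocity in silicon)
# is an assumed order-of-magnitude figure, not a value from the text.

def transit_time_s(feature_size_nm: float, drift_velocity_m_s: float = 1e5) -> float:
    """Time for a carrier to traverse a component of the given length."""
    return (feature_size_nm * 1e-9) / drift_velocity_m_s

for size_nm in (1000, 100, 10):
    print(f"{size_nm:5d} nm component: ~{transit_time_s(size_nm):.0e} s transit time")
# Each tenfold reduction in size cuts the transit time tenfold, so (other
# things being equal) the attainable switching rate rises in proportion.
```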

Nevertheless, regardless of the actual sizes of the circuits in which the information processing takes place, the computer/human interface has perforce had to remain roughly the same size. Even so, the nature of the interface has undergone a profound change. Formerly, the processing units were contained in a large room maintained at a fixed temperature and humidity. Requests for jobs were typically handed to an operator, who would load them onto the computer and in due course collect a printout of the results, placing them for pickup by the requester. Miniaturization of the processing units has revolutionized computing in the sense that it has enabled the creation of the personal computer. The owner directly feeds instructions into it, and the results are displayed as soon as the computation has finished. The largest parts of the personal computer are typically the keyboard with which instructions are given, and the screen on which results are displayed. The miniature processor-enabled personal computer has made computing pervasive and it would be hard to overestimate the social effects of this pervasiveness. It is an excellent example of the qualitative, indirect results of miniaturization.

Another issue is accessibility, which is very size-dependent. In an earlier epoch, children were much in demand as chimney sweeps, because they were small enough to clamber up domestic chimneys wielding a broom. The complexity of the circuits required for cellular telephony is such that a hand-held device containing them only became possible with the development of miniaturized, very large-scale integrated circuitry. A similar consideration applies to the development of swallowable medical devices equipped with light sources, a camera, and perhaps even sensors for physiological state and actuators for drug release.

In summary, the minute size of integrated circuit components also enables circuits of greater complexity to be devised and realized than would otherwise be possible. In addition, qualitatively different functions may emerge from differently sized devices. There are also secondary advantages of smallness, such as a requirement for smaller test facilities.

4.1 Miniaturization of Manufacturing Systems as Driver

This section takes a brief look at the consequences of realizing the Feynman vision of atomically precise nanoscale machines. They are anticipated to have significantly superior performance to conventional machines. The main features are:

Scaling. Assuming that the speed of linear motion remains constant at all scales, a miniature machine can operate at a much higher frequency compared with a macroscopic machine. Hence, throughput expressed as the fraction of the machine's own mass that it can process per unit time (relative throughput) will increase proportionally with diminishing size (assuming that the components handled by the machine are similarly miniaturized). Since mass decreases as the cube of linear dimension, absolute throughput will decrease proportionally to the square of the diminished size. This implies that vast parallelization is required to maintain throughput (these scaling relations are illustrated numerically in the sketch following this list). The diminished mass also implies negligible inertia and gravitational influence. Stiffness, too, decreases with size, implying that the stiffest possible covalent solids should be used, which is why diamond is a preferred material for the construction of nanoscale machines and most simulations of their operation assume diamond as the material.

Wear. Covalent bonding between atoms is “digital” (or binary)—either there is a bond or there is not. Hence incremental wear, familiar in large material-processing machines, cannot occur. Instead, there is merely a damage probability, which should be much lower than for a large machine, since it is largely dependent on the stress on a part, which sharply diminishes with decreasing size. Furthermore, manufacturing precision can be retained through multiple generations of manufacturing, just as digitally encoded information can be transmitted and retransmitted many times without loss of fidelity (especially with the help of error-correcting codes).

Precision. It is well known from macroscale machinery that friction and wear diminish with increasing precision of finishing the parts. Atomic precision is the apotheosis of this trend. The familiar post-processing machining operations such as grinding and polishing are unnecessary for products assembled atom-by-atom. A further corollary of atomic-scale precision is the great intricacy with which products can readily be made.

Parallelization. In order to make humanly useful quantities of products, atomic assemblers first have to make replicas of themselves. The key parameter is the doubling time. A few dozen doublings suffice for a single nano-assembler to produce 1 kg of copies. This strategy also makes the cost of the machines very low, since the development costs of the first machine can be amortized over all the copies.
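The scaling and parallelization arguments above can be made concrete with a short numerical sketch (a minimal illustration, not from the source; the assembler mass of 1e-18 kg is an assumed, purely illustrative figure):

```python
import math

# Scaling: if linear speed is constant, operating frequency scales as 1/L.
# Mass scales as L**3, so absolute throughput per machine (mass processed per
# unit time) scales as L**2, while relative throughput (per unit machine mass)
# scales as 1/L -- hence the need for vast parallelization to maintain
# absolute throughput.
def absolute_throughput_factor(shrink: float) -> float:
    """Factor by which one machine's absolute throughput falls when its linear size is divided by `shrink`."""
    return 1.0 / shrink**2

def relative_throughput_factor(shrink: float) -> float:
    """Factor by which throughput per unit machine mass rises for the same shrink factor."""
    return shrink

# Parallelization: doublings needed for one assembler of mass m0 to yield a
# total mass M of copies (2**n * m0 >= M).
def doublings_needed(target_mass_kg: float, assembler_mass_kg: float) -> int:
    return math.ceil(math.log2(target_mass_kg / assembler_mass_kg))

print(absolute_throughput_factor(1000))   # 1e-6: a 1000-fold smaller machine processes far less mass
print(relative_throughput_factor(1000))   # 1000: but is far more productive per unit of its own mass
print(doublings_needed(1.0, 1e-18))       # 60 doublings, i.e. "a few dozen"
```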

4.2 Facile Fabrication as Driver

Provided performance can be maintained, the smaller a device, the less material is used, leading to cost savings per device. Miniaturization also facilitates massively parallel fabrication procedures—indeed great use has already been made of this possibility in the semiconductor processing industry, as evinced by the continuing validity of Moore's law.

Nanoparticles made from electronically useful materials can be printed at high speed onto suitable substrates (making printed electronics). This innovation enables single-use (disposable) devices, with obvious advantages in applications such as medicine, avoiding the extra work (and expense and resources) of sterilization and the risks of cross-patient infection.

At present, products with features a few nanometers in size (i.e., not quite atomically precise) are not produced by assemblers, nor will they be in the foreseeable future. Devices such as single-electron transistors have to be made by top–down semiconductor processing technology, with which one-off nanoscale artifacts suitable for testing can be made in the laboratory. Low-cost, high-volume manufacture, however, has the following three requirements [6]:

  •  A superior prespecified performance achieved with uniformity, reproducibility, and reliability;
  •  A high yield to an acceptable tolerance;
  •  A simulator for both reverse engineering during development and right-first-time design (enabling results to be modified to order).

These requirements are fulfilled by the mainstream micro- (becoming nano-)electronics industry. But is there a lower size limit on what can be got out of devices made using conventional semiconductor processing technology (i.e., epitaxy, lithography, and etching)? Suppose we are trying to make an array consisting of 3 nm diameter features on a 6 nm pitch [6]. Each layer of each pillar contains about 80 atoms and is formed by adding or removing atoms one at a time, by deposition or etching, respectively. These processes can be considered to obey Poisson statistics (i.e., the variance equals the mean). Hence the coefficient of variation (the standard deviation, equal to the square root of the variance, divided by the mean) of the area of the pillars is 12%, whereas the reliable performance of the transistors on a VLSI chip requires a coefficient of variation of less than 2%. Fundamental statistics intrinsic to the process prevent manufacturability below feature sizes of around 10 nm. Possibly this statistical barrier could be traversed by fabrication in a eutactic environment (i.e., by assemblers), but as already mentioned they are not currently realizable.
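The statistical argument can be checked in a few lines (a minimal sketch using only the figures quoted above; the 2% target is the VLSI requirement stated in the text):

```python
import math

def poisson_cv(mean_atoms: float) -> float:
    """For a Poisson process the variance equals the mean, so the
    coefficient of variation is sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(mean_atoms)

# ~80 atoms per layer of a 3 nm pillar, as in the example above.
print(f"CV at 80 atoms per layer: {poisson_cv(80):.1%}")    # ~11%, i.e. the ~12% quoted

# Atoms per layer needed to bring the CV down to the ~2% that reliable
# VLSI operation demands: 1/sqrt(N) <= 0.02  =>  N >= 2500.
print(f"Atoms needed for 2% CV: {math.ceil(1 / 0.02**2)}")  # 2500
```

Requiring of the order of 2500 atoms per layer rather than about 80 implies a substantially larger feature, which is the origin of the manufacturability limit of around 10 nm quoted above.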

Nevertheless, the computational requirements for some applications, such as presenting visual and audio data (i.e., pictures and music) might be able to tolerate circuits less uniform and reliable than those of current VLSI chips [7].

While devices assembled from preformed nano-objects such as carbon nanotubes might not be subject to Poisson statistics (e.g., if they are assembled by a tip-based manipulator—although this is scarcely a high-throughput manufacturing technology at present), another problem then arises—that of the uniformity, reproducibility, and reliability of the nano-objects themselves. One of the principal ways of making carbon nanotubes is plasma-enhanced chemical vapor deposition, a process that is itself subject to Poisson statistics.

Nevertheless, it may be possible to evade this intrinsic limitation. Using highly specialized and intricate machinery at the nanoscale, living systems manage to make highly uniform nanoscale objects, most notably proteins, which serves as a powerful inspiration for achieving the same artificially.

4.3 Performance as Driver

Performance may often be enhanced by reducing the size. If the reason for the size reduction is accessibility or ease of fabrication, the scaling of performance with size must be analyzed to ensure that performance specifications can still be achieved. It is worth noting that the performance of many microsystems (microelectromechanical systems, MEMS) devices actually degrades with further miniaturization [8], and the currently available sizes reflect a compromise between performance and other desired attributes.

If vast quantities of components can be made in parallel very cheaply, devices can be designed to incorporate a certain degree of redundancy, immunizing the system as a whole against malfunction of some of its components [9]. Low unit cost and low resources required for operation make redundancy feasible, hence high system reliability. A more advanced approach is to design the circuit such that it can itself detect and switch out faulty components. Furthermore, for malfunctions that depend on the presence of at least one defect (e.g., an unwanted impurity atom) in the material constituting the component, if the defects are spatially distributed at random, then the smaller the component, the smaller the fraction that are defective [10].
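A minimal sketch of the defect argument, assuming (as stated) defects distributed spatially at random, i.e. according to Poisson statistics [10]; the defect density used is purely illustrative:

```python
import math

def fraction_defect_free(defect_density_per_nm3: float, volume_nm3: float) -> float:
    """If defects are Poisson-distributed with the given density, the probability
    that a component of this volume contains no defect at all is exp(-rho * V)."""
    return math.exp(-defect_density_per_nm3 * volume_nm3)

rho = 1e-6  # assumed density: one defect per 10**6 nm**3 (illustrative only)
for side_nm in (100, 50, 10):
    p = fraction_defect_free(rho, side_nm**3)
    print(f"{side_nm:4d} nm cube: {p:.1%} of components defect-free")
# Output: ~36.8%, ~88.2%, ~99.9% -- the smaller the component, the smaller
# the fraction containing even a single defect.
```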

4.4 Agile Manufacturing

Outside the nanoworld, manufacturing is being transformed by digitization. Hitherto digitization has been closely tied up with information transmission and processing (based on the general-purpose digital computer). Economists have found that economic growth is correlated with the degree of digitization, and a recent refinement of the analysis of the phenomenon suggests that economic growth is actually caused by digitization [11]. In terms of manufacturing, this has meant the computer control of machine tools in factories, in turn permitting the widespread introduction of robots. These factories were still engaged in the mass production of identical objects. The concept of additive manufacturing (AM), in which objects are built up by microwelding small lumps of metal (or other materials) to each other, while initially developed about 30 years ago for the rapid prototyping of artifacts that would ultimately be manufactured conventionally, has permitted digitization to penetrate more deeply into manufacturing practice: AM (also known as three-dimensional (3D) printing) is now used as a manufacturing technology in its own right. Miniaturized down to atomic dimensions, it becomes Feynman–Drexler bottom-to-bottom assembly. A similar trend has occurred in the printing industry: books can now be printed digitally on demand. The traditional economies of scale associated with assembly lines no longer apply: it is possible to make customized objects for practically the same cost as identical ones (mass customization).

In essence, nanotechnology represents the "digitization" of materials as well as of information, in which objects are fabricated from preformed nanoblocks. By analogy, we can propose that this further digitization will have a similarly beneficial effect on economic growth. The fundamental reason for these effects of digitization, which does not appear to have been discerned by the economists, must be the universalization enabled by digitization. A digital computer can be programmed to carry out any kind of computation; a digital storage medium can store any kind of (digital) information—text, music or images; and so forth. This in turn enormously increases the adaptability of the system as a whole. Ironically enough, if this is considered as a revolution, it makes further industrial development easier because new challenges can be met with only minimal need for brand-new materials; it will suffice, for example, to tweak the specifications of a nanoblock and the assembly procedure to create something with quite different properties—the very antithesis of a revolution based on disruptive technology. It also enables everything to be precisely tailored for optimal use. To give just a simple example, at present hip prostheses are provided in about half a dozen standard sizes and the surgeon selects the best match for the patient needing a replacement. The digital concept means that a prosthesis will be rapidly machined using patient data in order to fit the body precisely. For the same amount of work, or indeed less work, by the surgeon the result will be far superior, with tangible economic benefits in terms of the patient's post-operative activity, as well as intangible benefits of a superior quality of life.

The nearer nanotechnology approaches the ultimate goal of productive nanosystems (see Section 18.1), the more flexible manufacturing should be. One aspect that needs careful consideration is the fact that agile (adaptive) production systems are necessarily rooted in algorithms: hence, agile factories must be computer-controlled, due to the large volume of information that has to be processed very rapidly (i.e., in real time) during production. The computer must (at least with present technology) run according to a preloaded program, which is necessarily closed and hence cannot but reflect present knowledge. The control center of the factory is therefore intrinsically ill-equipped to adapt to the ever-unfolding events that make up the course of existence, much of which lies in the unknowable part of the future. That the financial turbulence of 2008 turned out to have serious industrial consequences is an all-too-obvious illustration of this truism. Different sectors seem to be intrinsically more or less sensitive to future uncertainty—rational planning demands that this sensitivity be quantified to determine the sectorial appropriateness of agile manufacturing. We need to anticipate that real-world contexts will raise challenges of increasing uncertainty and diversity and so require agility to be achieved by means that are suitably resilient and adaptable to change (“agile agility” or “adaptable adaptability”); that is, agility needs to be explicitly designed for the unknown and unexpected, not merely to cope with well-understood tempos and boundaries. Naturally this will have a cost: presumably the more adaptable an industrial installation, the more expensive it will be, hence it will be appropriate to determine a suitable limit to the necessary adaptability for a particular sector.

4.5 Nano–Info–Bio–Cogno (NIBC)

The quartet of advanced modern technologies—nanotechnology, information technology, biotechnology, and cognitive technology—are often considered to be converging and becoming inseparable and indistinguishable. Practically speaking, it is therefore supposed to make more sense to consider them as a unified whole. While the reasoning behind this idea often seems to be either trivial or obscure, it is nevertheless striking to note that all four have a common feature, namely the discretization of their core elements. In nanotechnology it is the atom—engineering with atomic-scale precision; in information technologies it is the bit (binary digit)—all information must be digitized before it can be processed further; in biotechnology it is the gene—genes can be introduced into, or removed from organisms discretely at will, as required; and in the cognitive technologies it is the neuron—as represented by a discrete cell in a cellular automaton, for example.

In practice the irreducible units are somewhat larger: a nanoblock (e.g., a nanoparticle) containing at least some hundreds of atoms; 8 bits are grouped into a byte, which can represent an 8-bit word (such as any ASCII character); a gene requires a promoter, and several genes may be grouped into an operon; neurons work in groups, although here our knowledge is still very imperfect.

Just as discrete mathematics is different and distinct from continuous mathematics, so is the discrete nature of these advanced technologies different from the continuous nature of traditional, established technologies.

Working at a smaller scale, albeit with nanoblocks rather than atoms, nanotechnology can be viewed as a kind of additive manufacturing process. If so, nanotechnology is qualitatively different from ultraprecision engineering in the nanoscale, despite the fact that the latter was the first technology to be labeled "nanotechnology" (Section 1.1).

Analog computers are indeed quite different in their architecture from digital computers. The latter do not, however, resemble cellular automata, some classes of which may be capable of universal computation [12].

Modern biotechnology may be predicated on the “one gene, one enzyme” notion of Beadle and Tatum, but this was shown to be wrong by Wright more than 80 years ago [13]. Practical gene engineering does, in fact, require the introduction of many auxiliary genes into an organism to ensure the expression of one desired new gene introduced into its DNA. Much criticism of the use of genetically-modified plants as food for humans and domestic animals rests on the possibly deleterious effects these auxiliary genes may have on human health.

Immense efforts are now being put into understanding the human brain by modeling it. The flagship effort in this direction is currently the “Human Brain Project” led by Henry Markram at the Ecole polytechnique fédérale de Lausanne (EPFL) and funded by the European Union (EU). It is felt that success in this endeavor holds the key to developing artificial intelligence beyond sophisticated expert systems.

Undoubtedly in this sense NIBC represents a clearly identifiable concept of discreteness of matter and process. This aligns it closely with cybernetics, in which machines—information processors—are analyzed by considering their discrete states [14].

Prior to the full nano–info–bio–cogno convergence, we already have nanobiotechnology (cf. Section 1.5)—the application of nanotechnology to biology, including biotechnology and medicine (see Chapter 6 for nanomedicine)—and bionanotechnology, the application of biology to nanotechnology, which mainly means biomimetics (Section 10.7) at the nanoscale [15].

4.6 Cost–Benefit Analysis of Nanotechnology

There are two main motivations for undertaking such an analysis: (i) from the viewpoint of an individual company, is it worth adopting a "nano-solution" (that is, a solution involving nanotechnology) to a given problem? And (ii) from the viewpoint of the government of the country, responsible for creating a favorable environment for its industrial and commercial development, is it worth investing taxpayers' money in nanotechnology infrastructure, such as common facilities for nanofacture and nanometrology centers, in which expensive instruments are acquired for shared use?

Regarding (i), one can differentiate between a manufacturing company considering introducing nano-objects into an existing product, and a group of investors considering whether to launch a nanofacturing company. In the former case, the calculation is straightforward. If a simple substitution is being considered, a fraction 1/x of the quantity of existing, non-nano, material being used may be required to achieve the same performance. If the nano:non-nano ratio of costs per unit quantity is less than x, then it may be worth making the substitution, provided that the expense associated with modifying the production process is small. This expense will generally be incurred only once, when the modification is undertaken. Another one-off expense is that of tests carried out on the modified product to demonstrate that it is the same as, or better than, the existing product. This testing might, however, be carried out under the R&D program of the company and would include determination of the optimal value of 1/x as well as the optimal choice of nano-objects from a range differing in such attributes as size and surface properties. Once that value is established, the cost of making the substitution in production can be evaluated.
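The break-even logic just described can be sketched in a few lines (all input values below are hypothetical, chosen only to illustrate the comparison of the cost ratio with x and the amortization of the one-off expenses):

```python
# Hypothetical worked example of the substitution calculation: a fraction 1/x
# of the material achieves the same performance, so substitution pays off on
# materials alone when the nano:non-nano cost ratio per unit quantity is < x.

def saving_per_unit(qty_non_nano: float, unit_cost_non_nano: float,
                    x: float, cost_ratio: float) -> float:
    """Material-cost saving per product: old cost minus cost of the nano substitute."""
    old_cost = qty_non_nano * unit_cost_non_nano
    new_cost = (qty_non_nano / x) * (unit_cost_non_nano * cost_ratio)
    return old_cost - new_cost

def payback_volume(one_off_cost: float, saving: float) -> float:
    """Units that must be made before the one-off modification and testing costs are recovered."""
    return float("inf") if saving <= 0 else one_off_cost / saving

s = saving_per_unit(qty_non_nano=1.0, unit_cost_non_nano=2.0, x=5.0, cost_ratio=3.0)
print(f"Saving per unit: {s:.2f}")                        # 0.80 (cost ratio 3 < x = 5, so worthwhile)
print(f"Payback volume: {payback_volume(50000, s):.0f}")  # 62500 units to recover a 50,000 one-off cost
```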

If the product is improved by incorporating nano-objects into it, then the demand for it may increase and this also needs to be evaluated. The overall goal is to determine the likely return on the investment needed to adopt the nano-solution. For this determination, investors may avail themselves of the kinds of predictions of nanotechnology market size presented in Chapter 5, although those predictions are of a generic nature and based on extrapolating past trends, and may not be reliable for evaluating the likely market for a specific product, especially if it is new.

Regarding (ii), the required analysis is (or should be) also an evaluation of the return on investment. Will the proposed facilities cause economic growth that will generate increased tax revenues exceeding the expenditure on the facilities? The degree of uncertainty in making such an evaluation can be expected to be so great that it is scarcely worth doing. For example, it is difficult to predict whether the existence of the facilities will encourage new manufacturing companies to be formed. The encouragement could be boosted by appointing indefatigable “nano-ambassadors” who would engage with industrialists to make sure that they were aware of the latest technology, but this might be considered to be too interventionist and run into political difficulties. On the whole, the track record of governments in doing this kind of thing is not good. If there were real demand for a nanometrology center, then it would surely be a worthy object for private investment.

References

[1] It is said that Edward Pease, who led the consortium of businessmen promoting the Stockton & Darlington Railway, ordered him to make the width of the track equal to that of local country carts. This is quite instructive as an example of how not to proceed. Railways represented a discontinuity (a disruptive technology) with respect to the technology of farm carts. This was recognized by Stephenson's rival Isambard Kingdom Brunel, who chose his gauge standard of 7 feet by considering the intrinsic possibilities of the new technology. Despite the technical superiority of the broad gauge, the reputedly indomitable will of the Stephenson brothers ultimately prevailed, even rejecting the Rennie brothers' reasonable compromise of 5 ft 6 in—not only in Britain, but also in much of the rest of the world. It is surprising that the high-speed railways (“shinkansen”) in Japan were constructed on the “standard” 4 ft 8½ in gauge, since the national railway system anyway had a different gauge of 3 ft 6 in; a broader gauge would have allowed even greater speed, stability and on-board luxury.

[2] A.A. Berezin, Stable isotopes in nanotechnology, Nanotechnol. Percept. 2009;5:27–36.

[3] P.A.M. Dirac, The Principles of Quantum Mechanics, §§1 and 2. 4th edn. Oxford: Clarendon Press; 1958.

[4] Nevertheless, there is still a long way to go—a memory cell 100 × 100 × 100 nm in size still contains of the order of 10⁹ atoms.

[5] Since information processing is irreversible, heat is dissipated, and miniaturization also miniaturizes the quantity of heat dissipated per logical operation (although the diminution is less than the increase in the density of components undertaking logical operations, hence managing heat removal is becoming an even greater problem than hitherto).

[6] M.J. Kelly, Nanotechnology and manufacturability, Nanotechnol. Percept. 2011;7:79–81.

[7] A. Lingamneni, et al., Designing energy-efficient arithmetic operators using inexact computing, J. Low Power Electron. 2013;9:1–13.

[8] C. Hierold, From micro- to nanosystems: mechanical sensors go nano, J. Micromech. Microeng. 2004;14:S1–S11.

[9] See C.E. Shannon, J. McCarthy, eds. Automata Studies. Princeton: Princeton University Press; 1956.

[10] This is an elementary application of the Poisson distribution. See A. Rényi, Probability Theory. Budapest: Akadémiai Kiadó; 1970:122–125.

[11] N. Czernich, et al., Broadband infrastructure and economic growth, Econ. J. 2011;121:505–532.

[12] S. Wolfram, Statistical mechanics of cellular automata, Rev. Mod. Phys. 1983;55:601–644.

[13] S. Wright, Character change, speciation and the higher taxa, Evolution 1982;36:427–443.

[14] W.R. Ashby, An Introduction to Cybernetics. London: Chapman & Hall; 1956.

[15] B. Layton, Recent patents in bionanotechnologies: nanolithography, bionanocomposites, cell-based computing and entropy production, Recent Pat. Nanotechnol. 2008;2:72–83.

Further Reading

[16] R.A. Freitas, Economic impact of the personal nanofactory, N. Bostrom, et al., ed. Nanotechnology Implications: More Essays. Basel: Collegium Basilea; 2006:111–126.

[17] J.J. Ramsden, Bioinformatics: an Introduction. 3rd edn. London: Springer; 2015—for descriptions of biological machinery in the living cell.

[18] T. Toth-Fejel, A few lesser implications of nanofactories, Nanotechnol. Percept. 2009;5:37–59.
