Chapter 12

The Business Environment

Abstract

Nanomaterials are incorporated into other products, making nanotechnology biased towards upstream production; a corollary is that the technology can rapidly become pervasive. Many nanotechnology companies were spun out of universities, and close company–university collaboration is a feature of the industry. This tends to bias the industry in favor of research, and for many companies research and development grants, rather than the commercial trading of the materials or devices that they produce, constitute their main source of income. Except in cases where a nanoproduct can be substituted for a non-nano product with a clear benefit, it has turned out to be difficult to introduce nanoscale innovation into many industries. The fragmentation of production, with each small company making its own unique material, is understandable given the academic origin of many of the companies, but for any large downstream user this state of affairs makes it almost impossible to seriously consider incorporating nanomaterials into a manufactured product. Even if standards did exist and a small company reliably produced materials according to the standard, it is unlikely that the company would be able to sustainably produce large quantities. This is despite the fact that scaling up production in nanotechnology is typically simply a matter of duplicating an existing production line; generally, however, the commercial prospects do not look attractive enough to potential investors to finance the enhanced production. A quantitative procedure for assessing the value of innovation is presented.

Keywords

Universality; Clusters; Demand; Readiness levels; Business models; Innovation value; Patents; Reasons for failure

Factors contributing to the success—or failure—of the nanotechnology industry are considered in this chapter, although the fiscal and regulatory environments are mainly dealt with in the separate chapters following this one. Chapter 16 deals with some company case studies.

12.1 The Universality of Nanotechnology

Reference is often made to the diversity of nanotechnology. Indeed, some writers insist on referring to it in the plural as “nanotechnologies”. Inevitably, a technology concerned with building matter up atom-by-atom is a universal technology with enormous breadth [1], which can be applied to virtually any manufactured artifact. Nanostructured materials are incorporated into nanoscale devices, which in turn are incorporated into many products, as documented in Part 2. An artifact is considered to be part of nanotechnology if it contains nanostructured materials or nanoscale devices even if the artifact itself is of macroscopic size; this is the domain of indirect (or enabling) nanotechnology (Section 1.4). The fact that the feature sizes of components on semiconductor microprocessor chips are now smaller than 100 nm, and hence within the nanoscale, means that practically the entire realm of information technology has now become absorbed into nanotechnology. Nanotechnology is, therefore, already pervasive [2]. The best previous example of such a universal technology is probably information technology, which is used in countless products, nowadays even in relatively mundane domestic items such as rice cookers.

Any universal technology—and especially one that deals with individual atoms directly—is almost inevitably going to be highly upstream in the supply chain. This is certainly the case with nanotechnology at present. Only in the case of developments whose details are still too nebulous to allow one to be anything but vague regarding the timescale of their realization, such as quantum computers and general-purpose atom-by-atom assemblers, would we have pervasive direct nanotechnology.

Universal technologies form the basis of new value creation for a broad range of industries; that is, they have “breadth”. Such technologies have some special difficulties associated with their commercialization because of their upstream position far from the ultimate application (see Figure 12.1).

Figure 12.1 Diagram of immediate effects showing the supply chain from research to consumer product. The dashed lines indicate optional pathways: the route to the original equipment manufacturer (OEM) is very likely to run via one or more component suppliers. Parallel innovations may be required for realization of the equipment. These include legally binding regulatory requirements (particularly important in some fields—e.g., gas sensors). Note that most of the elements of M. Porter's value chain are included in the last arrow from OEM to consumer product.

The most important difficulty is that the original equipment manufacturer (OEM) needs to be persuaded of the advantage of incorporating the nanoscale component or nanomaterial into the equipment. The most convincing way of doing this is to construct a prototype. But if the technology is several steps upstream from the equipment, constructing such a prototype is likely to be hugely expensive (presumably it will anyway be outside the domain of expertise of the nanotechnology supplier, so will have to be outsourced). The difficulty is compounded by the fact that many OEMs, especially in the important automotive branch, as well as “Tier 1” suppliers, rarely pay for prototype development. The difficulty is even greater if a decision to proceed is taken at the ultimate downstream position, that of the consumer product itself. The nanotechnology supplier, which as a start-up company is typically in possession of only proof-of-principle, often obtained from the university laboratory whence the company sprang, is likely to be required to make its most expensive investments (e.g., for a prototype device or an operational pilot plant) before it has had any customer feedback, which is so important for determining the cost-effectiveness.

This distance between the technology and its ultimate application will continue to make life difficult for the technologist even if the product containing his technology is introduced commercially, because the point at which the most valuable feedback is available—from the consumer—is so far away. There is perhaps an analogy with Darwin's theory of evolution here, in its modern interpretation incorporating knowledge of the molecular nature of the gene—variety is introduced at the level of the genome (e.g., via mutations), but selection operates a long way downstream, at the level of the organism. The disparity between loci is especially acute when the exigencies of survival include responses to potentially fatal sudden threats.

The further upstream one is, the more difficult it is to “capture value” (i.e. generate profit) from one's technology [3]. Hence cash tends to be limited, in turn limiting the possibilities for financing the construction of demonstration prototypes.

The difficulty of the upstream technologist's position is probably at its lowest when the product or process is one of substitution. As will be seen later (Chapter 16), this is likely to be a successful path for a small nanotechnology company to follow. In this case, demonstration of the benefits of the nanotechnology is likely to be relatively straightforward, and might even be undertaken by the downstream client.

On the other hand, the nanotechnology revolution is unlikely to be realized merely by substitutions. Much contemporary nanotechnology is concerned with a greater innovative step, that of miniaturization (or nanification, as miniaturization down to the nanoscale is called)—see Figure 1.1. As with the case of direct substitution, the advantages should be easy to describe and the consequences easy to predict, even if an actual demonstration is likely to be slightly more difficult to achieve.

A curious, but apparently quite common, difficulty encountered by highly upstream nanotechnology suppliers is related to the paradox (attributed to Jean Buridan) of an ass placed equidistantly between two equally attractive piles of food, unable to decide which one to eat first, and hence starving to death through inaction. Potential buyers of nanoparticles have complained that manufacturers tell them “we can make any kind of nanoparticle for you”. This is unhelpful for many downstream clients, because their knowledge of nanotechnology might be very rudimentary; what they actually need is advice on which nanoparticles will enhance their product range. Despite the allure of universality—not least because, ostensibly, it widens the potential market—start-up companies that offer a very broad product range are typically far less successful than those that focus on one narrow application (see Chapter 16).

On the other hand, for a larger company universal technologies are attractive commercial propositions. They allow flexibility to pursue alternative market applications, risks can be diversified, and research and development costs can be amortized across separate applications. The variety of markets is likely to span a corresponding variety of stages of maturity, hence providing revenue opportunities in the short, medium, and long terms. As commercialization develops, progress in the different applications can be compared, allowing more objective assessments of performance than in the case of a single application; and the breadth and scope of opportunity might attract more investment than otherwise [4].

12.2 The Radical Nature of Nanotechnology

But nanotechnology is above all a radical, disruptive technology whose adoption implies discontinuity with the past (cf. Section 3.3). In other words, we anticipate a qualitative difference between it and preceding technologies. In some cases, this implies a wholly new product; and at the other extreme an initially quantitative difference (progressive miniaturization) may ultimately become qualitative. While a generic technology has breadth, a radical technology has depth, since changes, notably redesign, might be needed all the way down the supply chain to the consumer; they affect the whole of the supply chain, whereas an incremental technology typically only affects its immediate surroundings. Insofar as the very definition of nanotechnology includes words such as “novel” and “unique” (see Section 1.6), “true” or “genuine” nanotechnology can scarcely be called anything but radical, otherwise it would not be nanotechnology [5].

The costs of commercialization are correspondingly very high. Redesign at a downstream position is expensive enough, but if it is required all the way, the costs of the introduction might be prohibitive. Furthermore, the more radical the technology, the greater the uncertainty in predicting the market for the product. High uncertainty is equivalent to high financial risk, and the cost of procuring the finance is correspondingly high. “Cost” might mean simply that a high rate of interest is payable on borrowings, or it might mean that capital is difficult to come by at all. This is in stark contrast to an incremental technology, for which the (much smaller) amount of capital required should be straightforward to procure, because return on the investment should be highly predictable.
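The effect of risk on the cost of capital can be made concrete with a small numerical sketch. The cash flows, discount rates, and the `npv` helper below are purely illustrative assumptions, not data from any actual venture: the same projected returns that look attractive at the low rate appropriate to a predictable incremental technology can have a negative net present value at the risk premium demanded for a radical one.

```python
# Net present value of a projected cash-flow stream at a given discount rate.
# All figures are hypothetical, chosen only to illustrate the effect of risk.
def npv(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: investment; years 1-5: identical projected returns in both cases.
flows = [-1000, 100, 250, 400, 500, 500]

low_risk = npv(flows, 0.05)    # incremental technology: predictable, cheap capital
high_risk = npv(flows, 0.25)   # radical technology: investors demand a premium
# The same projections are worth roughly +480 at 5% but -187 at 25%.
print(f"NPV at 5%: {low_risk:.0f}, at 25%: {high_risk:.0f}")
```

The point of the sketch is that nothing about the projected returns changes between the two cases; only the uncertainty attached to them does, and that alone can move the venture from fundable to unfundable.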

In addition, the more radical the innovation, the more likely it is that other innovations will have had to be developed in parallel to enable the one under consideration to be exploited. If these others are also radical, then maybe there will be some favorable synergies since comprehensive redesign is anyway required even for one. There may also be regulatory issues, but at present nanotechnology occupies a rather favorable situation, because there is a general consensus among the state bureaucracies managing regulation that nanoparticulate X, where X is a well-known commercially available chemical, is covered by existing regulations governing the use of X in general. This situation stands in sharp contrast to the bodies (such as the FDA in the USA) entrusted with granting the nihil obstat to new medicinal drugs, which, following the thalidomide and other scandals, have become extremely conservative. Things are, however, likely to change, because one of the few clearly articulated recommendations of the influential Royal Society of London–Royal Academy of Engineering report on nanotechnology was that the biological effects of nanoparticles required more careful study before allowing their widespread introduction into the supply chain (cf. Chapter 14) [6]. This conclusion created a considerable stir and triggered a flurry of government-sponsored research projects. Nevertheless, given the considerable literature that already existed on the harmful effects of small particles (e.g., [7])—and the already widespread knowledge of the extreme toxicity of long asbestos fibers—the sudden impact of that report is somewhat mystifying.

The implications go even further, because an existing firm's competences may be wholly inadequate to deal with the novelty. Hence the infrastructure required to handle it includes the availability of new staff qualified for the technology, or the possibility of new training for existing staff.

Nanotechnology is, then, both radical and universal. This combination is in itself unusual, and justifies the need to treat nanotechnology separately from other technically-based sectors of the economy.

12.3 Intellectual Needs

As well as material capital, the innovating company also has significant intellectual needs. It is perhaps important to emphasize the depth of those needs. Although the scientific literature today is comprehensive and almost universally accessible, simply buying and reading all the journals would not unlock the key to new technology: one needs to be an active player in the field just to understand the literature, and one needs to be an active contributor to establish credibility and allow one to participate in meaningful discussions with the protagonists [8].

Science, technology, and innovation all require curiosity, imagination, creativity, an adventurous spirit, and openness to new things. Progress in advanced science and technology requires years of prior study in order to reach the open frontier, and to perceive unexplored zones beyond which the frontier has already passed. Governments mindful that innovation is the wellspring of future wealth do their best to foster an environment conducive to the advance of knowledge. Hence it is not surprising that many states play a leading rôle in the establishment of research institutes and universities.

Nevertheless, in this “soft” area of human endeavor it is easy for things to go awry. The linear Baconian model has recently recaptured the interest of governments wishing to expand the controlled legal framework supposedly fostering commercially successful innovations (such as the system of granting patents) by extending their control upstream to the work of scientists. Even the Soviet Union under Stalin, a world steeped in state control, realized that extending it this far was inimical to the success of enterprises (such as the development of atomic weapons) that were considered to be vital to the survival of the state.

This lesson seems to have been forgotten in recent decades. In the UK, the system of allocating blocks of funds to universities every five years or so and letting them decide on their research priorities was some time ago replaced by an apparatus of research councils to which scientists must propose projects, for which funds will be allocated if they are approved, and in 2017 the research councils were subsumed into a body called UK Research & Innovation (UKRI), which is in turn part of a government department [9]. Hence, the ultimate decision on what is important to investigate is taken away from the scientists themselves and put in the hands of bureaucrats (some of whom, indeed, are themselves former scientists, but obviously cannot maintain an acute awareness of the cutting edge of knowledge). To any bureaucrat, especially one acting in the public interest, the file becomes the ultimate object of importance (for, as C.N. Parkinson points out [10], there may subsequently be an inquiry about a decision, and the bureaucrat will be called upon to justify it). Therefore great weight is placed on clearly measurable outcomes (“deliverables”) of the research, which should be described in great detail in the proposal, so that even an accountant would have no difficulty at the end of the project in ascertaining whether they had indeed been delivered. The most common criticism of proposals by reviewers seems to be that they “lack sufficient detail”, a criticism that is frequently fatal to the chances of the work being funded. Naturally, such an attitude does nothing to encourage adventurous, speculative thinking. 
Even de Gaulle's Centre National de la Recherche Scientifique (CNRS), modeled on the Soviet system of Academy institutes, and offering a place where scientists can work relatively free of constraints, is now in danger of receiving a final, mortal blow (in fact, for years the spirit of the endeavor had not been respected; the resources available to scientists not associated with any particular project had become so minimal that they were only suitable for theoretical work requiring neither assistants nor apparatus).

One can hardly imagine that such a system could have been introduced, despite these generally recognized weaknesses [11], were there not failings in the alternative system. Indeed we must recognize that the system of allocating a block grant to an institute only works under conditions of “benign dictatorship”. Outstanding directors of institutes (such as the late A.M. Prokhorov, former director of the General Physics Institute of the USSR Academy of Sciences [12]) impartially allocated the available funds to good science—“good” implying both intellectually challenging and strategically significant. Unfortunately, the temptations to partiality are all too frequently succumbed to, and the results from that system are then usually disastrous. A possible alternative is democracy: the faculty of science receives a block grant, and the members of the faculty must agree how to divide it among themselves. It is perhaps an inevitable reflexion of human nature that this process almost invariably degenerates into squabbling. Besides, the democratic rule of simple majority would ensure that the largest blocs appropriated all the funds. Hence, in order for democracy to yield satisfactory results, it has to be accompanied by so many checks and balances it ends up being unworkably cumbersome.

Is there a practical solution? Benign dictatorship would appear to yield the best results, but depends on having an inerrant procedure for choosing the dictator; in the absence of such a procedure (and there appear to be none that are socially acceptable today) this way has to be abandoned. The opposite extreme is to give individual scientists a grant according to their academic rank and track record (measured, for example, by publications). This system has a great deal to commend it (and, encouragingly, appears to be what the Research Directorate of the European Commission is aiming at with its recently introduced European Research Council awarding research grants to individual scientists [13]). The only weakness is that, almost inevitably, scientists work in institutes, with all that implies in terms of possibilities for partiality in the allocation of rooms and other institutional resources by those in charge of the administration, who are not necessarily involved in the actual research work.

A more radical proposal is that of Selfridge—calling for half-baked ideas, from which a national commission would select some for funding, not so much on the basis of obvious merit (for those should be picked up anyway by private enterprise), but more on the basis of fancy [14]. One would have thought that the potential benefits were sufficiently large that it would be worth allocating at least 10% of the regular funding budget to a trial of Selfridge's idea.

12.4 Company–University Collaboration

The greatest need seems to be to better align companies with university researchers. Many universities now have technology transfer offices, which seem to think that great efforts are needed to get scientists interested in industrial problems. In reality, however, rarely are such efforts required—a majority of devoted scientists would agree with A.M. Prokhorov about the impossibility of separating basic research from applied (indeed, these very expressions are really superfluous) [15]. On the contrary, university scientists are usually highly interested in working with industrial colleagues; it is usually the institutional environment that hinders them from doing so more effectively. Somehow an intermediate path needs to be found between the consultancy (which typically is too detached and far less effective for the company than access to the available expertise would suggest should be the case), the leave of absence of a company scientist spent in a university department (which seems to rapidly detach the researcher from “real-life” problems), and the full-time company researcher, who in a small company may be too preoccupied by daily problems that need urgent attention, or who in a larger company might be caught up in a ponderous bureaucracy. Furthermore, companies are typically so reticent about their real problems that it is almost impossible for the scientist to make any useful contribution to solving them. One seemingly successful model, now being tried in a few places, is to appoint company “researchers in residence” in university departments—they become effectively members of the department, but would be expected to divide their time roughly equally between company and university. Such schemes might be more effective if there were a reciprocal number of residencies of university researchers in the company. 
Any expenses associated with these exchanges should be borne by the companies, since it is they who will be able to gain material profit from them; misunderstanding over this matter is sometimes a stumbling block. It is profoundly regrettable that the current obsession with gaining revenue from the intellectual capital of universities has to some degree poisoned relationships between them and the rest of the world. The free exchange of ideas is thereby rendered impossible. In effect, the university becomes simply another company. If the university is publicly funded, then it seems right to expect that its intellectual capital should be freely available to the nation funding it. In practice, “nation” cannot be interpreted too literally; it would be contrary to the global spirit of our age to distinguish between nationals and foreigners—whether they be students or staff—and if they are roughly in balance, as they typically are, there should be no need to do so.

A further danger attending the growth of company–university collaboration is the possibility of intellectual corruption rooted in venality [16]. It seems unlikely that this can be prevented by statute; only by rigorous personal integrity. It is, of course, helpful if the relevant institutional environments favor it rather than the opposite.

12.5 Clusters

Evidence for the importance of personal intellectual exchanges comes from the popularity, and success, of clusters (also called innovation hubs) of high-technology companies that have nucleated and grown, typically around important intellectual centers such as the original Silicon Valley in California, the Cambridges of England and Massachusetts, and the Rhône–Alpes region of south-eastern France [17].

An additional feature of importance for radical nanotechnology is the availability of centralized fabrication and metrology facilities, the use of which solely by any individual member of the cluster would scarcely be at a level sufficient to justify the expense of installing and maintaining them.

Some clusters are initiated and succored by government support. Examples are the semiconductor, digital display, and notebook PC clusters in Taiwan; the biomedical research cluster in Singapore; the micro-electronics cluster in Grenoble (France); and the life sciences and IT clusters in Shanghai–Pudong.

There is, as yet, no “theory of clusters” [18], but it is fascinating to note that empirical studies have found that larger cities are more innovative (as measured, for example, by patenting activity) than smaller ones [19]. This should provide some guidance for locating clusters, when they are planned rather than arising spontaneously. It should be noted that diverse clusters are likely to be more successful in generating innovation than specialized ones [20].
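The empirical city-size effect can be illustrated by the superlinear scaling relation reported in such studies, in which innovation output grows roughly as population raised to a power β > 1 (β ≈ 1.2 is a figure commonly quoted for patenting activity). The prefactor and populations in this sketch are hypothetical:

```python
# Superlinear urban scaling: innovation output Y = Y0 * N**beta with beta > 1.
# Y0 and the populations are hypothetical; beta ~ 1.2 is a figure commonly
# quoted for patenting activity in the empirical scaling literature.
def patents_per_year(population, y0=1e-4, beta=1.2):
    return y0 * population ** beta

small, large = 100_000, 1_000_000   # the larger city has 10x the population
ratio = patents_per_year(large) / patents_per_year(small)
# 10x the population yields roughly 15.8x the patents, i.e. ~1.6x per capita.
print(f"{ratio:.1f}x the patents, {ratio / 10:.2f}x per capita")
```

Under such a relation the per-capita advantage of the larger location follows directly from β exceeding unity, which is why planned clusters might preferentially be sited in or near large cities.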

12.6 Assessing Demand for Nanotechnology

In contrast to Section 5.2, here we deal with the problem faced by an industrialist who proposes to introduce a novel product based on nanotechnology. When a decision has to be made regarding the viability of investment (which might be internal in the case of a large existing company rather than a start-up) in the nanotechnology venture, it is obviously desirable to predict the development costs. In more traditional industries, these costs might be extremely well determined. Given nanotechnology's closeness to the fundamental science, however, it is quite likely that unforeseen difficulties may arise during the development of a product for which proof-of-principle has been demonstrated. By the same token, difficulties may have been anticipated on the basis of present knowledge, but subsequent discoveries may enable a significant shortcut to be taken. On balance, these positive and negative factors might compensate each other; it seems, however, to be part of human nature to minimize the costs of undertaking a future venture when the desire to undertake it is high [21]. There is a strong element of human psychology here.

The development, innovation and marketing costs determine the amount of investment required. The return on investment arises through sales of the product (the market); that is, it depends on demand, and the farther downstream the product, the more fickle and unpredictable the consumer.

A starting point for assessing these costs would appear to be the elasticities of supply and demand. Extensive compilations of these elasticities have been made in the past [22]; an updated version might be useful for products of substitution and innovation. Of course, this would represent only a very rudimentary assessment, because all the (unknown) cross-elasticities also need to be taken into account. Furthermore, the concept has not been adequately developed to take degrees of quality into account (often difficult to quantify).
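As a minimal sketch of how such an assessment might begin, the arc (midpoint) elasticity of demand can be estimated from two observed price–quantity pairs; the figures below are hypothetical:

```python
# Arc (midpoint) price elasticity of demand from two observed points.
def arc_elasticity(q0, q1, p0, p1):
    dq = (q1 - q0) / ((q1 + q0) / 2)   # proportional change in quantity
    dp = (p1 - p0) / ((p1 + p0) / 2)   # proportional change in price
    return dq / dp

# Hypothetical: a nano-enhanced substitute sells at a 10% premium and
# loses 5% of unit sales.
e = arc_elasticity(q0=1000, q1=950, p0=100, p1=110)
# |e| < 1 (inelastic): revenue rises despite the premium (950*110 > 1000*100).
print(f"elasticity = {e:.2f}")
```

A full assessment would, as noted above, also require the cross-elasticities with respect to the products being displaced, which are generally unknown for a novel material.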

A perpetual difficulty is that only very rarely can the impact of the introduction of a new product be compared with its non-introduction. Change may have occurred in any case and even the most carefully constructed models will usually fail to take into account the intrinsic nonlinearities of the system. These features vitiate the accuracy of the usual kind of prediction.

12.6.1 Modeling

A decision whether to invest in a new technology will typically be made on the basis of anticipated returns. In the case of incremental technology these returns can generally be estimated by simple extrapolation from the present situation; by definition, for any radical (disruptive) technology there is no comparable basis from which to start. Hence one must have recourse to a model, and the reliability will depend upon the reasonableness of the assumptions made. Naturally, as results start to come in from the implementation of the technology one can compare the predictions of the model with reality and adjust and refine the model. An example of this sort of approach is provided by cellular telephony: the model is that the market consists of the entire population of the Earth.

One of the problems of estimating the impact of nanotechnology tends to be the overoptimism of many forecasters. The “dotcom” bubble of 2000 is a classic example. Market forecasts for mobile phones had previously assumed that almost every adult in the world would buy one and it therefore seemed not too daring a leap to assume that they would subsequently want to upgrade to the 3G technology. Although the take-up was significant it was not in line with the forecast growth of the industry—with all too obvious consequences. Nanotechnology market forecasting is still suffering from the same kind of problem; for example, will every young adult in the requisite socio-economic group buy a device like an iPod capable of showing video on a postage stamp-sized screen? The next section offers a more sober way to assess market volume.
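The kind of model referred to above can be sketched as logistic diffusion towards an assumed market ceiling; the ceiling K, growth rate r, and inflection time t0 below are illustrative assumptions. The 3G forecasts described above erred chiefly in the choice of K, and the sketch shows how sensitive every forecast year is to that single assumption:

```python
import math

# Logistic-diffusion sketch: cumulative adopters at time t (years), saturating
# at an assumed market ceiling K.  K, r, and t0 are hypothetical parameters.
def adopters(t, K=5.0e9, r=0.6, t0=8.0):
    return K / (1 + math.exp(-r * (t - t0)))

optimistic = adopters(10)             # ceiling: almost everyone on Earth
sober = adopters(10, K=2.5e9)         # ceiling: half that market
# Halving the assumed ceiling halves the forecast at every point in time.
print(f"year-10 forecast: {optimistic:.2e} vs {sober:.2e}")
```

As actual take-up figures come in, K, r, and t0 can be re-fitted, which is the adjustment and refinement of the model described above.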

12.6.2 Judging Innovation Value

From the manufacturer's viewpoint, any substitutional or incremental innovation that allows specifications to be maintained or surpassed without increasing cost is attractive. But how will a prospective purchaser respond to an enhanced specification available for a premium price? The J-value was originally introduced to objectively determine whether spending money on a life-extending safety measure was worthwhile in terms of an enhanced life quality index Q. We propose using it to determine whether a safety-neutral product enhances the quality of life.

The life quality index Q (assuming that people value leisure more highly than work) is defined as [23]:

Q = G^q X_d  (12.1)

where G is average earnings (GDP per capita) from work, q is optimized work–life balance, defined as

q = w/(1 - w)  (12.2)

where w is the optimized average fraction of time spent working (q = 1/7 seems to be typical for industrialized countries), and X_d is the discounted life expectancy (reflecting the fact that money available now is valued more highly than money available in the future; a typical discount rate is 2.5% per annum). G^q has the form of a utility function: as pointed out by D. Bernoulli, initial earnings (spent on essentials) are valued more highly than later increments (spent on luxuries).

Many consumer products, including robots, that enhance the convenience of life are now available, especially in Japan [24]. Theoretically, if the innovation allows a chore to be done faster, without any change in life expectancy, then its purchase should be attractive if the increase of Q due to the decrease of w more than compensates for the decrease of Q due to the diversion of some income into the purchase of the robot.
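Eqs. (12.1) and (12.2) can be assembled into a small calculation of this purchase decision. The approximation of X_d as a sum of discounted life-years, and all the input figures (earnings, work fraction, the cost of the robot), are illustrative assumptions rather than calibrated values:

```python
# Sketch of the life quality index of Eqs. (12.1) and (12.2).
def discounted_life_expectancy(years, rate=0.025):
    """X_d: remaining life-years, each discounted at `rate` per annum."""
    return sum((1 + rate) ** -t for t in range(int(years)))

def life_quality_index(G, w, X_d):
    """Q = G^q * X_d, with q = w/(1 - w)."""
    q = w / (1 - w)
    return G ** q * X_d

X_d = discounted_life_expectancy(40)        # 40 remaining years, 2.5% discount
before = life_quality_index(G=40_000, w=1/8, X_d=X_d)  # w = 1/8 gives q = 1/7
# Robot purchase: some income is diverted (G falls) but a chore is done
# faster (w falls); the purchase is attractive if delta_Q > 0.
after = life_quality_index(G=39_500, w=0.12, X_d=X_d)
delta_Q = after - before
print(f"X_d = {X_d:.1f} years, delta_Q = {delta_Q:+.1f}")
```

The sign of delta_Q, rather than its magnitude, is what the J-value style of reasoning uses: whichever of the two effects dominates for the figures at hand decides the purchase.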

12.6.3 Anticipating Benefit

In which sectors can real benefit from nanotechnology be anticipated? What is probably the most detailed analysis hitherto of the economic consequences of molecular manufacturing assumes blanket adoption in all fields, even food production [25]. Classes of commodity particularly well suited for productive nanosystems (PN) include those that are intrinsically very small (e.g., integrated electronic circuits) and those in which a high degree of customization may significantly enhance the product (e.g., medicinal drugs and “masstige” personal care products). In many other cases (and bear in mind that even the most enthusiastic protagonists do not anticipate PNs to emerge in less than 10 years), there are no clear criteria for deciding where “intermediate nanotechnology” could make a worthwhile contribution [26]; a more technologically modest introduction of nanotechnology may allow a familiar product to be upgraded more cheaply than by conventional means.

For any manufacturing activity there is a variety of valid reasons determining the degree of centralization and concentration most appropriate for a particular type of product and production. The actual degrees exhibited by different sectors at any given epoch result from multilevel historical processes of initiation and acquisition, as well as the spatial structure of the relevant distributions of skills, power, finance, and suppliers. The inertia inherent in a factory building and the web of feeder industries that surround a major center mean that the actual situation may diverge considerably from a rational optimum.

The emergence of a radical new technology such as nanoscale production will lead to new pressures, and opportunities, for spatial redistribution of manufacturing, but responses will differ in different market sectors. They will have different relative advantages and disadvantages as a result of industry-specific changes to economies of scale, together with any natural and historic advantages that underlie the existing pattern of economic activities. But we should be attentive to the possibility that the whole concept of economies of scale will become irrelevant with the advent of productive nanosystems, and only partly relevant at the intermediate stages in the development of nanotechnology.

12.7 Technical and Commercial Readiness (Availability) Levels

The notion of Technology Readiness Level (TRL) originated in the NASA Advanced Concepts Office [27]. It was specifically designed to characterize the progress of technologies used in space missions. Despite this association with a very closed and controlled noncommercial environment, it has since become widely adopted to characterize general industrial technologies, for which it is far from appropriate. Hodgkinson et al. proposed modifications to the scale to render it more suitable for general use, the modified scale being termed one of Commercial Readiness Levels (CRL) (Table 12.1) [28]. The third column in the table has been added to characterize the nature of the activity culminating in the achievement of each level. “Science” here means “research”; that is, the application of the scientific method to make discoveries and create new knowledge; and “engineering” means “development”; that is, the application of existing knowledge to perfect something in order to enable it to deliver practical, useful results [15]. Of course, many new things are also discovered during development, but the work is directed towards a specific goal, whereas science is open-ended. Science can be considered to have become technology by TRL 4.

Table 12.1

Commercial readiness levels (CRL, [27,28]). The third column identifies the nature of the work involved in arriving at that level
CRL Description of CRL Naturea
1 Basic principles observed S
2 Proof of concept S
3 Technology application formulated S/E
4 Component validation in the laboratory environment S/E
5 Component validation in the real environment E
6 Prototype system demonstration in the real environment E
7 Commercial introduction E
8 Commercial success E

a Science (S); engineering (E).

Even the modified scheme of CRL is, however, inadequate to represent the progression of nanotechnology, because of some unique features, especially its extremely rapid progression and the intermingling of academic and industrial research. Hence, a scheme of Nanotechnology Availability Levels (NAL) is proposed to more accurately describe the state of a given part of nanotechnology with respect to the desired application (Table 12.2).

Table 12.2

Nanotechnology availability levels (NAL), giving next step requirements (NSR)
NAL Description NSRb
0 Idea R1
1 Basic principles observed R1, V
2 Proof of concept I
3 Technology application(s) formulated IP
4 Demonstration for some specific application P
5 Available in some form St
6a COTSa availability R3, SC
6b Validated for a desired application R3, SC
7a Complete supply chain established R4
7b Incorporated into the product U, R4
8 History of success R4

a Commercial off-the-shelf.

b Key to next step requirements (NSR):

R1 Exploratory laboratory work
V Verification
I Inspirational thinking
IP Patent application
P Development of a feasible production route
R2 Research to establish whether the technology can be used for the desired application and whether it is superior to existing technology
St Standardization
R3 Testing in the desired application
U Use
SC Establishing the supply chain
R4 Optimization


Note that the levels are not ordered in a strict sequential hierarchy throughout: NAL 4 may well accompany NAL 3; availability at NAL 5 may only be of research grade material. Levels 0 to 6a would typically be reached in the research environment; levels 0–3 might well be carried out in an academic environment; while institutes of technology might continue work through levels 4 and 5, possibly then transferring operations to a spin-off company in order to progress the technology further. NAL 6a corresponds to the interest of the supplier and ultimate manufacturer; NAL 6b and 7b to the interest of the ultimate end-user, which is likely to have to shoulder the burden of the research work labeled R3. 6a and 6b are likely to take place in parallel; likewise for 7a and 7b. Ongoing R4 is generally required for sustainable deployment. Nowadays it is common for academic laboratories to seek to patent new technologies, which introduces some drag into the reporting—a published account of the principles (NAL 1) may only appear after levels 4 and 5 have been reached. Note also the deliberate alignment of the TRL, CRL and NAL. Therefore, in every case the actual numbers should convey roughly the same meaning.
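Table 12.2 lends itself to a simple machine-readable encoding; the sketch below records each level's description and next-step requirements and looks up what is needed to progress. The level and requirement codes are those of the table; the data structure itself is merely an illustration.

```python
# Table 12.2 as a lookup structure: each nanotechnology availability level
# (NAL) maps to its description and next-step requirements (NSR). The
# level and requirement codes follow the table; the encoding is
# merely illustrative.
NAL = {
    "0":  ("Idea", ["R1"]),
    "1":  ("Basic principles observed", ["R1", "V"]),
    "2":  ("Proof of concept", ["I"]),
    "3":  ("Technology application(s) formulated", ["IP"]),
    "4":  ("Demonstration for some specific application", ["P"]),
    "5":  ("Available in some form", ["St"]),
    "6a": ("COTS availability", ["R3", "SC"]),
    "6b": ("Validated for a desired application", ["R3", "SC"]),
    "7a": ("Complete supply chain established", ["R4"]),
    "7b": ("Incorporated into the product", ["U", "R4"]),
    "8":  ("History of success", ["R4"]),
}

def next_steps(level):
    """Return the NSR codes needed to progress from the given NAL."""
    _description, nsr = NAL[level]
    return nsr

print(next_steps("6a"))  # ['R3', 'SC']
```

Such an encoding makes explicit that 6a/6b and 7a/7b are parallel branches sharing requirements, rather than a strict ladder.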

Level 7a can be further divided into the following maturity levels [29]:

  1. Manufacturable solutions are not known
  2. Interim solutions are known
  3. Manufacturable solutions are known
  4. Manufacturable solutions exist and are being optimized.

Level 7b could already begin to be accessed once Level 7a.2 or 7a.3 is reached (as we can respectively label items 2 and 3 from the above list of manufacturing maturity levels).

Nanotechnologies that can meet short-term needs for a particular purpose have typically already been developed for some other application (i.e., NAL 3 or 4). Given that the material is available in some form, it is essentially a matter of straightforward engineering development to adapt it for the specific requirement, without excluding the possibility that unexpected problems arise needing more fundamental investigation of, probably, a very specific aspect. Short-term technologies already have some commercial activity, which may not be in the same area of application as the one of interest to the reader of this book. Medium-term technologies are being actively pursued in university and other academic laboratories as well as in the research laboratories of leading high technology industries (although activity in the latter is usually only revealed when a patent application is filed). Long-term technologies are currently less actively pursued in academic laboratories than might be imagined (because they are increasingly forced to rely upon short-term research contracts with government funding agencies that require detailed, prespecified outputs); the main thrust currently comes from privately funded nonprofit institutions and from theoretical work by academics not forced to rely on external funding.

Manufacturability. It should be borne in mind that the ultimate goal of most nanotechnology research is high-volume, low-cost manufacture of materials and devices. It seems that hardly any of the great number of research papers address this issue. Single-electron devices and the like tend to be a tour de force of individual skill and ingenuity, but there is no clear route thence to the higher availability levels. Unfortunately, even the more detailed and better adapted (compared with TRL or CRL) nanotechnology availability levels (NAL) do not capture the possibility of an intrinsic limitation to the new technology that will prevent it ever getting beyond level 3. Manufacturability, therefore, needs careful attention, which will probably require customized analysis, since it is not usually addressed in the literature (cf. Section 4.2).

12.8 Predicting Development Timescales

This is difficult for new technologies because they progress exponentially; the best known example is probably Moore's law, which states that the number of components on a VLSI chip doubles approximately every eighteen months. Hence, if we now have 10^9 components on a chip, we can expect there to be 2^6 × 10^9, or almost 10^11, components after nine years [30]. This kind of prediction of exponential development can only be made with some hope of reliability when an entire industry is taken into consideration. The rate of development in a narrow field is much harder to predict. It usually depends strongly on the volume of available resources, especially manpower. Furthermore, discoveries in other, seemingly unrelated, fields may solve bottlenecks and have other dramatic, unpredicted effects [31].
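The extrapolation arithmetic can be checked in a couple of lines; the sketch below takes the doubling period as eighteen months, which is what the quoted figures (2^6 × 10^9 after nine years) imply.

```python
# Exponential extrapolation: a quantity doubling every `period` years
# grows by a factor of 2**(t / period) after t years.
def extrapolate(now, t_years, period_years):
    return now * 2 ** (t_years / period_years)

# 1e9 components today, doubling every 18 months, projected 9 years on:
projected = extrapolate(1e9, 9, 1.5)
print(f"{projected:.1e}")  # 6.4e+10, i.e. almost 1e11
```

Stretching the doubling period to two years would give only about 2 × 10^10 over the same nine years, which shows how sensitive such forecasts are to the assumed rate.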

Short-term (<5 years) needs are typically met by substitution of a traditional material or extant device (which might already be macro) by nanotechnology. In simple cases, there is a clearly predictable benefit (e.g., less cost for the same performance, or more performance for the same cost). In more complex cases, there is a trade-off; if benefits and disbenefits are size-dependent but the dependences are of opposite sign, there will be a definite characteristic size (crossover point) below which there is no advantage to be gained. This applies, for example, to microelectromechanical systems (MEMS) used, e.g., as inertial sensors such as accelerometers; performance is degraded upon miniaturization down to the nanoscale.

These applications need essentially development, including optimization, of existing technologies. The technologies may have been developed with other purposes in mind, in which case adaptation to the specific need is required. Computer modeling may be brought to bear if experiments are difficult, in order to narrow the scope and reduce the required number of the experiments. In all these applications, the first step must be to draw up the detailed specifications of what is required. Available knowledge can then be used to select candidate solutions, and the final choice must depend on comparative experiments.

Although at first sight there seems to be a plethora of nanomaterials ripe for short-term applications, beyond niche applications (e.g., tennis balls and rackets, stain-proof cravats) the lack of standardization of raw materials is a definite handicap. Research work, therefore, has to rely on the material produced by a particular supplier, with no guarantee that the results can be used with materials from other suppliers (even if the material is nominally the same). If the research is successful and the material is shown to be fit for purpose, a similar difficulty arises at the next level because of the fragility of a supply chain depending upon a single manufacturer. That is why the introduction of exchange-based trading of commoditized nanomaterials, rather than any specific research result, is a prerequisite for sustainable short-term development (cf. Section 13.4).

Medium-term (5–15 years) applications require research (of type R2 according to Table 12.2), both experimental and more or less profound theoretical analysis in order to be able to appraise the practicability of selecting them for development.

The long-term (>15 years) applications of nanotechnology, apart from those listed in the previous section that belong to the far end of the medium term, are essentially associated with the development of nanoscale assemblers (also known as bottom-to-bottom “fabbers”, mechanosynthesizers, etc.) as universal fabrication tools, as suggested by R.P. Feynman in his 1959 Caltech lecture and later developed in much more detail by K.E. Drexler, R.A. Freitas, R.C. Merkle and others. Although these assemblers would therefore appear to constitute the very core of the mainstream of nanotechnology, and although the US National Nanotechnology Initiative (NNI) launched in 2001 makes strong reference to Feynman, that initiative has ended up giving very little support to the development of bottom-to-bottom fabrication, which is, therefore, nowadays carried out in a small number of university or private institutions such as the nonprofit Institute for Molecular Manufacturing in Palo Alto. A similar situation prevails in Europe. The situation in China is not reliably known.

The Nano Revolution will consummate the trend of science infiltrating industry that began with the Industrial Revolution and which can be roughly described in four stages of increasing complexity (Table 12.3) [32]. Note that Stage 4 also encompasses the cases of purely scientific discoveries (e.g., electricity) being turned to industrial use. Clearly nanotechnology belongs to Stage 4, at least in its aspirations; indeed nanotechnology is the consummation of Stage 4; a corollary is that nanotechnology should enable science to be applied at the level of Stage 4 to even those very complicated industries that are associated with the most basic needs of mankind, namely food and health. Traditional or conventional technologies (as we can label everything that is not nanotechnology) also have Stage 4 as their goal but in most cases are still quite far from realizing it.

Table 12.3

The infiltration of science into industry (after [32])
Stage Description Characteristic feature(s)
1 Increasing the scale of traditional industries Measurement and standardization
2 Some scientific understanding of the processes (mainly acquired through systematic experimentation in accord with the scientific method) Enables improvements to be made
3 Formulation of an adequate theory (implying full understanding of the processes) Possibility of completely controlling the processes
4 Complete integration of science and industry, extensive knowledge of the fundamental nature of the processes Entirely new processes can be devised to achieve desired ends

12.9 Patents

The patent system has always been an anomaly in the free enterprise economy. It amounts to monopoly privileges, enshrined in law, accorded to inventors. England appears to have the oldest continuous patenting system in the world, starting with the patent granted in 1449 to John of Utynam for making stained glass. Already by 1623, however, the Statute of Monopolies placed clear limitations on the extent of monopoly: temporal (at that time, a maximum of 14 years), and also stipulating that the public interest must be respected.

The patent system greatly expanded with the onset of the Industrial Revolution, the number of patents roughly following the increase in national wealth. Arguments for and against them have continued to this day. The main argument in favor is that patents provide an incentive to the innovator. This was always considered to be rather specious in the case of the individual inventor, but at company level it is still invoked, for example by the pharmaceutical industry. The argument is that were it not for the period of guaranteed monopoly, it would be difficult to recoup the tremendous costs of research and even more so of development to produce new medicinal drugs, because other companies could simply copy and sell the drugs themselves once they had been placed on the market. Although this view is widespread, it does not have empirical support [33]. It is also rather difficult to see the logic of this argument, because exactly the same premises are widely used to justify the privatization of state monopolies in order to open up their services to a multiplicity of competing companies: a guaranteed monopoly is believed to promote inefficiency in the industry to which it applies, and exactly the same appears to be true of the pharmaceutical industry.

The main argument, however, against patents is that they actually stifle innovation. I.K. Brunel was a noted critic on those grounds, and refused to protect any of his own ideas [34]. Telecommunications have been a rich field (and perhaps still are) for such stifling; for example, in 1937 the US Federal Communications Commission declared that the Bell Telephone System had suppressed over 3000 unused patents in order to forestall competition. Many of these patents did not arise through work done within the company, but were acquired; they concerned alternative devices and methods for which Bell had no need itself. It has been estimated that 95% of patents are obstructive. A counterargument, albeit somewhat contrived, to the apparent disbenefit is that a great deal of ingenious research must then be done in order to circumvent the web of existing patents.

Simply looking at the history of patent rights (Table 12.4) in different countries and the contemporaneous progression of their pharmaceutical industries also renders untenable the notion that patents are essential to incentivize research. If patents were essential for the success of the pharmaceutical industry, most drugs should have been invented and produced in the UK and the USA. In contrast, Germany, Italy and Switzerland have historically been the leading countries (in 1978, when Italy changed its law, its pharmaceutical industry was the fifth largest in the world) and in World War I the USA had to import dyes from Germany. In France, the repeal of the 1959 ban virtually killed its chemical industry, which migrated to Switzerland (cf. the movie industry migrating to Hollywood to avoid Edison's patents). In the absence of patents, one notices intense innovative activity leading to the constant improvement of productivity. Furthermore, knowledge is not built from discrete, isolated entities; the invention of new products and processes builds on existing ones [35]; successive advances are strongly correlated.

Table 12.4

Patent laws in different countries
Country Year State of the laws
France 1959 The drug as product NOT patentable
ditto 1978 1959 ban lifted
Germany 1967 Prior to that year only processes were patentable
Italy 1978 Products patentable
Spain 1986a Products patentable
Switzerland 1907 The process to produce a drug patentable
ditto 1954 1907 law strengthened
ditto 1977 Products patentable
UK 1449 Processes and products patentable
USA 1790 The drug itself (the product) patentable
ditto 1790 The process to produce a drug patentable

a Upon entry to the European Union.

Open source software (cf. Chapter 10) obviously represents a fundamental challenge to the patent system. Open source hardware is now beginning to be explored. Ultimately, nanotechnology will make the two practically indistinguishable from each other.

12.10 Generic Business Models

Wilkinson has identified four generic business models (Figure 12.2), all beginning at the most upstream end of the supply chain, but extending progressively downstream. All these companies produce real objects. Another business model that is quite widespread among nanotechnology companies is simply to license intellectual property in the form of patents.

Figure 12.2 Generic business models for nanomaterial suppliers. From left to right, Model A (e.g., Thomas Swan) produces only nanostructured materials. Model B (e.g., Zyvex) produces nanostructured materials and formulates additives. Model C (e.g., Nucryst) produces nanostructured materials, formulates additives and supplies enhanced materials. Model D (e.g., Uniqema Paint) produces nanostructured materials, formulates additives, makes enhanced materials and finished goods incorporating those materials. Reproduced from J.M. Wilkinson, Nanotechnology: new technology but old business models? Nanotechnol. Perceptions 2 (2006) 277–281 with permission of Collegium Basilea.

The typical temporal evolution of new technology is shown in Figure 12.3; it is based on a model of the printed circuit board industry, and so far has fitted the observed course of events in microsystems. It can reasonably be considered as a model for nanotechnology as far as its substitutional and incremental aspects are concerned. Insofar as it is universal and radical, however, prediction becomes very difficult.

Figure 12.3 Generic model proposed for the temporal evolution of nanotechnology companies (originally developed by Prismark Associates, New York for the printed circuit board industry; it also seems to fit the evolution of the microtechnology industry). Reproduced from J.M. Wilkinson, Nanotechnology: new technology but old business models? Nanotechnol. Perceptions 2 (2006) 277–281 with permission of Collegium Basilea.

If one examines in more detail the early stages, it appears that there might be a gap between early adopters of an innovation and development of the mainstream market [36]. The very early market peaks and then declines, followed by the mainstream development as shown in Figure 12.4. This feature underlines the importance of patient investors in new high-technology companies.

Figure 12.4 Widely held view of the evolution of expectations of a new technology.

Most nanotechnology start-ups use model A or B, sometimes C (the boundary between B and C is fuzzy): they must rely on finding other companies able to act as customers or partners in order to deliver goods to the marketplace (e.g., Nucryst has partnered with Smith & Nephew). Models C and especially D require large quantities of (venture) capital.

Figure 12.3 can be overlaid by the curve of Figure 12.4. Brain–computer interfaces and other types of human augmentation, quantum computing, and 3D bioprinting for organ transplants are examples of technologies on the early part of the curve. The Internet of things and autonomous road vehicles seem to be at the peak of inflated expectations. Heading towards the trough of disillusionment are augmented reality, consumer 3D printing, wearable electronic devices (other than watches), and hybrid cloud computing. Climbing toward sustainability are industrial 3D printing, virtual reality, and autonomous field vehicles.

Although “expectations” seems like a nebulous parameter, it is a very influential one, motivating an inventor to take an idea to a prototype, motivating an investor to back a project and, conversely, demotivating when expectations are falling. Yet another graph could be constructed, that of the time derivative of expectation, which seems to correspond to motivation.
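The claim that motivation tracks the time derivative of expectations can be illustrated numerically; the expectations series below is invented for the purpose.

```python
# Motivation as the time derivative of expectations: a finite-difference
# sketch over an invented, hype-cycle-shaped expectations series
# (arbitrary units, one value per year).
expectations = [1, 3, 7, 9, 8, 5, 3, 4, 5, 6, 6]
motivation = [b - a for a, b in zip(expectations, expectations[1:])]
print(motivation)  # positive while hype builds, negative in the trough
```

On such a curve motivation is strongly positive during the build-up of hype, negative in the descent into the trough, and weakly positive again on the climb towards sustainability.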

A different viewpoint is brought by Sethi, who identifies three kinds of start-ups: “route-to-market”, which “primarily focus on providing a new business model to as large a group of users as possible”, and whose “main focus is therefore the speed of scaling up”; “starting from pain”, which try to address a problem; and “technology-driven” [37]. Most nanotechnology start-ups seem to belong to the third kind. This means that they are confronted with the immediate problem of lack of an obvious market—mostly they do not solve any pre-existing problem, and if they do, it is likely to be something very obscure that not many people are interested in.

Naturally enough, nanotechnology is very heavily rooted in the ethos of technology, but that does not suffice to make it a commercial success. Given the universal nature of the technology, however, problems that could be solved using nanotechnology are presenting themselves all over the place. As for scaling up, nanofacture as a production technology is particularly amenable to scale-up by simply multiplying production units and setting them up to work in parallel with one another.

12.11 Why Nanotechnology Companies Often Fail

There is a particularly striking dichotomy between the wildly sanguine predictions of “trillion-dollar markets” by market research organizations and the dismal fact that most nanotechnology companies are struggling. In order to resolve the discrepancy between these two scenarios, a number of propositions come to mind, such as:

  1. The predictions are inaccurate;
  2. Most nanofacture takes place in large, well-established companies.

We have already discussed the likely inaccuracy of the market predictions (Section 5.2). Recapitulating, the sources of inflation are:

  1. “Old” nanomaterials, such as carbon black, are included. These commodities are produced in very large quantities and greatly outweigh “genuine” nanomaterial production. The specific value (i.e., price per unit mass) is lower for old nanomaterials than for new ones, but usually market predictions are given in terms of monetary value rather than tonnage;
  2. The output of the semiconductor processing industry is included, on the basis that the typical feature size on a silicon chip is smaller than 100 nm, hence it can legitimately be called nanofacture regardless of the fact that the transistors and other components work in the same way as ones with larger feature sizes (nevertheless, the ultraminiaturization (i.e., nanification) of the circuits does enable a host of novel applications, such as cellphones);
  3. The value of the final product containing nanomaterials produced using nanofacture is reckoned, rather than the value added by the nanotechnology.

Undoubtedly there are a number of “genuine” nanomaterials made by large chemical companies, wherever there is sufficient demand. Doubtless in many or most cases this demand—e.g., for nanostructured catalysts—is internally generated. But most of the different types of nanomaterials available are made by small companies. Many of them are university spin-outs [38]. There exists a huge number of such small companies (including microcompanies, with fewer than 10 employees), each one manufacturing a unique nano-object.

It is much more difficult to assess the level of commercial activity in nanofacture that is not used to produce nanomaterials, apart from the ultraprecision engineering sector [40]. Besides “traditional” (in the sense of following the trend illustrated in Figure 1.1) ultraprecision engineering, now capable of nanometer resolution (Figure 12.5), additive manufacturing (also known as 3-dimensional printing) is now achieving submicrometer resolution [41], and therefore deserves to be included under nanotechnology. Presently its main use is, however, rapid prototyping rather than any kind of mass production.

Figure 12.5 The Tetraform “C” 3-axis grinding machine approaching Taniguchi's ideal of “atomic bit machining”. Reproduced from P. McKeown et al., Ultraprecision machine tools—design principles and developments. Nanotechnol. Perceptions 4 (2008) 5–14 with permission of Collegium Basilea.

The small start-ups nanofacturing nanomaterials generally have their own proprietary technology. Hence, we have large numbers of small or even microcompanies, each one making one or more innovative products, differing from those made by the other companies, on a small scale. In most cases the products are constantly being improved, especially regarding purity. They can be sold to research laboratories. Any company manufacturing a downstream product and wishing to incorporate a nanomaterial into that product encounters the following problems:

  1. The materials are not made to a fixed specification, much less to a standard specification (as might be published by ISO, for example);
  2. The production capacity cannot cope with the demand commensurate with mass production of the downstream good.

If the nanomaterials were made to a fixed specification (it would not necessarily have to be an ISO standard, but would have to be publicly accessible, or at least accessible to all concerned parties), it would be possible for large-scale demand to be met by combining inputs from several different small manufacturers. This would also increase robustness of supply in case of failure of one of the small companies. But this possibility is ruled out because of the lack of standards. It would be possible for a single company to scale up supply. Nanotechnology has the advantage that, unlike conventional chemical engineering, it is not necessary to laboriously re-optimize production parameters every time the scale of production is changed; typically production capacity is increased via scale-out: the simple duplication of machinery. But this is usually expensive, bringing us to another problem, more fully discussed in Chapter 13—the difficulty of accessing capital.
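The scale-out arithmetic is simple but unforgiving, which is partly why investors balk; the line capacity, line cost and demand figures below are invented purely for illustration.

```python
import math

# Scale-out: capacity grows by duplicating production lines, so the
# capital requirement grows roughly linearly with demand (there is no
# conventional economy of scale in the plant itself). All figures are
# invented purely for illustration.
line_capacity_kg_per_year = 50       # one laboratory-scale line
line_cost = 2_000_000                # capital cost of each duplicated line
demand_kg_per_year = 5_000           # mass-production requirement

lines_needed = math.ceil(demand_kg_per_year / line_capacity_kg_per_year)
capital_needed = lines_needed * line_cost

print(lines_needed, capital_needed)  # 100 lines, 200 million
```

No re-optimization of process parameters appears in this calculation, which is the advantage of scale-out over conventional chemical-engineering scale-up; the obstacle is simply the size of the capital sum.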

Given these difficulties it is little wonder that many of these micro and small companies have abandoned a truly commercial business model and simply rely on periodic injections of public funds via, for example, the European Union's “Framework” research and technical development programme. Once the intricacies of the complicated application processes have been mastered, it becomes increasingly cost-effective to apply for grants; especially if a smoothly working consortium has been assembled, successive grants can enable a company to continue for many years, even if it never actually sells any products. Meanwhile potential end-users hoping to introduce nanotechnology into their products find that their aspirations have to be indefinitely postponed.

Note that Tables 12.1 and 12.2 do not explicitly reveal the “valley of death” that intervenes in the climb from the initial innovation to commercial success around level 7—it corresponds to the “shake-out” in Figure 12.3.

References

[1] Synonyms for “universal” in this context are “generic”, “general purpose”, and “platform”.

[2] There is a certain ambiguity here, since the nanoscale processors (which could now be called nanoprocessors) have only been introduced very recently. Hence, the majority of extant information processors probably still belong to microtechnology rather than nanotechnology, but the balance is inexorably tipping in favor of nanotechnology.

[3] This can be considered as quasi-axiomatic. It seems to apply to a very broad range of situations. For example, in agriculture the primary grower usually obtains the smallest profit. The explanation might be quite simple: it is customary and acceptable for each purveyor to retain a certain percentage of the selling price as profit; hence, since value is cumulatively added as one moves down the supply chain, the absolute value of the profit will inevitably increase. In many cases the percentage actually increases as well, on the grounds that demand from the fickle consumer fluctuates, and a high percentage profit compensates for the high risk of being left with unsold stock. As one moves upstream, these fluctuations are dampened and hence the percentage diminishes.

[4] S. Shane, Academic Entrepreneurship. Cheltenham: Edward Elgar; 2004.

[5] In practice, however, some parts of nanotechnology consist of products of simple substitution (cf. Section 5.1).

[6] Nanoscience and Nanotechnologies: Opportunities and Uncertainties. London: Royal Society of London & Royal Academy of Engineering; 2004.

[7] P.A. Revell, The biological effects of nanoparticles, Nanotechnol. Percept. 2006;2:283–298 and the many references therein.

[8] T. Kealey, Sex, Science and Profits. London: Heinemann; 2008.

[9] G.R. Evans, Funding science: a new law, new arrangements, J. Biol. Phys. Chem. 2017;17:33–37.

[10] C.N. Parkinson, In-Laws and Outlaws. London: John Murray; 1964:134–135.

[11] A. Berezin, The perils of centralized research funding systems, Knowl. Technol. Policy Fall 1998;11:5–26.

[12] The author spent some weeks in his institute in 1991. For a published account, see I.A. Shcherbakov, 25 Years of the A.M. Prokhorov General Physics Institute of the Russian Academy of Science (RAS), Quantum Electron. 2007;37:895–896.

[13] Unfortunately the procedure for applying for these grants is unacceptably bureaucratic and thus vitiates what would otherwise be the benefit of the scheme; furthermore the success rate in the first round was only a few percent, implying an unacceptable level of wasted effort in applying for the grants and evaluating them. The main mistake seems to have been that the eligibility criteria were set too leniently. This would also account for the low success rate. Ideally the criteria should be such that every applicant fulfilling them is successful. Incidentally, this criticism is just one of many directed at European Union research funding: a declaration launched in February, 2010 in Vienna entitled “Trust Researchers” attracted more than 13,000 signatures.

[14] O.G. Selfridge, A splendid national investment, in: I.J. Good (Ed.), The Scientist Speculates. London: Heinemann; 1962:31.

[15] J.J. Ramsden, The differences between engineering and science, Meas. Control 2012;45:145–146.

[16] J.J. Ramsden, Integrity, administration and reliable research, Oxford Magazine 2012;Noughth Week, Trinity Term:6–8; J.J. Ramsden, The independence of University research, Nanotechnol. Percept. 2012;8:87–90.

[17] See S. Breschi, Knowledge spillovers and local innovation systems: a critical survey. Liuc Papers n. 84, Serie Economia e Impresa (27 March 2001) for an assessment.

[18] See, however, S. Milgram, The experience of living in cities, Science 1970;167:1461–1468; updated in J.J. Ramsden, The future of cities, Nanotechnol. Percept. 2016;12:63–72.

[19] G.A. Carlino, Knowledge spillovers: cities' role in the new economy, Fed. Reserve Bank Philadelphia Bus. Rev. 2001;4:17–26.

[20] M.P. Feldman, D.B. Audretsch, Innovation in cities: science-based diversity, specialization and localized competition, Eur. Econ. Rev. 1999;43:409–429.

[21] This state of affairs has led to the failure of many (geographical) exploratory expeditions. It is understandable, given the prudence (some would say meanness) of those from whom resources are being solicited, but is paradoxical because the success of the venture is thereby jeopardized by being undertaken with inadequate means. Failure might also decrease the chances of gathering support for future expeditions of a similar nature.

[22] E.g. H.S. Houthakker, L.D. Taylor, Consumer Demand in the United States: Analyses and Projections. Cambridge, MA: Harvard University Press; 1970.

[23] P.J. Thomas, D.W. Stupples, M.A. Alghaffar, The extent of regulatory consensus on health and safety expenditure. Part 1: Development of the J-value technique and evaluation of regulators' recommendations, Trans. IChemE B 2006;84:329–336.

[24] Unfortunately in much of Europe there is still a strong tendency to buy the cheapest, regardless of quality, which of course militates against technological advance.

[25] R.A. Freitas, Economic impact of the personal nanofactory, Nanotechnol. Percept. 2006;2:111–126.

[26] As far as nanotechnology is concerned, the task of deciding whether agile manufacturing is appropriate is made more difficult by the fact that many nanotechnology products are available only from what are essentially research laboratories, and the price at which they are offered for sale is rather arbitrary; in other words, there is no properly functioning market. This situation will, however, evolve favorably if the industry embraces the exchange system for trade (Section 13.4).

[27] J.C. Mankins, Technology Readiness Levels. NASA Advanced Concepts Office; 6 April 1995.

[28] J. Hodgkinson, et al., Gas sensors 2. The markets and challenges, Nanotechnol. Percept. 2009;5:83–107. This modified scheme is called one of Commercial Readiness Levels (CRLs).

[29] Taken from the International Technology Roadmap for Semiconductors (ITRS).

[30] In this case that does not imply that performance is necessarily 100 times better, because programming abilities to exploit the increased number of components are likely to lag behind.

[31] In industries with strong regulatory controls, such as pharmaceuticals, the need to fulfil extensive and exhaustive tests before commercialization means that any drug, whether or not it involves nanotechnology, may take as long as 15 years to reach the market.

[32] J.D. Bernal, The Social Function of Science. London: Routledge; 1939.

[33] E. Mansfield, et al., Imitation costs and patents: an empirical study, Econ. J. 1981;91:907–918. This study found that the average costs of copying approached 70% of the costs of the original invention.

[34] L.T.C. Rolt, Isambard Kingdom Brunel. London: Longmans, Green & Co.; 1957:217.

[35] E.g. J. Thursby, M. Thursby, Where is the new science in corporate R&D? Science 2006;314:1547–1548.

[36] G.A. Moore, Crossing the Chasm. New York: Harper Business; 1991.

[37] A. Sethi, From Science to Start-Up: The Inside Track of Technology Entrepreneurship. Springer International Publishing; 2016.

[38] Several terms are in use to describe the formation of new, small companies. A “start-up” is simply a business that has just begun. Typically it will have exclusive rights, either through outright ownership or a licensing agreement, to some intellectual property (IP). A “spin-off” evokes the idea of a small particle detaching itself from a large rotating body to become an independent entity. A “spin-out” evokes a similar idea but the particle retains some tangible ties with the parent body; for example, the parent body, which may be a university or a large research-oriented company, may have a minority shareholding; whereas the spin-off, started by staff formerly employed by the parent, is financially independent (but obviously inherits some intangible assets in the heads of those staff). Companies spinning off daughters often use the term “carve-out” rather than “spin-out” [39].

[39] S. Watanabe, A paradigm shift to sustainable evolution through creation of universal ties, Nanotechnol. Percept. 2016;12:100–129.

[40] P. McKeown, et al., Ultraprecision machine tools—design principles and developments, Nanotechnol. Percept. 2008;4:5–14.

[41] M. Vaezi, H. Seitz, S. Yang, A review on 3D micro-additive manufacturing technologies, Int. J. Adv. Manuf. Technol. 2013;67:1721–1754; see erratum: Int. J. Adv. Manuf. Technol. 2013;67:1957.

Further Reading

[42] M. Boldrin, D.K. Levine, Against Intellectual Monopoly. Cambridge: Cambridge University Press; 2008.
