Chapter 2

Science, Technology, and Wealth

Abstract

Models for the growth of science, technology and wealth are presented. The traditional, linear model, first formulated by Francis Bacon and beloved of State officials, puts them in that order. A more realistic model positions science as an offshoot of technology and wealth. Nevertheless, nanotechnology, in common with the nuclear industry, appears to fit the traditional model better than the realistic model that well describes most of the history of technology within human societies.

Keywords

Linear model; Alternative model; Exponential growth; Social value

Our knowledge about the universe grows year by year. There is a relentless accumulation of facts, many of which are reported in scientific journals, but also at conferences (and which may, or may not, be written down in published conference proceedings) and in reports produced by private companies and government research institutes (including military ones) that may never be published—and some work is now posted directly on the Internet, in a preprint archive, or in an online “journal”, or on a personal or institutional website. The printed realm, comprising published journals and books, constitutes the scientific literature [1]; the rest is known as the “gray literature”. A recent trend in scientific publishing is the “open access” journal, which demands an “article processing charge” (APC) prior to publication, thereby putting it in the same category as advertising or vanity publishing. Papers submitted to such journals are supposedly subjected to refereeing, but acceptance of an APC creates an irreconcilable conflict of interest for the publisher [2], the main practical effect of which, from the viewpoint of the reader, is the publishing of papers of dubious quality.

Reliable facts, such as the melting temperature of tungsten, count as unconditional knowledge. Such knowledge does not depend on the particular person who carried out the measurement, or even on human agency (although the actual manner of carrying out the experimental determination depends on both). The criterion of reliability is above all repeatability [3]. These facts are discovered in the same way that Mungo Park discovered the upper reaches of the River Niger.

There is also what is called conditional knowledge: inductive inferences drawn, possibly quite indirectly, from those facts by creative leaps of human imagination. These are (human) inventions rather than discoveries. Newton's laws (and most laws and theories) fall into this category. They typically subsume vast quantities of unconditional knowledge into highly compact form. Tables and tables of data giving the positions of the planets in our solar system can be summarized in a few lines of computer code—and the same lines can be used to calculate planetary positions for centuries in the past and to predict them for centuries into the future. Despite the power of this procedure, some people have called it superfluous—the most famous proponent probably being William of Ockham, whose proverbial razor was designed to cut off all inductive inferences, all theories, not only overly elaborate ones. We must, however, recognize that inductive inference is the heart and soul of science, and John Stuart Mill and others seem to have been close to the truth when they asserted that only inductive, not deductive, knowledge is a real addition to the human store.

There is nothing arcane about the actual description of the theories (although the process by which they are first arrived at—what we might call the flash of genius—remains a mystery). In the course of an investigation in physics and its relatives, the facts (the primary observations) must first be mapped onto numbers—integer, real, complex or whatever. This mapping is sometimes called modeling. Newton's model of the solar system, for example, maps a planet with all its multifarious characteristics onto a single number representing a point mass. Then, the publicly accepted rules of mathematics, painstakingly established by generations of mathematicians working out proofs, are used to manipulate those numbers and facilitate the perception of new relations between them.
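The claim that tables of planetary positions collapse into a few lines of code can be made concrete. The sketch below solves Kepler's equation by Newton's method; the orbital elements are approximate values for Mars, chosen purely for illustration, and the model ignores perturbations by other planets. Negative times give positions in the past, positive times predictions for the future, from the same few lines.

```python
import math

def kepler_position(t_days, a=1.524, e=0.0934, T=686.98):
    """Heliocentric distance (AU) and true anomaly (rad) of a planet
    t_days after perihelion passage, for a Keplerian two-body orbit.
    Defaults are approximate elements for Mars (semi-major axis a,
    eccentricity e, orbital period T in days)."""
    M = 2 * math.pi * (t_days / T)              # mean anomaly
    E = M                                       # initial guess
    for _ in range(20):                         # Newton's method on M = E - e*sin(E)
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                        math.sqrt(1 - e) * math.cos(E / 2))  # true anomaly
    r = a * (1 - e * math.cos(E))               # distance from the Sun
    return r, nu
```

The same function serves for any epoch, which is precisely the compression of "tables and tables of data" referred to above.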

What motivates this growth of knowledge? Is it innate curiosity, as much a part of human nature as growth in physical stature and mental capabilities? Or is it driven by necessity, to solve problems of daily survival? According to the former explanation, curiosity led to discoveries, which in turn led to practical shortcuts (i.e., technology)—for the production of food in the very early era of human existence and later on for producing the artificial objects that came to be seen as indispensable adjuncts to civilization. Many of these practical shortcuts would involve tools, and later machines, hence the accumulation of possessions, in other words wealth, via saving, which is possible since production exceeds consumption thanks to machines [4]. As will be discussed in Part 3, the whole “machinery” of this process constitutes an indivisible system incorporating also libraries and, nowadays, the Internet.

This pattern (Figure 2.1) was later promoted by Francis Bacon in his book The Advancement of Learning (1605) with such powerful effect that it thereafter became part of the policy of many governments, remaining so to the present. Bacon was struck by the tremendous political power of Spain in his day, which seemed greatly to preponderate over that of Britain. He ascribed it to technology, which in his view directly resulted from scientific discoveries, which were in turn deliberately fostered (as he believed) by the Spanish government. Nearer our own time, in the Germany of Kaiser Wilhelm, a similar policy was followed (as exemplified most concretely by the foundation of the Kaiser Wilhelm institutes). In Bacon's mind, as in that of Kaiser Wilhelm, the apotheosis of technology was military supremacy, perceived as the key to political hegemony, the political equivalent of commercial monopoly. Today, the British government, with its apparatus of research councils funding science that must be tied to definite applications with identifiable beneficiaries, is aiming at commercial rather than political advantage for the nation, but the basic idea is the same [5]. Similar policies can be found in the USA, Japan and elsewhere. This model is also known as “linear”; because of the link to government, yet another name is the “decree-driven” model.

Image
Figure 2.1 Sketch of the relationship between science and technology according to the curiosity or decree-driven (“linear”) model. Hence, technology can be considered as applied science. The dashed line indicates the process whereby one state, envious of another's wealth, may seek to accelerate the process of discovery.

Bacon's work was published 17 years after the failure of the Spanish Armada, which supposedly triggered his thoughts on the matter. Interestingly, during that interval, although the threat was almost palpable, the feared Counter-Armada never actually materialized. This singular circumstance does not seem to have deflected Bacon from his vision, any more than the failure of Germany's adherence to this so-called “linear model” (science leading directly to technology) to deliver victory in the First World War deflected other governments from subsequently adhering to it. Incidentally, these are just two of the more striking pieces of evidence against that model, which ever since its inception has failed to gather solid empirical support.

The alternative model, which appears to be in much better concord with known facts [6], is that technology, born directly out of the necessity of survival, enables wealth and leisure, hence saving and accumulation of capital, by enhancing productivity, and a small part of this leisure is used for contemplation and scientific work (Figure 2.2), which might be described as devising ever more sophisticated instruments to discover ever more abstruse facts, modeling those facts, and inferring theories [7]. The motivation for this work seems, typically, to be a mixture of curiosity per se and the desire to enhance man's knowledge of his place in the universe. The latter, being akin to philosophy, is sometimes called natural philosophy, a name still used to describe the science faculties in some universities. Those theories might then be used to enhance technology, probably by others than those who invented the theories, enabling further gains in productivity, and hence yet more leisure, and more science. Note that in this model, the basic step of creative ingenuity occurs at the level of technology; that is, the practical man confronted with a problem (or simply filled with the desire to minimize effort) hits upon a solution in a flash of inspiration.

Image
Figure 2.2 Sketch of the relationship between science and technology according to the “alternative” model. Technology-enabled increases in productivity allowed Man to spend less than all his waking hours on the sheer necessities of survival. Some part of each day, or month, could be spent in leisure, and while part of this leisure time would be used simply to recuperate from the strains of labor (and should therefore perhaps be counted as part of production), part was used in contemplation of the world and of events, and sometimes this contemplation would lead to inferential leaps of understanding, adding mightily to knowledge. New knowledge leads to further practical shortcuts, more leisure, and so forth, therefore the development is to some degree autocatalytic (sometimes stated as “knowledge begets knowledge”). The dashed lines indicate positive feedback. According to this view, science can be considered as “applied technology”.

A further refinement to this alternative model is the realization that the primary driver for technological innovation is often not linked directly to survival, but is esthetic. Cyril S. Smith has pointed out, adducing a great deal of evidence, that in the development of civilization decorative ceramic figurines preceded cooking utensils, metal jewellery preceded weapons, and so forth [8].

Both models adopt the premise that technology leads to wealth. This would be true even without overproduction (i.e., production in excess of immediate requirements), because most technology involves making tools (i.e., capital equipment) that have a relatively permanent existence. Wealth constitutes a survival buffer. Overproduction in a period of plenty allows life to continue in a period of famine, provided that some of the surplus has been converted into capital goods. It also allows an activity to be kick-started, rather like the birth of Venus according to legend [9].

The corollary is that science cannot exist without wealth. The Industrial Revolution was in full, impressive swing by the time Carnot, Joule and others made their contributions to the science of thermodynamics. James Watt had no need of thermodynamics to invent his steam engine, although the formal theoretical edifice built up by the scientists later enabled many subsequent improvements to be made to the engine [10]. Similarly, electricity was already in wide industrial use by the time the electron was discovered in the Cavendish Laboratory of Cambridge University.

Of course, in society benefits and risks are spread out among the population. Britain accumulated wealth through many diverse industries (Joule's family were brewers, for example). Nowadays, science is almost entirely carried out by a professional corps of scientists, who in the sense of the alternative model (Figure 2.2) spend all their time in leisure; the wealth of society as a whole is sufficient to enable not only this corps to exist, but also to enable it to be appropriately educated—for unlike the creative leaps of imagination leading to practical inventions, the discovery of abstruse facts and the theories inferred from them requires many years of hard study and specialized training, as well as the freedom to think.

2.1 Nanotechnology Is Different

We can, then, reliably assert that all the technological revolutions that have had such profound effects on our civilization (steam engines, electricity, radio, and so forth) began with the technology, and the science (made possible by the luxury of leisure that the technologies afforded) followed later—until the early decades of the 20th century. The discoveries of radioactivity and of atomic (nuclear) fission were purely scientific, and their technological offshoot, in the form of the atomic pile (the first one was constructed around 1942), was devised by Enrico Fermi, a leading nuclear theoretician, and his colleagues working on the Manhattan Project. The rest—nuclear bombs and large-scale electricity-generating plants—is, as they say, history. This “new model”, illustrated in Figure 2.3, represents a radical departure from the previous situation. In the light of what we have said above, it raises the question “how is the science paid for?”, since it is not linked to any existing wealth-generating activity in the field, for there is none. The answer appears to be twofold. Firstly, wealth has been steadily accumulating on Earth since the dawn of civilization, and beyond a certain point there is simply enough wealth around to allow one to engage in luxuries such as the scientific exploration of wholly new phenomena without any great concern about affordability. We conjecture that this point was reached at some time early in the 20th century. Secondly, governments acquired (in Britain, largely due to the need to pay for participation in the First World War) an unprecedented ability to gather large quantities of money from the general public through taxation. Since governments are mostly convinced of the validity of the “linear model”, science thereupon enjoyed a disproportionately higher share of “leisure wealth” than citizens had shown themselves willing to grant freely in the preceding century.
The earlier years of the 20th century also saw the founding of a major state (the USSR) organized along novel lines. As far as science was concerned, it probably represented the apotheosis of the linear model (qua “scientific socialism”). Scientific research was seen as a key element in building up technical capability to match that of the Western world, especially the USA. On the whole, the policy was vindicated by a long series of remarkable achievements [11], especially and significantly in the development of nuclear weapons, which ensured that the USSR acquired superpower status, effectively rivaling the USA even though its industrial base was far smaller [12].

Image
Figure 2.3 The “new model” relating wealth, science and technology, applicable to the nuclear industry and nanotechnology. Note the uncertainty regarding the contribution of these new industries to wealth. There are probably at least as many opponents of the nuclear industry (who would argue that it has led to overall impoverishment; e.g., due to the radioactive waste disposal problem) as supporters. In this respect the potential of nanotechnology is as yet unproven.

We propose that this “new model” applies to nanotechnology. Several reasons can be adduced in support. One is the invisibility of nanotechnology. Since atoms can only be visualized and manipulated using sophisticated nanoscopes (e.g., scanning probe ultramicroscopes), and hence do not form part of our everyday experience, they are not likely to form part of any obvious solution to a problem [13]. Another reason is the very large worldwide level of expenditure, and corresponding activity, in the field, even though there is as yet no real nanotechnology industry [14].

2.2 The Evolution of Technology

Human memory, especially “living memory”, is strongly biased towards linearity. By far the most common mode of extrapolation into the future is a linear one. Unfortunately for this apparent manifestation of common sense, examination of technology developments over periods longer than the duration of a generation shows that linearity is a quite erroneous perception. Nowadays, there should be little excuse for persistence in holding the linear viewpoint, since most of us have heard about Moore's law, which states that the number of components (transistors, etc.) on an integrated circuit chip doubles approximately every two years. This remarkably prescient statement (more an industry prediction than a law) has now held for several decades. But, as Ray Kurzweil has shown, exponential development applies to almost every technology [15]—until, that is, some kind of saturation or fatigue sets in. Of course, any exponential law looks linear provided one examines a short enough interval; that is probably why the linear fallacy persists. Furthermore, at the beginning (of a new technology) an exponential function increases very slowly—and we are at the beginning of nanotechnology. Progress—especially in atom-by-atom assembly—is painfully slow at present. On the other hand, progress in information-processing hardware, which nowadays counts as indirect nanotechnology (cf. Section 1.4), is there for all to see. The ENIAC computer (circa 1947) contained of the order of 10⁴ electronic components and weighed about 30 t. A modern high-performance computer capable of 5–10 TFLOPS [16] occupies a similar volume.
Formerly, for carrying out large quantities of simple additions, subtractions, multiplications and divisions, as required in statistics, for example, one might have used the Friden electromechanical calculator that cost several thousand dollars and weighed several tens of kilograms; the same performance can nowadays be achieved with a pocket electronic calculator costing one dollar and weighing a few tens of grams [17].
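The gulf between linear intuition and exponential reality is easy to quantify. In the sketch below, the starting figure of 2,300 transistors is the commonly quoted count for the Intel 4004 (1971); the "naive" forecast simply projects the first decade's absolute gain forward, which is how linear extrapolation goes wrong.

```python
def moore_components(n0, years, doubling_time=2.0):
    """Component count after `years`, if the count doubles every `doubling_time` years."""
    return n0 * 2 ** (years / doubling_time)

n0 = 2_300                                   # Intel 4004 transistor count (1971)
after_10 = moore_components(n0, 10)          # a 32-fold increase in one decade
after_40 = moore_components(n0, 40)          # about a million-fold in four decades
linear_guess = n0 + 4 * (after_10 - n0)      # naive: four times the first decade's gain
```

Over a short interval the two forecasts are indistinguishable; over forty years the linear one is wrong by more than three orders of magnitude, which is why "living memory" misleads.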

The improvements in the performance (speed, energy consumption, reliability, weight and cost) of computer hardware are remarkable by any standards. If similar improvements could have been achieved with motor-cars, they would nowadays move at a speed of 3000 km/h, use one litre of petrol to travel 100,000 km, last 10,000 years, weigh 10 mg, and cost about 10 dollars! Comparable improvements in a very wide range of industrial sectors might be achievable with nanotechnology.

Kurzweil elaborates the exponential growth model applicable to a single technology [15], placing technology as a whole in the context of the evolution of the universe, in which it occupies one of six epochs:

Epoch 1: Physics and chemistry are dominant; the formation of atomic structures (as the primordial universe, full of photons and plasma, expands and cools).

Epoch 2: Biology emerges; DNA is formed (and with it, the possibility of replicating and evolving life forms; as far as we know today, this has only occurred on our planet, but there is no reason in principle why it could not occur anywhere offering favorable conditions).

Epoch 3: Brains emerge and evolve; information is stored in neural patterns (both in a hard-wired sense and in the soft sense of neural activity; living systems thereby enhance their short-term survivability through adaptability, and hence the possibility of K-selection [18]).

Epoch 4: Technology emerges and evolves; information is stored in artificial hardware and software designs.

Epoch 5: The merger of technology and human intelligence; the methods of biology, including human intelligence, are integrated into the exponentially expanding human technology base. This depends on technology first mastering those methods.

Epoch 6: The awakening of the universe; patterns of matter and energy become saturated with intelligent processes and knowledge; vastly expanded human intelligence, predominantly nonbiological, spreads throughout the universe.

The beginning of Epoch 6 is what Kurzweil calls the singularity, akin to a percolation phase transition.

The epochs leading to the singularity somewhat resemble a concept of Kardashev and Dyson, namely that energy supply is a crucial factor in determining civilization level [19]. Hence, civilization can be classified into the following types:

Type I: Civilization has mastered all forms of terrestrial energy and can effectively exploit all planetary resources.

Type II: Civilization has mastered stellar energy (partly motivated by the exhaustion of terrestrial sources)—that of its sun and, presumably along the way, has succeeded in exploiting anything useful from any other planets in its solar system—and has begun the exploration and possible colonization of nearby star systems.

Type III: Civilization has exhausted the energy resources of a single star and harnesses the energy of collections of star systems throughout the galaxy: as the range of exploitation increases, this type becomes a galactic civilization.

This evolution can be continued; Type IV will exploit, essentially, the entire galaxy and have mastered the harnessing of energy from quasars, pulsars, radio jets etc.; Type V will extend to galaxy clusters.
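The types above can be placed on a continuous scale. Carl Sagan's later interpolating formula, K = (log₁₀ P − 6)/10, assigns a fractional rating to any power consumption P in watts; the threshold values of roughly 10¹⁶, 10²⁶ and 10³⁶ W for Types I, II and III are the usually quoted ones.

```python
import math

def kardashev_rating(power_watts):
    """Sagan's interpolation of the Kardashev scale: K = (log10 P - 6) / 10,
    so that ~1e16 W -> Type I, ~1e26 W -> Type II, ~1e36 W -> Type III."""
    return (math.log10(power_watts) - 6.0) / 10.0

k_now = kardashev_rating(2e13)   # humanity's ~2e13 W of primary power
```

On this scale present-day civilization rates roughly Type 0.7, i.e., it has not yet reached Type I.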

2.3 The Nature of Wealth and Value

Wealth is defined as accumulated value. A wealthy country is one possessing impressive infrastructure—including hospitals, a postal service, railways, and huge and sophisticated factories for producing goods ministering to the health and comforts of the inhabitants of the country. It also possesses an educated population, having not only universal literacy and numeracy, but also a general interest in intellectual pursuits (as might be exemplified by a lively publishing industry, active theaters and concert halls, cafés scientifiques [20], and the like) and a significant section of the population actively engaged in advancing knowledge; libraries, universities and research institutes also belong to this picture. Thus, wealth has both a tangible, material aspect and an intangible, spiritual aspect.

This capital—material and spiritual—is, as stated, accumulated value. Therefore, we could replace “wealth” in Figures 2.1–2.3 by “value” (part of which is refined and accumulated in a store). We should, therefore, inquire what value is.

Past political economists (such as John Stuart Mill and Adam Smith) have distinguished between value in use and value in exchange (using money). “Value in use” is synonymous with usefulness or utility, perhaps the most fundamental concept in economics, and defined by Mill as the capacity to satisfy a desire or serve a purpose. It is actually superfluous to distinguish between value in use and value in exchange, because the latter, equivalent to the monetary value of a good (i.e., its price) is simply a quantitative measure of its value in use. A motivation for making a distinction might have been the “obvious” discrepancies, in some cases, between price and perceived value. But as soon as it is realized that we are only talking about averages, and that the distributions might be very broad, the need for the distinction vanishes. For some individual, a good might seem cheap—to him it is undervalued and a bargain—and for another the converse will be the case. Indeed it might be hard to find someone who values something at exactly the price at which it is offered for sale in the market. A difficulty arises in connection with human life; because there are some ethical grounds for placing infinite value upon it, it might be hard to accommodate in sums. But the insurance industry has solved the problem adequately for the purposes of political economy—it can be equated to anticipated total earnings over a lifetime [21]. A further difficulty arises regarding the possible additional stipulation that for something to have value, there must be some difficulty in its attainment. But here too the difficulty appears to be artificial. Gravity would be more valuable on the Moon than on Earth, where it has, apparently, zero value because it is omnipresent. But perhaps it has zero net value: for aviation it is a great nuisance but for motoring it is essential. 
Air is easily attainable but clean air is a different matter, and even in antiquity whole cities were abandoned because of insufferably bad air. Confusion may arise here because the mode of paying for air is different from that customary for commodities such as copper or sugar. Intrinsically, however, there is nothing particularly arcane about value, which heuristically at any rate we can equate with price, and there is no need for Pareto's ingenious and more general concept of ophelimity. It should be emphasized that value is always shifting. Certain components of a particular type of aircraft might be very expensive to manufacture, but once that aircraft is no longer in service anywhere in the world, stocks of spare parts become valueless. Mill erred when he tried to determine value relative to some hypothetical fixed standard. The value of almost everything is conditional on the presence of other things, and organized in an exceedingly complicated web of interrelationships.

If utility is considered as the most fundamental concept in economics, the relationship between supply and demand is considered to be the most fundamental law. According to this law, the supply of a good will increase if its price increases, and demand will increase if its price falls, the actual price corresponding to that level of supply exactly matching that of demand—considered to represent a kind of equilibrium. Demand for necessities is stated to be inelastic, because it diminishes rather slightly with increasing price, whereas demand for luxuries is called elastic, because it falls steeply as price increases. However, this set of relationships has little predictive power. Most suppliers will fix the price of their wares based on a knowledge of the past, and adjustments can be and are constantly being made on the basis of feedback (numbers of units sold) [22]. Because there is a finite supply of many goods (since we live on a finite planet), their supply cannot increase with increasing price indefinitely; on the other hand, the supply of services could in principle be increased indefinitely pari passu with demand, which is presumably one of the reasons for the popularity of the “service” rather than the manufacturing economy [23].
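The equilibrium and the elastic/inelastic distinction just described can be sketched with linear supply and demand schedules. All coefficients below are arbitrary choices for illustration, not empirical values: supply is Qs = s·p and demand Qd = d0 − d·p, with a steep demand slope d standing for a luxury and a shallow one for a necessity.

```python
def equilibrium_price(s, d0, d):
    """Price and quantity at which linear supply Qs = s*p meets demand Qd = d0 - d*p."""
    p = d0 / (s + d)
    return p, s * p

def demand_drop(d0, d, p, rise=0.10):
    """Fractional fall in quantity demanded when price rises by `rise` (e.g. 10%)."""
    q0 = d0 - d * p
    q1 = d0 - d * p * (1 + rise)
    return (q0 - q1) / q0

p_lux, q_lux = equilibrium_price(s=2.0, d0=100.0, d=8.0)   # elastic demand (luxury)
p_nec, q_nec = equilibrium_price(s=2.0, d0=100.0, d=0.5)   # inelastic demand (necessity)
```

With these numbers a 10% price rise cuts demand for the "luxury" by 40% but for the "necessity" by only 2.5%, which is the elasticity distinction in miniature; the text's caveat about predictive power stands, since real suppliers set prices by feedback rather than by solving such equations.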

There have been numerous attempts to elaborate the simple law of supply and demand. One interesting decomposition of demand is that of Noriaki Kano into three components: basic, performance, and excitement. For example, the basic needs of the prospective buyer of a motor-car are that it is safe, will self-start reliably, and so forth. Even if the supplier fulfills them to the highest possible degree, the customer will merely be satisfied in a rather neutral fashion, but any deficiency will evoke disappointment. In other words, these attributes are essentially privative in nature. Performance (e.g., fuel consumption per unit distance traveled) typically increases continuously with technological development; customer satisfaction will be neutral if performance is at the level of the industry average; superior performance will evoke positive satisfaction. Finally, if no special effort has been made to address excitement needs (which are not always explicitly expressed, and may indeed only be felt subconsciously), customer satisfaction will be at worst neutral, but highly positive if the needs are addressed. These three components clearly translate directly into components of value.
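The three Kano components can be captured schematically as three response curves mapping fulfillment to satisfaction. The functional forms below are illustrative choices of ours, not Kano's own; what matters is their qualitative shape, which mirrors the description above.

```python
def kano_satisfaction(kind, f):
    """Schematic Kano curves: fulfillment f in [0, 1] -> satisfaction in [-1, 1].
    The particular functions are illustrative, chosen only for their shapes."""
    if kind == "basic":          # deficiency disappoints; full fulfillment is merely neutral
        return -(1.0 - f) ** 2
    if kind == "performance":    # neutral at the industry average (f = 0.5), linear otherwise
        return 2.0 * f - 1.0
    if kind == "excitement":     # at worst neutral when ignored, strongly positive when addressed
        return f ** 2
    raise ValueError(f"unknown Kano component: {kind}")
```

Note the asymmetry: the basic curve never rises above zero and the excitement curve never falls below it, which is exactly the privative/delighting contrast drawn in the text.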

2.4 The Social Value of Science

Francis Bacon argued in his Advancement of Learning (1605) that scientific discovery should be driven not just by the quest for intellectual enlightenment, but also by the “relief of man's estate”. This view is, naturally enough, closely associated with Bacon's “linear” model of wealth creation (Figure 2.1), and forms the basis of the notion (nowadays typically promulgated by state funders of scientific research—indeed for them it settles the contentious issue of whether there should be state funding at all) that actively feeding into technological development and wealth creation is an official duty incumbent upon those scientists in receipt of state funds for their work. According to the “alternative model” (Figure 2.2), on the other hand, a scientist voluntarily devotes a part of his or her leisure to research, and there is no especial duty to explicitly promote wealth creation. However, the modern situation of a professional corps of scientists who are in effect paid by society to devote their whole time to leisure (which they in turn typically devote wholly to research) would appear unarguably to give society the right to demand a specific contribution to the creation of the wealth on which, ultimately, the continuation of this arrangement depends. This obligation is most efficiently discharged by insisting on dissemination—an inseparable part of the work of the scientist. Others, with special abilities for wealth creation, can then take up those ideas [24].

When seeking to analyze the present situation and to present a reasonable recommendation, the shifting perspectives of the last few hundred years must be duly taken into account. The Industrial Revolution and the immense wealth it generated managed very well without (or with very little) science feeding into it, but during the last hundred years or so, science has become increasingly associated with obtaining mastery over nature. A survey of the papers published in leading scientific journals indeed shows that a majority are directly concerned with precisely that. However, this work was in general undertaken in a piecemeal fashion. For example, H.E. Hurst's seminal work on the analysis of irregular time series was apparently undertaken on his own initiative while he was engaged as Scientific Consultant to the Ministry of Public Works in Egypt, when he was confronted with the need to make useful estimates of the required capacities of the dams proposed for construction on the Nile. In some cases scientific results were made use of with excellent results; in others with disastrous ones [25]; there are many other examples of both excellent results and disasters obtained without any scientific backing. Hence, historical evidence does not allow us to conclude that a scientific research backing guarantees success in a technological endeavor, but rather shows that many other factors, most prominently political ones, intervene. One very positive aspect is that this decoupling of science from technology at least prevented the growth of distortions in the unfettered, disinterested pursuit of objective truth, which almost inevitably becomes a casualty if wealth instead is pursued.

But when it comes to the “new model” (Figure 2.3), we have technology wholly dependent upon science. In other words, without the science there would be no technology and, as already stated, nanotechnology seems to fall into this category. Further implications will be explored in Chapters 3 and 12. It should be noted that the “new model” does not invalidate the “alternative model”; the two can exist in parallel. With the former, there is no “shop floor” in the sense that there was during the Industrial Revolution, in which the shop floor was the main source of innovation. Being invisible to the eye, the nanoworld engages a different kind of innovator, a more learnèd one perchance, but whereas the traditional innovator usually had a sound commercial sense, the learnèd one typically does not.

We cannot usefully turn to historical evidence on this point because too little has accumulated. It follows that any extrapolation into the future is likely to be highly speculative. Nevertheless, we cannot rule out the advent of a new era of highly effective science-based handling of affairs that would hopefully yield excellent results. The economies and, especially, the banking sectors of most countries of the world are now rather fragile; the response in many circles is conservative retrenchment, but this is just the wrong kind of response. The whole system of the planet (ecological, social, industrial, financial, and so forth) has been driven so hard to such extremes that mankind can scarcely afford to make more mistakes, in the sense that there is practically no buffering capacity left. Hence, in a very real sense survival will depend on getting things right. The delicacy of judgment required of the decision-making process is further exacerbated by globalization, thanks to which we now in effect have only one “experiment” under way, and failure means the collapse of everything, not just a local perturbation. Given that it is very doubtful that humankind will ever be able to fine-tune the running of its affairs such that things are always right, a better strategy is to strive towards decentralized diversity, adopting multiple trial solutions that can run in parallel, without the necessity of ending up with a unique outcome.

References

[1] In passing, it may be noted that this realm is the only one for which consequential source quality appraisal can be carried out. On this point see W. Wirth, The end of the scientific manuscript? J. Biol. Phys. Chem. 2002;2:67–71.

[2] J. Beall, Scholarly open-access publishing and the problem of predatory publishers, J. Biol. Phys. Chem. 2014;14:22–24.

[3] An important aspect of ensuring the reliability of the scientific literature is the peer review to which reports submitted to reputable scientific journals are subjected. Either the editor himself or a specialist expert to whom the task is entrusted ad hoc carefully reads the typescript submitted to the journal and points out internal inconsistencies, inadequate descriptions of procedures, erroneous mathematical derivations, relevant previous work overlooked by the authors, and so forth. The system cannot be said to be perfect. The main weaknesses are: the obvious fact that the reviewer cannot in practice himself or herself actually check the experiments by running them again in his or her laboratory, or verify every step of a lengthy theoretical work, which would take as long as doing the work in the first place; the temptation to undervalue work that contradicts the reviewer's own results; and the pressures imposed by publishers when they are commercial organizations, in which case an additional publishability criterion is whether the paper will sell well, which tends to encourage hyperbole rather than a humbler, more sober and honest style of investigation. Despite these flaws, it would be difficult to overestimate the importance of the tremendous (and honorary) work carried out by reviewers. This elaborate refining process creates a gulf between the quality of work finally published in a printed journal and that of the web-based preprint archives, online journals and other websites. Conference proceedings occupy an intermediate position: some papers are reviewed before being accepted for presentation at a conference, but naturally the criteria are different, because the primary purpose of a conference is to report work in progress rather than a completed investigation, and the discussions of papers represent a major contribution to their value, yet might not even be reported in the proceedings.
As regards the internal reports of companies and government research institutes, although they would not necessarily be independently and objectively peer-reviewed in the way that a submission to a journal is, those reports dealing with something of practical value to the organization producing them are unlikely to be a repository of uncertain information, and this provides a kind of internal validation.

[4] In order to amortize the capital invested in the overproducing machines, there is a great temptation to promote overconsumption to match the overproduction and this is generally achieved by advertising and other artifices. One result of this is economic growth, but another is environmental and other kinds of degradation, which introduces a moral element into economics.

[5] This policy is given explicit expression in the Higher Education and Research Bill, which received Royal Assent on 27 April 2017 and creates a new body, UK Research and Innovation (UKRI). See the discussion about some of the implications in G.R. Evans, Funding science: a new law, new arrangements, J. Biol. Phys. Chem. 2017;17:33–37.

[6] Not least the fact that technology has existed for many millennia, whereas science—in its modern sense, as used in all the diagrams in this chapter, for example—only began in the 12th century CE.

[7] This model is quite similar to one proposed by Adam Smith in his Wealth of Nations (1776), Book 5, Ch. 1: industrial money plus old technology enabled new technology to be financed, from which both wealth and academic science sprang.

[8] C.S. Smith, A Search for Structure. Cambridge, MA: MIT Press; 1981.

[9] The Spanish Armada was essentially financed by the vast accumulation of gold and other precious metals from the newly won South American colonies, rather than wealth laboriously accumulated through the working of the linear model, as Bacon imagined.

[10] Maxwell's paper providing a theoretical (mathematical) foundation for the construction of governors for steam engines, considered to be a landmark (J.C. Maxwell, On governors, Proc. R. Soc. 1867–1868;16:270–283), appeared almost a century after Watt had actually constructed a working governor.

[11] Albeit not in biology, which was marred by dogmatic adherence to a demonstrably erroneous set of ideas. Hence, since the biologists were effectively eliminated, physicists started to occupy themselves with biology, which led to the unique and seminal contributions of L.A. Blumenfeld, D.S. Chernavsky, M.V. Volkenstein and others.

[12] See D. Holloway, Stalin and the Bomb. New Haven: Yale University Press; 1994.

[13] It should, however, be borne in mind that these nanoscopes are themselves products of a highly sophisticated technology, not science (one may also note that the motivation for developing electron microscopes included a desire to characterize the fine structure of materials used in technological applications).

[14] Apart from an appreciable industry, with a global turnover of around $750 million, devoted to building electron and atomic force ultramicroscopes, servicing the needs of those developing nanotechnology.

[15] R. Kurzweil, The Singularity Is Near. New York: Viking Press; 2005.

[16] 1 TFLOPS is 10¹² floating-point operations per second.

[17] Assertion of the "same performance" neglects psychology (human factors), the existence of which provides one of the reasons why design is so important (cf. Chapter 10).

[18] See Section 3.1.

[19] M. Kaku, Visions. Oxford: Oxford University Press; 1998.

[20] In the UK they began in Leeds in 1998, modeled on the café philosophique started in Paris in 1992, and have become an important forum for debating science issues.

[21] The reader may also recall King James V of Scotland's question "How much am I worth?", which was wittily answered by the miller of Middle Hill as "29 pieces of silver—one less than the value of our Saviour" (A. Small, Interesting Roman Antiquities Recently Discovered in Fife. Edinburgh: printed for the author and sold by John Anderson & Co.; 1823).

[22] One of the problems faced by commercial operators is the difficulty of "reading" feedback (let alone responding to it).

[23] Not least since the suppliers of the services mostly themselves require the same services. The apparently infinite possibilities for creating new services recall the construction of fractal curves.

[24] A difficulty is that the style in which most research papers are written makes them unreadable other than by researchers working in the same narrow field.

[25] The Kongwa (Tanganyika) groundnut scheme (1947–1951) of the Overseas Food Corporation serves as an example of a disastrous one. Another is the mass installation of tube wells during the 1970s in Bangladesh, promoted by the United Nations Children's Fund (UNICEF). The wells were supposed to provide a clean, safe alternative to traditional sources such as stagnant ponds. The groundwater they tapped was bacteriologically clean but unfortunately heavily contaminated with arsenic, leading to the largest (affecting 35–70 million people) mass poisoning of a population in history. See A.H. Smith, E.O. Lingas, M. Rahman, Contamination of drinking-water by arsenic in Bangladesh: a public health emergency, Bull. World Health Organ. 2000;78:1093–1103.

Further Reading

[26] J.D. Bernal, The Social Function of Science. London: Routledge; 1939.

[27] T. Kealey, Sex, Science and Profits. London: Heinemann; 2008.

[28] E. Mansfield, Academic research and industrial innovation, Res. Policy 1991;20:1–12.

[29] J. Pethica, T. Kealey, P. Moriarty, J.J. Ramsden, Is public science a public good? Nanotechnol. Percept. 2008;4:93–112.

[30] J.J. Ramsden, P.J. Kervalishvili, eds. Complexity and Security. Amsterdam: IOS Press; 2008, especially Chapters 4 and 21, for examples of how scientific rationality can be used in policy formulation ("evidence-based sociology").
