5
Towards a Post-industrial iconomy

Since 2010, we have lived in the digital era, a new episode in the historical and progressive deployment of computerization, the third industrial revolution which, over the last 40 years or so, has relied on advances in microelectronics, software and networks, and has given rise to innovations that affect society as a whole, as is the case with the Internet.

Since the appearance of semiconductors and printed circuits, the electronic chip, mainframe, personal computer and digital communication industries have largely organized and supported all the vital activities of our time. All our economic, social and political functions depend on them. These industries have also enabled a host of services, most of which are now accessible from anywhere thanks to the network of networks.

Artificial intelligence suggests that a computer could replace the human brain: it has given birth to a chimera, a "thing that thinks", which inspires science fiction dreams. Attention has thus been turned away from the possibilities and dangers of the augmented human being, a synthesis of the human brain and the programmable automaton, whose effects are felt in institutions, the productive system, markets and even in the family and personal life of each individual.

In order to shed light on the anthropological significance of computerization, it was necessary to revive the intuition of visionaries of the 1950s such as John von Neumann or J.C.R. Licklider, and to enrich it with the experience acquired since then in order to provide guidance: the iconomy, the schematic model of an efficient computerized society, is put forward as a reference to illuminate the future.

What do we mean by digital? Some think that this word means that "all things are numbers", as Pythagoras said, because in a computer every program and every document (text, image, sound, etc.) is broken down into binary numbers. Others say that digital technology was born when the telephone, portable and mobile, became a computer: they thus associate digital technology with the ubiquity of computing resources. Others believe that digital technology offers everyone the opportunity to contribute to cultural production, which has thus multiplied. Lastly, some believe that the digital era is about the innovation of usage, which in their eyes matters even more than technical innovation. Digital technology is therefore as ambiguous as digital culture, the digital revolution, digital development, digital footprints, digital humanities, digital business, digital democracy, etc. This trivialization, under a single adjective, mixes together quite different phenomena: this undoubtedly facilitates confusion and risks increasing misunderstandings!
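To make the first sense concrete, here is a tiny illustration (in Python; a hypothetical example of ours, not something the chapter provides): the letter "A", the integer 42 and a grey pixel value are all stored as strings of bits.

```python
# Everything a computer stores is ultimately a sequence of binary digits.
print(format(ord("A"), "08b"))   # the character "A"      -> 01000001
print(format(42, "08b"))         # the integer 42          -> 00101010
print(format(0x7F, "08b"))       # a mid-grey pixel value  -> 01111111
```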

Summary of previous times

The periods prior to the digital era (in the previous sense) were those of the mainframe computers of the 1960s; of the information system of the 1970s; of office automation of the 1980s; of networking and the computerization of processes in the 1990s; of dematerialization and the maturation of the web, and finally of the computerization of the mobile telephone, in the 2000s1. In each of these phases, it was believed that computerization had reached its ultimate stage, and innovators were always poorly received. For example, those who designed personal computers in the 1960s and 1970s were considered marginal [LEV 94]. In love with their mainframes, computer scientists initially refused to network them; it was in spite of them that microcomputers and office automation were finally made available to users. Similarly, the telecommunications corporations, in love with the wired telephone, long refused mobile telephones and the Internet! Thus, years elapsed before implementation, and even more years before wide distribution and use: it took, for example, a quarter of a century before the Internet became mainstream around 1995, another five years between the invention of the web and its large-scale exploitation, and even more years before it reached maturity with the intermediation platforms and e-commerce that can be seen today.

By the 1950s, forward-looking thinkers had laid the scientific foundations of computerization, perceived its profound nature and embraced its anthropological consequences [VON 57, LIC 60]. In the decades that followed, subtle minds were concerned with its realization, some focusing on the scientific and technical sides of computerization, others on usages: the big picture was somewhat neglected. The technical, economic, psychological, sociological, philosophical and cultural aspects of computerization are certainly taken care of today, given that everyone can put what they want into digital devices, but it is difficult to discern cause from effect in such a catch-all: while the range of computerization's effects has widened, recognition of their common origin has faded.

Moreover, digital technology has allegedly become detached from computerization, which is now considered outdated! The current era that we call digital is thus subject to the same illusion as before: since we do not fully understand its dynamics, we can hardly perceive the spring that is being stretched back to propel our society towards the next era. As a starting point, this coming era will probably depend on a disruptive and decentralized cluster of firms, a variable number of companies that will constantly renew the digital environment, a creative and destructive set of players that will successively die and be reborn2. When we focus on the combination of the human brain and the robot, some of the negative features characterizing the current digital era may well fade away: the underestimation of necessary skills, the brutality of outsourcing, negligence in the organization of services, carelessness towards data quality and information systems, as well as some of the illusions that we mention below.

Large institutional systems such as politics, health, education or justice will adapt at a slower pace because, like most centralized organizations, they find it quite difficult to move forward and abandon the status quo and their traditions. This is no different from the historical transitions that periodically reshuffle the community of nations. Those who fall behind might lose their diplomatic influence and even become unable to intervene by force of arms.

Real and imagined digital influence

While computing has brought benefits, it has also brought a crisis that causes some distress. It has intoxicated the banks and made them drift towards criminality [VOL 08]; ubiquity has encouraged excessive globalization; automation has disrupted work; competition has become ultra-violent and predation has restored a kind of feudal regime. In order to navigate safely in this ocean of possibilities, one must orient oneself. This is why we have imagined the iconomy model3, a schematic representation of a computerized society that would, by hypothesis, tend to be efficient: this model helps to elucidate the necessary conditions of efficiency. The iconomy is therefore not the next phase succeeding the digital era, nor is it a forecast of the future; it lays down a reference point in time and may relate to past as well as future eras of computerization. We suggest this model to all those who want to contribute, as much as they can, to closing the present crisis rather than remaining passive. Lacking an enlightened view of the iconomy, our era falls prey to the unfortunate myths propagated by imaginative essayists, fascinated by innovation and by the supposedly friendly, tiny start-ups! As friendly as they may be, their kind of sharing economy will certainly not replace large, powerful firms, which will not vanish because the ever-present constraint of scale cannot be forgotten; nor can the myth of artificial intelligence mask the true nature of computerization.

The myth about start-ups

Start-ups occupy a large place in digital discourse. Being both innovative and agile, they would do wonderful things and renew a cooperative economy that would no longer be tied to that of large business. Because they are at the "human scale", start-ups awaken a real fondness among those who hate everything that is institutional, organized and seriously tedious. They follow Michel Serres, dazzled by the "brilliant" virtuosity of children in front of a smartphone or a computer [SER 12]. Start-up founders would tinker in a garage and make ingenious solutions from almost nothing, imagining new and cheap standard products and free open source software that they would assemble while playing around… Real life is something else, as Andy Grove summarized: start-ups are wonderful, but they don't increase employment much. The vital part happens after the mythical moment of the invention in a garage, when it is necessary to scale up from the prototype and proceed to serial production: it is at this very moment that companies acquire their true form. They then have to go deeply into design, get organized to produce cheaply, build factories, hire by the thousands… Scaling up is difficult, but it is vital in order to truly innovate4.

To illustrate the above, here is one example: in 1998, the data center of a start-up was as cute as a baby panda: this start-up was named Google. Let's take a closer look: prior search engines already existed and had demonstrated that indexing the full text of documents was not enough to respond to most queries in a relevant way. Larry Page and Sergey Brin therefore thought of also classifying each document according to the hypertext links pointing towards it: each document is weighted according to the links it receives, each link counting in proportion to the weight of the page it comes from5. The definition is thus recursive, and Google's PageRank (i.e. the ranking of accessible pages) is obtained by extracting an eigenvector from a matrix built from the table of mutual citations. The process has been improved over time to counter manipulations. The creators of Google were certainly not handymen, but authentic scientists (and also entrepreneurs) who thought deeply about the underlying algorithms, the technical architecture of the platform and its scalability6 to handle future growth in the number of documents accessible on the web and the queries posted by users. According to the myth, youngsters are able to reinvent the world in a garage. That happened, it's true; but those who really went on to be successful were, albeit young, high-level entrepreneurs: do not therefore imagine that anyone could, with a bit of luck, have done as much as Bill Gates and Paul Allen did in 1975 (Microsoft), as Steve Jobs and Steve Wozniak did in 1976 (Apple), as Larry Page and Sergey Brin did in 1998 (Google), and so on.
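In concrete terms, the recursion described above amounts to computing the dominant eigenvector of the link matrix, which a simple power iteration can approximate. The following sketch is a toy illustration of the principle (in Python with NumPy; the function and the three-page example are ours, not Google's production system, which handles billions of pages and many refinements):

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9):
    """Toy power-iteration PageRank: adjacency[i][j] = 1 when page i links to page j."""
    links = np.asarray(adjacency, dtype=float)
    n = links.shape[0]
    links[links.sum(axis=1) == 0] = 1.0                      # dangling pages: treat as linking everywhere
    transition = links / links.sum(axis=1, keepdims=True)    # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)                               # start from a uniform ranking
    while True:
        new_rank = (1 - damping) / n + damping * rank @ transition
        if np.abs(new_rank - rank).sum() < tol:              # stop once the eigenvector has converged
            return new_rank
        rank = new_rank

# Three pages: pages 1 and 2 cite page 0, page 0 cites page 1 -> page 0 gets the highest rank.
print(pagerank([[0, 1, 0], [1, 0, 0], [1, 0, 0]]))
```

The rank of a page thus depends on the rank of the pages that cite it: the recursion converges to the eigenvector mentioned above.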

A new world is quickly understood by the youngest people, while most mature people may remain prisoners of habits acquired in the old world7. The industrial revolution brought about by computerization has thus opened a field for competent young people, whereas the previous economy only gave credit and legitimacy to middle-aged men. However, it is cruel to make "young people" believe that they could launch a start-up without any serious computer or management skills: they will inevitably fail, because youth is not enough for success! Although they lacked the financial capital that they were fortunately able to find afterwards8, Brin and Page were experts from the very start of their adventure. The effective implementation of Google required a very significant investment, because scaling up their business transformed it into a very large company. Today's data centers are a network of factories whose operation required an entire industry to be invented: Google now employs 60,000 people9. A mature start-up is thus far from the "nice" image of a small company that attracts and federates more or less voluntary contributions; it is neither the end of work nor the end of capitalism. The sentimentality that surrounds the start-up world is an obstacle, and probably also an alibi, on the path of the reflection needed to design and implement new forms of business.

Fondness towards start-ups and other SMEs must not mask reality: the founders who succeed are those who know how to combine advanced technical expertise with an intuition of future needs and who have, beneath the apparent do-it-yourself nature of their first prototype, the ability to adopt from the outset the scalable architecture that will meet users' needs in the long term. In the long run, successful start-ups turn into large companies. Although relations between individuals will not necessarily emulate the forms adopted by the mechanized industry of yesteryear, they will be organized so that the strategic function and the command function are exercised effectively.

Illusions about the “Commons”10

The future economy is therefore not one where independent contributors would be paid on an ad hoc basis according to the value of their contribution, or would live on a basic income: just as yesterday, the future belongs to organized institutions immersed in an external market: the firms! However, a frequent illusion holds that institutions, businesses and commercial relationships are doomed to disappear and give way to spontaneous collaboration between individuals, to a generalized peer-to-peer network based on voluntary work and free exchange. For example:

  • – electrical energy would be produced in a decentralized way by solar panels that are coupled with batteries and made available through a capillary network;
  • – services would also be provided on a decentralized basis: apartments, automobiles, household equipment and handyman tools would be loaned or rented for a modest price;
  • – 3D printing would allow everyone to produce the things they need at home;
  • – the power of institutions would give way to the free disposal of resources held on community territories, ownership of which the entire population would share;
  • – labor, like capital and, with it, capitalism (in other words, companies) would be replaced by the individual production of wealth, and the remuneration of labor would give way to an unconditional basic income11, etc.

This perspective has been defended by the essayist Jeremy Rifkin, who has expounded it with perseverance for some 20 years. It has seduced those who hate everything institutional and organized, as well as some key political leaders: the third industrial revolution would, according to them, be that of the energy transition and not, as we think, that of computerization [RIF 14]. Rifkin's argument is based on observations that we share with him, but from which he draws erroneous conclusions: "Economists have long understood that the efficient economy is one in which consumers only pay the marginal cost of the goods they buy. But if they only pay these marginal costs, and if these costs collapse, companies will not be able to recoup their investment or make an adequate margin. As a result, dominant companies will be tempted by monopoly, imposing prices above marginal cost and preventing the invisible hand from guiding the economy towards efficiency. This is the contradiction underlying capitalist practice and theory." Rifkin ignores the fact that marginal cost pricing is only efficient when returns to scale are decreasing. When the marginal cost is higher than the average cost, pricing at the marginal cost maximizes profit: if the market is perfectly competitive, such pricing does indeed lead to an efficient situation. However, Rifkin forgets that this result no longer holds when returns to scale are increasing and, in particular, when the marginal cost is close to zero! In the latter case, prices must mainly cover the fixed costs of production, which means that the average price must be at least equal to the average cost: in practice, this means selling a new product at a high price, then gradually lowering the price as production increases. This is exactly what happens in a monopolistically competitive market. This logical error ruins Rifkin's conclusion about the "end of capitalism". It would, by the way, be quite strange for capitalism to disappear in an economy based on computers, chips and networks, whose production basically relies on fixed costs and high initial investments: such an industry is necessarily a hyper-capitalist one12! Rifkin met an energetic opponent in Eric Raymond, who has real practical experience of the collaborative economy and has drawn the following lessons from it: "The concept of commons is not a magic wand that can remove issues of motivation and people, power relations and the risk of minority oppression. In practice, managing a commons requires more, rather than less, of a conscience when it comes to respecting each person and their values. If you fail to maximize long-term utility for each person and for the whole community they belong to, your commons will explode. The utopian talk about commons repulses me."13
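The logical point can be made explicit with an elementary cost function (our notation, a deliberately minimal sketch rather than anything Rifkin writes). With a fixed cost $F > 0$ and a constant marginal cost $c$:

\[
C(q) = F + c\,q, \qquad \text{marginal cost} = c, \qquad
\text{average cost} = \frac{C(q)}{q} = \frac{F}{q} + c > c .
\]

Selling every unit at the marginal cost $c$ yields the profit $cq - C(q) = -F < 0$: the fixed cost is never recovered, which is why, under increasing returns to scale, prices must at least track the average cost.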

Collaborative services such as Airbnb, Drivy, Frizbiz and Uber all meet the requirements of a trade organization14: they are forced to be professional, because it is very difficult to manage effectively an intermediation platform placed at the center of a network of partners. Contrary to what Rifkin thinks and, with him, all the seductive essayists who share his aversion towards capitalism, firms will not disappear with the digital era; on the contrary, the future belongs to them, although in a context quite different from the one to which we have been accustomed until now!

Can an intelligence be artificial?

Computer "intelligence" fascinates some and frightens others. According to Stephen Hawking: "The advent of full artificial intelligence could spell the end of the human race. […] Humans, limited by the slow pace of biological evolution, couldn't compete, and would be superseded.15" It is argued that "our intelligence and institutions have to fight more and more against a collapse in employment caused by technology" [BRY 11]. These two authors fear that artificial intelligence will supplant human intelligence and that automation will eliminate the employment of human beings. Others say, on the contrary, that relying on computers compensates for human beings who are not reliable enough [TRU 01]. Some claim that, if and when all production is automated, life will be better because humanity will devote its time to leisure. Would an "intelligent" computer then be a "thing that thinks"? Isn't this one of those chimeras that language alone can create, because it is so easy to sew words together regardless of their meaning? Could such an "artificial intelligence" exist?

The magic of automata

We support a different thesis here, affirming that the human brain is the exclusive seat of creative intelligence and that, if mass unemployment seems inevitable, it is not because computers "think", but because our society has not yet assimilated computerization16! A few benchmarks to start: one computer has beaten the world chess champion; another won against the world Go champion; computers automatically fly planes and drive cars more efficiently, it seems, than humans do. However, if these programmable automata appear "intelligent" to us, it is only because they are equipped with cleverly designed programs that take advantage of the processors' power, huge electronic memories and fast, ubiquitous network flows. A program can only be intelligent (in the computer sense) if it has been written by an intelligent programmer (in the common sense). What we are in fact considering here, by some abuse of language, is not the machines' intelligence but rather the programmers', even if we take into account the stacking of programs that translate the source code into 0s and 1s so that an automatic processor can run it.

The program's "intelligence" is hence nothing but a delayed effect of the programmer's own intelligence. The person who programs a word processor, for example, is not the one who uses it to draft a text or display it on the web. While taking advantage of the human programmer's intelligence, the computer user does their own job (think of the text, write it, formulate it) and thus shows an immediate intelligence. Conversely, the deferred intelligence incorporated into a program is analogous to "dead labor", the fixed capital incorporated into a machine, which must be clearly distinguished from the "living labor" of those who use the machine; fixed capital is a stock of past labor, whereas the worker provides a flow of present labor. In the mechanized and computerized economy, the design of a machine therefore results from:

  • 1) creative knowledge, based on technical abilities and anticipation of needs;
  • 2) a methodology that makes it possible to create a prototype;
  • 3) an engineering expertise that organizes line production, reproducing the model in series.

Deferred work, incorporated into a machine, is thus the sum of the design, which manifests a deferred intelligence, and of the subsequent production. Deferred intelligence, embodied in a computer program, is analogous to the deferred work incorporated into a machine; the program, however, incorporates only the design effort, given that duplicating any number of copies of the program requires practically no further work!

Programming a computer and running a program have surprising joint properties. If I write a program in three days that will solve any sudoku in one second, compared to the 20 minutes or so it would take me to do it by hand, this program manifests my intelligence and that of the people who designed the computer, the programming language, the operating system, etc. Does this mean that this program is smarter than I am? Superficially, yes, since it solves sudokus faster than I do; but the reasonable answer is clearly no, because my computer is only able to run the program I designed, nothing more, nothing less!
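To make the example tangible, here is what the core of such a "three-day" program might look like: a minimal backtracking solver (a sketch of ours in Python, not a reference implementation).

```python
def solve(grid):
    """Backtracking sudoku solver: grid is a 9x9 list of lists, 0 marks an empty cell.
    Returns True and fills grid in place when a solution exists."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for digit in range(1, 10):
                    if allowed(grid, r, c, digit):
                        grid[r][c] = digit
                        if solve(grid):
                            return True
                        grid[r][c] = 0       # undo and try the next digit
                return False                 # no digit fits here: backtrack
    return True                              # no empty cell left: solved

def allowed(grid, r, c, digit):
    """A digit is allowed if it appears neither in the row, nor the column, nor the 3x3 box."""
    if digit in grid[r] or digit in (grid[i][c] for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[i][j] != digit for i in range(br, br + 3) for j in range(bc, bc + 3))
```

These few dozen lines encode, once and for all, the deferred intelligence of their author; running them a million times adds nothing to it.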

To sum up: if the computer impresses us so much, it is because computerization seems to fulfill certain magic promises. Doors open on their own as if someone had just said: "Open sesame!" To move things that have mass and volume, the "abracadabra" formula is replaced by a few lines of code. When a programmer sits in front of his keyboard and screen, it is as if the genie of legend were saying to him: "Here I am, oh master of the lamp!" Consequently, does he not risk unleashing forces that he cannot control, like the sorcerer's apprentice?

Let's get back to talking about the machine. Once built, the machine exists and produces effects that require attention. If we consider it as a natural object, we find it natural that it turns on when we press the "on" button. However, this machine is an artifact, just like the houses and institutions designed, built and used by human beings according to their knowledge and the techniques they master. An artifact has two aspects: the service it renders and the human conception it reflects, which expresses its values. A well-designed house shows the architect's respect for those who will inhabit it; a poorly designed house shows the builder's casualness and contempt for those who will live there! The same goes for computers, smartphones or tablets. Machines and technology are therefore in line with the culture or values of a society, as Gilbert Simondon said: "Those who only want to see the utility or, as they say, the use in an artifact, choose to ignore the fact that it results from a will and that it has a history that will be prolonged in the future by other artifacts, resulting from the will of others"17 [SIM 12]. Those who speak of computer intelligence seem to forget that this so-called "machine intelligence" resides in programs incorporating deferred human work and intelligence, just as deferred work is enshrined in a machine, and that it is articulated with the immediate, real-time intelligence of the user.

From man to machine: a quantum leap?

It is true, though, that a subtle relationship exists between deferred intelligence (stored in a program) and the immediate intelligence of the user: the art of computerization lies in the judicious articulation of these two intelligences and these two wills, combining the abilities of the human brain (that of the user) with the ubiquitous automaton of the computer (equipped with its programs). Given that intelligence lies entirely within the combination of programming and use (or, if preferred, of the programmer and the user), speaking of "computer intelligence" masks the joint role of two persons: it lends some "intelligence" to a "magic" black box whose origin, purpose and functioning are invisible. The expression "artificial intelligence" thus overshadows the actual human intelligence already incorporated within the computer, the deferred intelligence stored in the machines themselves and in their programs, while also erasing the immediate intelligence of the computer user.

The mechanized companies of the manufacturing industry already articulated deferred work, incorporated into fixed capital, with the living labor of people. Today's computerized firms combine the deferred intelligence of programmers with the live intelligence of operational staff. Hence an analogy (deferred/immediate) and a difference (work/intelligence) between the two forms of enterprise; this is of course schematic, because a share of built-in intelligence is found in mechanized companies, just as there is still human labor in computerized companies! One could compare the revolution that computers brought to our present way of life with the revolution brought to writing by the invention of the alphabet which, by allowing the exact reproduction of oral speech, corrected the ambiguity of consonant notation. While the latter had been used for accounting, legislation and liturgy, the alphabet made it possible to transcribe conversations, reveries and philosophical reflections. Speech, which only lasts as long as the sound waves that carry it, became eternal or, at least, as durable as the medium on which it is copied. The same signs could thus be read, contemplated and commented on, generation after generation; the images, concepts and reasoning expressing our thoughts could then circulate in space and in time. Just like computer programs, writings contain deferred intelligence; however, writings may express thought and sentiment, whereas programs are mainly intended to carry out an action; their purpose, essentially practical, is not (like that of many written texts) to share the messages they carry, but rather to carry out the action decided and programmed by an intelligent living person18. A word processor formats the text I am typing right now; a spreadsheet does calculations; one program displays a video or a movie; another drives cars designed by Tesla, Alphabet or Apple; a plane follows its autopilot, etc. Between computers and writing, there is thus a real analogy (deferred/immediate) but a significant difference in purpose (thought/feeling vs. action).

We could go further. Writing is not only a support for memory: it is also an aid to reasoning and reflection. It is difficult to add up several large numbers in one's head; the task becomes easy with paper and a pencil. The same is true of the reasoning that links algebraic relationships or the rules of a game: one can mentally glimpse its course, but one can only accomplish and control it in detail with the help of writing. It is of course the same when one applies oneself to composing a sonnet. The coalescence between our brain and writing therefore seems more powerful than our brain alone: "My pen is smarter than me," Einstein said. He obviously did not mean that his pen possessed any kind of intelligence, but that his own intelligence was less effective when deprived of writing. The same is true of our relationship with computers: the combination we form with them, that is to say, the coalescence between our immediate intelligence and the deferred intelligence incorporated into programs, is smarter than our intelligence alone.

Distinguishing power from intelligence

A computer seems intelligent because it is powerful and fast, which allows it to perform actions or calculations that would otherwise be beyond our reach. Possibly equipped with sensors so that it can react to its environment by executing a series of operations programmed by an intelligent individual, a computer automatically follows a choreography with impressive precision. All this creates a temptation: if the powerful deferred intelligence operating the computer proves, as it does on so many occasions, to be more effective than the immediate intelligence of a human being, should the computer supplant the living person? However, the deferred intelligence of the programmer would only manifest itself through programs prepared in advance; their execution would no longer leave any place for the spontaneous reaction typical of a human being responding to their environment19.

Might computerization prevent humans from interpreting the world around them and acting accordingly? Do we believe that companies and society would be more effective if they forbade any initiative on the part of the staff who meet a client or confront the complexity of nature? Of course, our brain dates back to our hunter-gatherer ancestors: its computing is much slower and less powerful than that of a computer; but it is capable of interpreting its environment and of facing new situations, in short, of doing things for which it has not been programmed. Creativity belongs to humans, whose emotions pick out potentially productive ideas from among the many signals born within the brain through random associations. This creativity makes us inventors, innovators, organizers, programmers and people who can, in the end, get by; and that is what sets us apart from a computer!

Those who design new products and stage their production are creators. Those who serve customers must deal with uncertainty and diversity: quality of service is the main support of product quality and, as is commonly said, of a company's competitive edge. However, many companies have not yet understood that the more automatic their production is, the more the customer needs human support and advice in case of trouble. A recurring case is that of the bank's customer adviser: the staff behind the counter say that they are sorry, but that "the computer" prevents them from carrying out a simple action that would meet the customer's request and common sense. Companies that fail to maintain the quality of their services compromise their competitiveness and, ultimately, their sustainability. Natural selection will only allow the survival of those who understand that initiative cannot be abandoned to deferred intelligence alone. That is why, despite automation, the job outlook is not as bleak as is commonly believed. Many tasks will most certainly be automated; and that is a good prospect, because it makes no sense to have human beings perform work that a computer or a machine accomplishes better and quicker than a living operator! Once firms understand that efficiency lies in the articulation between deferred intelligence and immediate intelligence, once they attach the appropriate importance to the quality of services, organize themselves accordingly and train the necessary skills, why would full employment be impossible?

However, there are still risks: awkward computerization and automation could force people to perform tasks that an automaton would do better than they can. There are also risks of excessive automation, which would deny the role of immediate intelligence and inhibit the initiative of staff. As before, the program flows from the programmer; if the latter is not clever, or is inattentive or careless, things may go wrong. Moreover, it is never certain that a program is perfectly correct, even if it has been carefully tested. Incidents are inevitable, and a poorly designed program can cause systemic damage. Furthermore, even if each programmer is, hypothetically, intelligent (in the sense that we give to this term) and each program is correct, it would still be necessary to know how to control the interconnection of thousands of automata and other programs running in parallel and exchanging data: this involves simulations, a statistical approach and an assessment of uncertainties, which will still and always require human intelligence.

In summary: towards the iconomy?

After this digression about artificial intelligence, we now come to the current economic crisis: the slowdown in growth and the rise in unemployment have struck several countries hard. Our thesis derives from Bertrand Gille's analysis [GIL 78]. He divided history into major periods, each characterized by a technical system, a synergy of a few fundamental techniques. Since the Paleolithic period, human beings have known how to build tools in order to enhance the action of their hands; since then, technical systems have followed one another until today.

Let us consider four of the more recent systems: 1) the essentially agricultural system of the former royal regime gave way, from 1775 onwards, 2) to the modern technical system (MTS), which was based on a synergy between mechanics and chemistry. Around 1875, these techniques were complemented by electrical energy and oil: the electric motor was invented by Gramme in 1873; electric lighting by Edison in 1879; the internal combustion engine by Otto in 1884. This gave birth to 3) the developed modern technical system (DMTS), of which very large companies were the most representative outcome. Finally, 4) around 1975, the fourth system, which Gille named the "contemporary technical system" (CTS), arrived [GIL 78]. It is based on an entirely new synergy: that of microelectronics, software and the Internet. From the 1970s, the computerization of companies was organized around their information systems; then microcomputers spread, followed by the Internet and the mobile telephone, then the "intelligent" smartphone (essentially a mobile computer) in the 2000s; in factories, robotization automated tasks hitherto performed by human labor. The next steps are already underway with the synergy of high-speed mobile access, cloud computing and the Internet of Things; the human body is becoming increasingly coupled with the mobile computer and computerized prostheses; various tools (3D printers, scanners, etc.) finally make it possible to give reality to things that were virtual, and vice versa.

Transition times

The transition from one technical system to another is considered an industrial revolution. Agriculture has not been abolished by the modern technical system; it has been mechanized with agro-machinery and chemical fertilizers, while the share of agriculture in the active population decreased sharply: in France, this share was 66% in 1800; it was just 3% in 2000. The third industrial revolution – that of computers – took place around 1975. It has not eliminated mechanics, chemistry or energy, but rather computerized these industries and reduced their relative weight in the workforce accordingly. Each of the industrial revolutions gave rise to a new society and brought about profound anthropological transformations: the modern technical system gave birth to the working class, capitalism and urbanization in the 19th Century; the industrialized nations competed for control of markets and raw materials and to build empires, and this provoked wars: those of the Revolution and the Empire after the first industrial revolution, and the two world wars after the second industrial revolution.

The transition between two periods often begins with a crisis, because institutions, habits and social classes are shaken by change, new opportunities and new dangers. Crises are caused by the inadequate behavior of some economic players in relation to the new technical system: the practical conditions of production and trade are transformed; this unsettles people's psychology; political sociology and legitimate powers are modified. Representations and social techniques must adapt, as if human beings had arrived on a new continent, unaware of whether the plants that grow there are food or poison, wary of strange animals, and uncertain of their geographical position.

Each transition therefore starts with turmoil20. The decisions of the ruling class are erratic, because its members are disoriented; the most powerful companies compete with smaller but very agile companies, which know how to take advantage of the new circumstances. Opinion turns against leaders, who are deemed incapable of understanding the new world in which society finds itself. The principles to which regulators are committed become counterproductive. Laws and regulations adapted to the old world, developed through patient arbitration between particular interests, are considered obsolete, without it being possible to conceive which laws and regulations should replace them. Thus, the troubles that shook France towards the end of the 18th Century were among the various causes of the Revolution. Similarly, French institutions were discredited after 1880: the Union Générale bank went bankrupt in 1882, the President of the Republic resigned in 1887 following alleged corruption, the Boulanger Affair came to a head in 1889, the Panama scandal broke out in 1892, followed by the Dreyfus affair in 1894. Nobody in this recently established Third Republic, founded in 1871, could imagine the economic boom which would soon bring France to a leading position among industrialized nations by 1900!

This return to the past can help us interpret the present situation: computerization has given rise to an entirely new world of techniques and methods; the transition from the developed modern technical system to the contemporary technical system is therefore more brutal than that of 1875, which only added the mastery of new sources of energy to the previous synergy between mechanics and chemistry. Today's dismay and the discrediting of institutions can be linked to the brutality of the contemporary transition. Most leaders are disoriented, including in industry: the emissions scandal that struck Volkswagen, the bribery charges against Siemens and the difficulties Deutsche Bank has met show that the phenomenon spares no country; Germany, supposedly the wealthiest of the nations forming the European Union, does not avoid trouble! Many people who call themselves environmentalists dream of returning to the era of hunter-gatherers; others look for salvation in the energy transition [RIF 11], a limited response to a true constraint rather than a significant venture21. Conversely, predators who have no rules or qualms are constantly on the lookout, the quickest to seize new opportunities: organized crime takes advantage of computers in order to launder its profits and conquer positions of power in a legal economy that rots from within; banks have not been able to resist temptation, and the creation of money exerts predation on the productive system. High-frequency trading has been described as systemic insider trading [GAY 14].

Profound adjustments are needed

The current economic crisis cannot therefore be explained solely by macroeconomic imbalances: computerization has strongly influenced human actions and intentions. The institutions adapted to the previous technical system are obsolete, as are most of the recommendations suggested by economists. The behavior of economic agents, whether companies, consumers or governments, no longer meets the requirements of the times that computerization has brought about. In order to get out of a crisis, you have to know where to go. The Institut de l'iconomie has set itself the task of constructing a model of a computerized society that would, hypothetically, be efficient and in which the behavior of economic players would be reasonably in line with the contemporary technical system and its nature. The conditions necessary for efficiency are not sufficient conditions, but a society that ignores them can never achieve the efficiency that the contemporary technical system makes possible. The future being essentially unpredictable, it is only a question of placing a reference point to guide decisions, unify determination and finally put an end to the destructive behaviors that we see today, particularly by containing predation.

Main features of the iconomy

Let us summarize the essential features of the iconomy before returning to the present situation. Repetitive tasks are automated, whether they are physical or mental: in factories, robots perform operations that were previously performed by a workforce that was merely an auxiliary to the machine; as this kind of labor has practically disappeared, only the maintenance crews who service the robots and the supervisors who control their operation can be seen in the workshops. Mental tasks are also automated: case-law searches for lawyers are carried out by automata; the work of architectural draughtsmen and technicians is assisted by programs that facilitate the production of plans and documentation; the design of industrial products is accelerated by three-dimensional modelling and simulation tools, etc.

Industrial production being the repeated reproduction of a prototype, its cost is essentially that of this repetition. If repetitive tasks are automated and labor is eliminated, this reproduction is reduced to the cost of raw materials, which is relatively low: as economists say, the marginal cost is practically zero. This is the case for the manufacturing of microprocessors and memory; it is also the case for software: once the program is written, its reproduction costs practically nothing. The same is true for mobile phones, tablets and computers, built by surrounding integrated circuits and software with a body that offers the user a convenient interface. Computerized and automated components are increasingly present in cars, aircraft, and electricity generation and transmission networks: we speak of smart meters and smart grids. The more computerized these products are, the more automated their production and the lower their marginal cost.

In all these circumstances, however, the cost of the initial investment is high. It covers the design of the product, the engineering of its production and therefore the cost of the programmable controllers (PLCs) and their programs. It also covers business engineering, which we will discuss in a moment. Putting a new microprocessor into production costs about 10 billion dollars; the cost of a new operating system is of the same order of magnitude22. The complexity of these design and engineering operations explains the time required to design an aircraft or a car and then to launch its industrial production. The cost function consequently takes a particular form: a significant fixed cost and a marginal cost that is practically zero (or insignificant). The company's risk is, on the other hand, very high, because almost all of the investment is spent before the first copy of the product is sold, even before competitors' initiatives are known: the iconomy is therefore an economy of maximum risk. In order to limit the risks, the company will set up a network of partners who share the production process. The information system must ensure the interoperability of the partners throughout the production process, as well as the transparency of the partnership: each partner must at any time be able to verify that the sharing of tasks, expenditure and revenue complies with the initial contract.
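A toy numerical illustration of this cost structure (our figures, chosen only for illustration) makes the point: the average cost collapses with volume while the marginal cost stays flat.

```python
FIXED_COST = 10_000_000_000      # e.g. designing and tooling a new microprocessor (illustrative figure)
MARGINAL_COST = 2                # near-zero cost of reproducing one more unit

def average_cost(quantity):
    """Average cost per unit: the fixed cost spread over the volume, plus the marginal cost."""
    return FIXED_COST / quantity + MARGINAL_COST

for quantity in (1_000, 1_000_000, 1_000_000_000):
    print(f"{quantity:>13,} units -> average cost per unit {average_cost(quantity):>13,.2f}")

# The average cost falls from about 10,000,002 to about 12 per unit: returns to scale are
# increasing, so the largest producer undercuts smaller rivals unless they differentiate.
```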

A new kind of firm?

Some could obviously object that some start-ups launch projects whose design and engineering are inexpensive, because they assemble software components that can be found on the market. It is true, and it is also true that some of them succeed; but if they do achieve success, they must scale up, which involves setting up a heavy infrastructure. The fixed cost then becomes high, even if the company only pays it after a low-cost start-up phase. When the cost function has this form, the average cost is a decreasing function of the quantity produced. This could lead to a natural monopoly: a single company, the largest, would then dominate the world market for each product, because its production cost would be lower than that of its competitors. Other companies can therefore only survive if they differentiate their product and produce a range whose attributes target the needs of a market segment. In this case, the market is monopolistically competitive. For this to be possible, the product must lend itself to diversification in order to meet various needs. This has always been the case for books, music, clothing and automobiles23. This market regime extends to all products, with the exception of those that cannot be diversified, such as raw copper ingots. It differs, of course, from perfect competition, which remains the reference for economists and one of whose consequences is marginal cost pricing24: such pricing would, however, be absurd when the marginal cost is practically zero.

In the iconomy, firms must sell at the average production cost, topped by a premium which compensates for the risks taken. Each firm's strategy will be to look for a monopoly in a market segment; regulators should make sure that this monopoly is temporary: a rather long period may well be necessary for the company to make its effort profitable, but not so long that it can rest on its laurels! The art of the regulator is therefore to adjust the duration of such a monopoly so that the engine of innovation runs at full speed. The variation of the same product into different ranges aims to provide each customer segment with the "quality" that suits it. A distinction can be made between "vertical quality", which depends on the level of finish, and "horizontal quality", which is characterized by the diversity of quality parameters at an equal level of finish (shirt color, trouser size, etc.). Their assessment of quality being subjective, customers will make their choice according to the quality/price ratio: they will look for what suits them, there and then. The iconomy is thus an economy of quality.

The quest for monopoly often encourages dominant companies to supply goods bundled with services that satisfy the user: pre-sale, loan, leasing, customization, maintenance, repair, warranty, replacement and recycling of the product. Computerization and the Internet of Things facilitate the follow-up of services to such an extent that any device ends up being an assembly of goods and services whose cohesion is ensured by information systems25. The information system is thus the pivot of contemporary industrial production: it ensures the cohesion of a combination of goods and services, as well as the interoperability and transparency of the various partnerships between the firm and the third parties involved from design to the production line and distribution26.

An overview of employment

In such a productive system, industrial employment has changed: it is nowadays mostly concerned with product design, engineering, production and services. The traditional workforce is replaced by brain power (the "knowledge worker"). While the manufacturing industry long ignored the brain of a workforce that was merely the auxiliary of a machine, since manual workers were only asked to learn a gesture and to repeat it in a purely reflexive way27, the computerized economy requires initiative, responsibility, creativity and judgement from this brain force. This is clearly the case, for example, for jobs related to product design, organization and the production line; it is also evident that service jobs require qualification and deserve fair remuneration, because the person who delivers a service must not only know but understand the client, translate what they say, react judiciously to unforeseen events and possess high personal skills. Moreover, the company cannot behave with a knowledge worker as it once did with the labor workforce: the contemporary firm must grant the operational staff a legitimacy proportional to the responsibilities it entrusts to them, and therefore admit a right to make mistakes and a right to be listened to! Staff who have something to say have to be heard as they report new ideas, what is happening in the field, etc. The hierarchical relationship, sanctified in the command functions of the former manufacturing industry, should consequently be replaced by a "trade of consideration" throughout the various levels of responsibility: within the firm in general, between specialties within the company, between the company and its customers, and between the company and its partners (distributors, for instance).

In short, the iconomy is an economy of competence. Efficiency lies in a clever articulation between the "brain force" and the computing resources combining programs, processing and documents. Computerization thus reveals a new being to the world, an alliance between the human brain and the automaton. Like any alliance, this one may reveal qualities that none of its components possess: this hybrid being might be creative like the human brain, reliable like a machine and as fast as electronics!

Conclusion and recommendations

In addition to automating repetitive tasks, computers perform tasks that were once attributed to magic: they control things through words, in programs that are executed at high speed, allowing for actions that were previously thought impossible. The autopilot of an aircraft keeps it for a long time in a fuel-saving attitude, an unstable position that a human pilot could only maintain for a few seconds.

We have characterized the iconomy as an economy of maximum risk, of quality and of competence. The secret of its effectiveness lies in the efficient alliance between the human brain and computing resources: computerizing a company is an art that requires technical, psychological, sociological and even philosophical capacities, because computerization goes beyond technology alone. This provides a useful insight into the current world: in order to emerge from the crisis, the behavior of consumers, businesses and the state would have to comply with the requirements of computerization. It is therefore necessary that consumers' judgement be guided by the best quality/price ratio, and not just by the lowest price; this would also enable consumers to manage their budget effectively. It is also necessary that firms' strategies be guided by the principles of the iconomy, and that they set themselves the goal of conquering a temporary monopoly on a segment of global needs! States must also prioritize a reasonable policy towards the computerization of the nation's major systems (health, education, justice, defense, etc.); policies should hence encourage companies to move towards the iconomy. This presumes that regulators finally take monopolistic competition as their reference, instead of the classical model of perfect competition and marginal cost pricing!

The general public must also accept that the key phenomenon lies in the computerization of the productive system, and not in the use of "intelligent" telephones or social networks, to which too much attention is paid; computerization should not be reduced to digital technology. We must admit that intelligence is not found in computers, but in the brains of programmers and users: there is no such thing as an artificial intelligence! It is ordinary, natural intelligence that contributes to the iconomy in various forms, starting with the programming of robots!

Lastly, let us not be mistaken about the risks: the main danger is not that too much information might kill information; that fear had already arisen, and was overcome, during the Renaissance with the multiplication of printed books! Nor is there any actual danger that automation might kill employment, because the economy of competence will most probably favor full employment in a middle-class society, which implies a profound transformation of the educational system. The real danger threatening us is of a different nature: beware of the return of a feudal regime, because predation, supported by computing power, could very well defeat the rule of law and democracy.

Bibliography

[BRY 11] BRYNJOLFSSON E., MCAFEE A., Race Against the Machine, Digital Frontier Press, Lexington, 2011.

[FIS 06] FISHER I., The Nature of Capital and Income, Macmillan, London, 1906.

[GAY 14] GAYRAUD J.F., Le nouveau capitalisme criminel, Odile Jacob, Paris, 2014.

[GIL 78] GILLE B., Histoire des techniques, Gallimard, Paris, 1978.

[LEV 94] LEVY S., Hackers: Heroes of the Computer Revolution, Delta, New York, 1994.

[LIC 60] LICKLIDER J., “Man–computer symbiosis”, IRE Transactions on Human Factors in Electronics, vol. HFE-1, pp. 4–11, 1960.

[RIF 95] RIFKIN J., The End of Work, Tarcher, New York, 1995.

[RIF 11] RIFKIN J., The Third Industrial Revolution, St Martin’s Griffin, Spokane, 2011.

[RIF 14] RIFKIN J., The Zero Marginal Cost Society, St Martin’s Griffin, Spokane, 2014.

[SCH 54] SCHUMPETER J., Capitalisme, socialisme et démocratie, Payot, Paris, 1954.

[SER 12] SERRES M., Petite Poucette, Le Pommier, Paris, 2012.

[SIM 12] SIMONDON G., Du mode d’existence des objets techniques, Flammarion, Paris, 2012.

[TRU 01] TRUONG J.M., Totalement inhumaine, Le Seuil, Paris, 2001.

[VOL 06] VOLLE M., De l’économie, Economica, Paris, 2006.

[VOL 08] VOLLE M., Prédation et prédateurs, Economica, Paris, 2008.

[VOL 14] VOLLE M., Iconomie, Economica, Paris, 2014.

[VON 57] VON NEUMANN J., The Computer and the Brain, Yale Nota Bene, New Haven, 1957.

Chapter written by Michel VOLLE.
