7

An historical introduction to formal knowledge systems

Liam Magee

Kant moreover considers logic, that is, the aggregate of definitions and propositions which ordinarily passes for logic, to be fortunate in having attained so early to completion before the other sciences; since Aristotle, it has not lost any ground, but neither has it gained any, the latter because to all appearances it seems to be finished and complete. Now if logic has not undergone any change since Aristotle—and in fact, judging by modern compendiums of logic the changes frequently consist mainly in omissions—then surely the conclusion which should be drawn is that it is all the more in need of a total reconstruction; for spirit, after its labours over two thousand years, must have attained to a higher consciousness about its thinking and about its own pure, essential nature… Regarding this content, the reason why logic is so dull and spiritless has already been given above. Its determinations are accepted in their unmoved fixity and are brought only into external relation with each other. In judgements and syllogisms the operations are in the main reduced to and founded on the quantitative aspect of the determinations; consequently everything rests on an external difference, on mere comparison and becomes a completely analytical procedure and mechanical calculation. The deduction of the so-called rules and laws, chiefly of inference, is not much better than a manipulation of rods of unequal length in order to sort and group them according to size—than a childish game of fitting together the pieces of a coloured picture puzzle (Hegel 2004, p. 51).

Analysts discussing knowledge systems typically distinguish their logical (or procedural) and ontological (or data) components (Smith 1998; Sowa 2000). To employ another related distinction, the logical part can be termed the formal component of a system—what preserves truth in inferential reasoning—while the ontological part can be considered the material component—what is reasoned about. Challenges of interoperability usually focus on the explicit ontological or material commitments of a system—what is conceptualised by that system. This is of course understandable; it is here that authorial intent is to be divined, and here that the system design is manifest. However, as the history of knowledge systems demonstrates, the line between logical/ontological or formal/material is often blurred; as the mechanics of systems have evolved to provide different trade-offs between features and performance, so too have the kinds of implicit ontological assumptions embedded within the logical constructs of those systems. In less arcane terms, even the austere, content-less world of logic remains nonetheless a world—bare, but not quite empty. And since logics themselves are plural, it makes sense that their worlds be plural too, with slight variations and gradations of difference between them. The next two chapters explore these differences through the lens of the historical development of formal knowledge systems. This history stretches back beyond the information age to the earliest musings on the potential to arrive at conclusions through purely mechanical procedure; to deduce automatically and unambiguously.

This chapter, then, presents a general historical narrative, which plots the development of knowledge systems against three successive waves of modernisation. This development is inherently tied to the rise of symbolic logic in the modern era, without which both computation generally and knowledge systems specifically would not be possible. A constant guiding goal, which can be termed, following Foucault, the pursuit of a ‘mathesis universalis’ (Foucault 1970), motivates this development. At its most extreme, this goal represents a form of epistemological idealism; it imagines that all knowledge can be reduced to an appropriate vocabulary and limited rules of grammar and inference—in short, reduced to a logical system. The extent of this idealism is itself an important feature of the different formalisms surveyed.

To focus on this and other significant points of difference and similarity, the characterisation of the history of this development is intentionally schematic; it does not attempt a broad description of the history of logic, computers or information systems. It does, on the other hand, aim to illustrate the rough affinity between the specific history of knowledge systems and the much broader history of modern industrialisation and capitalism. This is in part to counter the tendency of histories of logic and computing to present them as purely intellectual traditions, with only coincidental application to problems of industry, bureaucracy and governance. Logical ‘idealism’ in fact arose specifically in those places and times which demonstrated a practical need for it—because, in a sense, both the qualitative and quantitative demands of organisational knowledge management, traceable back to the rise of the bureaucracy in the nineteenth century, foreshadowed the emergence of information systems concurrently with the greater waves of modernity in the twentieth. The conclusion of the study, at the end of the next chapter, suggests that the nexus of tensions which arise in modernity plays a structuring role in the production of incommensurability.

The structure employed here distinguishes between ‘pre-modern’, ‘modern’ and ‘postmodern’ development phases of knowledge systems. The ‘pre-modern’ and ‘modern’ periods cover, respectively, the scattered precursors and the more structured programs in logic and mathematics which pointed towards the development of knowledge systems. Early knowledge systems, such as the relational database, can be said to apply the results of ‘modernist’ logic in the form of highly controlled structures of knowledge. ‘Postmodern’ knowledge systems, on the other hand, arise out of the perceived difficulties of coercing all forms of knowledge into rigid structures. In particular, the era of the web has inspired the construction of hybrid, semi-structured knowledge systems such as semantic web ontologies—combining some of the computational properties of relational databases with support for documents, multimedia, social networks and other less structured forms of data and information.

The next chapter shifts from an historical overview towards a more detailed examination of the question of commensurability of relational database and semantic web systems. In particular it looks at one stark area of potential incommensurability—that of so-called ‘closed’ versus ‘open’ world assumptions. While this area of incommensurability is well documented within the relevant literature (Reiter 1987; Sowa 2000), it resonates with several broader cultural distinctions between the two kinds of systems. These distinctions, along with several recent discussions of them, are then reviewed, followed by a suggestive assessment of commensurability between the two kinds of knowledge representation surveyed.
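The difference between the two assumptions can be made concrete with a toy example. The following sketch (in Python, with an invented predicate and fact base) shows the same query receiving different answers: a closed-world reasoner, in the manner of a relational database, treats any fact absent from its store as false, while an open-world reasoner, in the manner of the semantic web, treats it as merely unknown.

```python
# A toy fact base: everything the system has been told.
facts = {("employs", "Acme", "Ada")}

def closed_world_query(fact):
    # Closed world (relational databases): absence implies falsity.
    return fact in facts  # True or False, never "unknown"

def open_world_query(fact):
    # Open world (semantic web): absence implies only ignorance.
    return True if fact in facts else "unknown"

q = ("employs", "Acme", "Bob")
print(closed_world_query(q))  # False: Bob is assumed not to be employed
print(open_world_query(q))    # "unknown": the data is simply silent
```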

The conclusion of the two chapters recasts knowledge systems back into a broad historical frame, suggesting several causal factors behind the production of differences between them. These suggestive indications are further developed in later chapters in the book.

Pre-modernity: logical lineages

Retrospectively, it appears that modern knowledge systems are the culmination of a steady linear development in the field of logic. For the past century and a half, since Frege’s efforts to systematise logic in symbolic form, progressive and continuous advancement is indeed a plausible narrative line. Prior to Frege, however, logic appears at relatively brief intervals in the development of Western thought. A more fitting metaphor is perhaps that of an expanding series of fractal-like spirals—sporadic and incidental surges during the ancient, medieval and Enlightenment periods, before a sudden and sustained preoccupation from the nineteenth century onwards (Kneale 1984). Indeed, as the quote from Hegel suggests, at the start of the nineteenth century logic was perceived to be a field for the most part exhausted by Aristotle’s exposition of syllogisms. More recent histories have shown a somewhat more complex picture: important precedents to modern logic variants, such as predicate, modal and temporal logics, can be found in Aristotelian and later classical works on logic (Bochenski 1961), as well as in medieval scholasticism (Kneale 1984).

Notwithstanding these precursors, it is generally agreed that not until the seventeenth century was something like contemporary symbolic predicate logic, on which knowledge systems are based, conceived (Bochenski 1961; Kneale 1984). Largely the product of a solitary figure, Leibniz, this conception was of a universal symbolism—a mathesis universalis (Foucault 1970)—which would provide both a standardised vocabulary and a formal deductive system for resolving disputes with clinical and unambiguous clarity (Davis 2001). Leibniz dreamed of a process which could strip all argument of the vagaries and ambiguities of natural language, leaving only a pristine set of statements and rules of valid inference in its place:

If this is done, whenever controversies arise, there will be no more need for arguing among two philosophers than among two mathematicians. For it will suffice to take pens into the hand and to sit down by the abacus, saying to each other (and if they wish also to a friend called for help): Let us calculate! (Lenzen 2004, p. 1).

Set in the context of Cartesian geometry, Newtonian physics, Copernican cosmology, the construction of the calculus, and a host of other mechanical formalisations of the seventeenth century, that mathematics should be seen as the epistemological pinnacle towards which other kinds of thought might aspire—to reason ‘clearly and distinctly’, as another rationalist, Descartes, put it—is perhaps not surprising. At the time, consensual workings-out of ‘controversies’—with or without the aid of a ‘friend’—were an important intellectual concomitant to the preferences for personal introspection over traditional, largely clerical authority, for rationality over dogma, for individual decision making over ecclesiastical mandate, and for mechanical laws over divine decrees, all of which mark the emergence of the Enlightenment (Habermas 1989). The cry to resolve disputes by ‘sitting down by the abacus’—or any of its contemporary analogues—was, however, to inspire a longer and more sustained wave of rationalist oneirism only by the middle of the nineteenth century. Arguably, Leibniz’ fervour was not yet matched by a sufficiently developed and broader need for rationalised and standardised communication in the social sphere at large. From the nineteenth century onwards, though, three further distinct points can be isolated within this historical trajectory: Frege’s repudiation of German idealism and psychologism in the late nineteenth century, which paved the way for symbolic logic; logical positivism’s rejection of metaphysics, and its search for a purified, foundational mathematics; and, most importantly, the subsequent post-war exploration of computational methods to represent, refine and extend human knowledge, which gradually filtered down from ‘pure’ research to applied problem-solving in a myriad of practical contexts. What began as an individual exhortation, barely a rippling murmur in a sea of philosophical discourse, had, by the twentieth century, coalesced into a tradition of what Rorty (1992) termed ‘Ideal Language Philosophy’—a putative, therapeutic program for extending the use of language, ultimately, from the selective company of human agents to a wider family of computational agents as well.

One of the foundational populist expressions of the ambitions of the semantic web, published in Scientific American in 2001, gives a modern rendering of this zeal for intellectual asceticism:

The semantic web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The first steps in weaving the semantic web into the structure of the existing Web are already under way. In the near future, these developments will usher in significant new functionality as machines become much better able to process and ‘understand’ the data that they merely display at present… The semantic web, in naming every concept simply by a URI [Uniform Resource Identifier], lets anyone express new concepts that they invent with minimal effort. Its unifying logical language will enable these concepts to be progressively linked into a universal Web. This structure will open up the knowledge and workings of humankind to meaningful analysis by software agents, providing a new class of tools by which we can live, work and learn together (Berners-Lee, Hendler and Lassila 2001).
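The mechanics of ‘naming every concept simply by a URI’ can be sketched in miniature. In the following illustration (the URIs are hypothetical, and plain Python tuples stand in for an RDF triple store), statements are subject-predicate-object triples whose terms are globally unique names, so that assertions published independently can be merged into a single graph:

```python
# Hypothetical URIs naming two concepts and a relation between them.
PERSON = "http://example.org/ontology#Person"
KNOWS = "http://example.org/ontology#knows"
TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

# Statements as (subject, predicate, object) triples of URIs.
graph = {
    ("http://example.org/people#ada", TYPE, PERSON),
    ("http://example.org/people#ada", KNOWS, "http://example.org/people#charles"),
}

# Because the names are global, triples published anywhere can be unioned.
another_graph = {("http://example.org/people#charles", TYPE, PERSON)}
merged = graph | another_graph
print(len(merged))  # 3 statements in the combined graph
```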

As with the Enlightenment, these more recent moments have been accompanied by broader ideological trends. These are sketched out in more detail below—in part to emphasise the inter-dependent structural connections between the emergence of knowledge systems, on the one hand, and the rise of distinct styles of modern organisation and management—features of contemporary capitalism—on the other; and in part to help explain how variant knowledge formalisms—even at a level of abstraction from questions of conceptual content—still bear substantial epistemological assumptions. These assumptions, in turn, can have significant bearing on how systems based on these formalisms might be considered commensurable.

Early modernity: the mechanisation of thought

After Leibniz, the dream of a formal mechanism for ‘calculating’ the logical outcome from a set of premises was to remain dormant for a considerable period. Variants of German idealism sought instead to emphasise the irreducibility of thought to pure procedure. Even the ostensibly logical works of Kant and Hegel differentiated the sphere of the rational from other modes of thought: practical/ethical and judgement/aesthetic categories were procedurally, not just substantively, differentiated. Foucault goes so far as to argue that the eighteenth and early nineteenth centuries, in the human sciences at least, are marked by a departure, at the level of method, from attempts at a common, universal and formal language:

In this sense, the appearance of man and the constitution of the human sciences… would be correlated to a sort of ‘de-mathematicization’… for do not the first great advances of mathematical physics, the first massive utilizations of the calculation of probabilities, date from the time when the attempt at an immediate constitution of a general science of non-quantifiable orders was abandoned? (Foucault 1970, pp. 349–50).

It was not until the mid-nineteenth century, coincidentally when industrialisation, and the associated widespread mechanisation of industry, grew rapidly (Hobsbawm 1975), that logic began again to take on importance as an active field for new research in its own right. The incipient form of logic as a coherent and regulated, machine-like system began to take shape in four related British works around the middle of the nineteenth century: Richard Whately’s Elements of Logic (1826), William Thomson’s Outlines of the Laws of Thought (1842), John Stuart Mill’s A System of Logic (1843) and, most significantly, George Boole’s An Investigation of the Laws of Thought (1854). These developments, contemporaneous with the early computational designs of Babbage, mark a shift in the treatment of logic from a study of modes of argumentation (as a sibling discipline to rhetoric) to the study of a system, with strong affinities to mathematics—logic begins here to be considered as a calculus of the kind Leibniz envisioned, rather than a mere rhetorical aid (O’Regan 2008). This is especially evident in Boole’s landmark text, which not only conducts its discussion of logic in algebraic rather than verbal terms, but introduces for the first time a set of logical operations equivalent to those of arithmetic (logical product, sum and difference) (Bochenski 1961; Kneale 1984). Evidence of the radical nature of this effort is given by the prolonged defence in its introduction:

Whence it is that the ultimate laws of Logic are mathematical in their form; why they are, except in a single point, identical with the general laws of Number; and why in that particular point they differ;—are questions upon which it might not be very remote from presumption to endeavour to pronounce a positive judgement. Probably they lie beyond the reach of our limited faculties. It may, perhaps, be permitted to the mind to attain a knowledge of the laws to which it is itself subject, without its being also given to it to understand their ground and origin, or even, except in a very limited degree, to comprehend their fitness for their end, as compared with other and conceivable systems of law. Such knowledge is, indeed, unnecessary for the ends of science, which properly concerns itself with what is, and seeks not for grounds of preference or reasons of appointment. These considerations furnish a sufficient answer to all protests against the exhibition of Logic in the form of a Calculus. It is not because we choose to assign to it such a mode of manifestation, but because the ultimate laws of thought render that mode possible, and prescribe its character, and forbid, as it would seem, the perfect manifestation of the science in any other form, that such a mode demands adoption (Boole 2007, p. 11).

For Boole, rendering the laws of thought ‘in the form of a Calculus’ becomes the ‘perfect manifestation of the science’, and a natural accompaniment to the greater scientific enterprise then burgeoning in mid-nineteenth century Britain. It is an undertaking which, moreover, fits comfortably with the broader economic and military aspirations of a global-looking empire (Hobsbawm 1975). Nevertheless, as the titles of these works indicate, logic remained a description of concomitant mentalistic ‘laws of thought’—however much they may be ‘mathematical in their form’ (Boole 2007). That these laws themselves belonged to the domain of mathematics, or perhaps might found a new branch of ‘metamathematics’, rather than psychology—and thus could be replicated by a machine—was an implication yet to be developed. For Frege, writing a little later in the nineteenth century, expressions such as ‘laws of thought’ were the last vestiges of a discipline about to be wrenched from its psychologistic origins (Kneale 1984). Subsequently, logic was to be reoriented onto new disciplinary foundations, not on the basis of a mere analogy or affinity with mathematics, but as no less than the very foundations of the mathematical enterprise.
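The ‘single point’ of difference from ‘the general laws of Number’ to which Boole alludes can be stated compactly. In modernised notation (a gloss on Boole’s text rather than his own symbolism throughout), his calculus treats classes with the operations of arithmetic, subject to one additional law of idempotency that ordinary numbers satisfy only for 0 and 1:

```latex
\begin{align*}
  xy    &\quad \text{(the class of things that are both $x$ and $y$: logical product)} \\
  x + y &\quad \text{(the class of things that are $x$ or $y$: logical sum)} \\
  1 - x &\quad \text{(the class of things that are not $x$: logical difference)} \\
  x^{2} &= x \quad \text{(Boole's index law, which among numbers holds only of $0$ and $1$)}
\end{align*}
```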

The latter half of the nineteenth century witnessed the emergence of two new global powers which could compete with the military, economic and technological dominance of the British Empire—Germany and the United States (Hobsbawm 1987). Coincidentally, these two countries also boast the two seminal logicians of this period, in Frege and Peirce. Quite independently and, in the case of Peirce, to relatively little initial acclaim, they worked to develop completely axiomatised logical systems, which in turn would form the basis for all modern-day formal knowledge systems (Davis 2001; Sowa 2000). Frege, in particular, developed three pivotal and influential innovations: the ‘concept script’ (Begriffsschrift), a notational language of symbols in which variables and constants are given well-defined semantics; the vital conceptual distinction between connotational meaning (sense—Sinn) and denotational meaning (reference—Bedeutung); and, most notably, the formalisation of quantified predicate logic, which as one historian suggests ‘was one of the greatest intellectual inventions of the nineteenth century’ (Kneale 1984). While Frege’s notation was never widely adopted, and presented considerable intellectual challenges to its early readers, the recognised flexibility of predicate logic allowed for an explosion of interest in ‘metamathematical’ problems—how to develop a foundational system from which all of mathematics could be derived (van Heijenoort 1967). Together with Cantor’s set theory, at the turn of the twentieth century it now appeared at least possible to unite mathematics under a single universal theory—indeed, the very desire to develop, for the first time, a unified coherent theory itself points to a uniquely modern epistemology of mathematics (Davis 2001). More ambitiously still, the challenge of erecting all knowledge on the rigorous epistemological foundations of logic and mathematics could now be conceived—a challenge which, around the turn of the century, was indeed posed by the mathematician Hilbert, and soon after accepted by Whitehead and Russell (van Heijenoort 1967).
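The expressive gain of quantified predicate logic over the syllogistic can be conveyed by a standard textbook example (given here in modern notation rather than Frege’s own two-dimensional script). Only quantifiers ranging over variables can distinguish statements of multiple generality, which Aristotelian forms cannot express:

```latex
\begin{align*}
  \forall x\, \big( \mathit{Man}(x) \rightarrow \mathit{Mortal}(x) \big) &\quad \text{all men are mortal} \\
  \forall x\, \exists y\; \mathit{Loves}(x, y) &\quad \text{everyone loves someone or other} \\
  \exists y\, \forall x\; \mathit{Loves}(x, y) &\quad \text{there is someone whom everyone loves}
\end{align*}
```

The scope distinction between the last two formulas, invisible in natural language word order, is exactly the kind of ambiguity Frege’s notation was designed to eliminate.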

Crises in modernity: the order of logic and the chaos of history

The fascination with propositional form, which characterises the enthusiasm for symbolic logic in the early twentieth century, has its suggestive cultural analogues in the geometric obsessions of cubism, the calculating devices of the new mass culture industries, modernist architecture, the ordered urban planning environments of Le Corbusier, and the shared desire for and horror of order that is a continued motif of modernist art and literature (Adorno 2001; Hobsbawm 1994). It also parallels the development of and interest in a host of more mundane technologies—double-entry book-keeping, time-and-motion studies, efficient transportation, assembly-line manufacturing, punch-card tabulation and the growth of explicit management techniques, to mention but a few (Hobsbawm 1994). The first half of the twentieth century was a highly productive period for the formalisation of logic, during which the foundations for contemporary research in artificial intelligence, cognitive science and a host of affiliated disciplines were laid. It was during this period, too, that the latent potentials of logic began to coalesce with a fully fledged modernity to provide the kinds of technological instrumentation required to meet the demands of large-scale administrations and bureaucracies. Here the quantitative growth of collated organisational data would outstrip the capacities of pre-digital storage technologies—and companies quickly emerged to fill the breach: the late nineteenth century already witnessed the growth of one corporate entity willing to service government census needs with new tabulating machines; after a three-way merger in 1911, this entity would by 1924 operate under the now familiar name of International Business Machines (IBM Corporation 2009). Yet formal logical systems were, by and large, still considered without direct regard for their applications. Prior to the elaboration of the first computers during and after the Second World War, the steady production of theorems in set theory, model theory, foundational arithmetic and mathematical logic formed the basis from which something like modern information and knowledge systems could emerge.

These innovations happened within an era of unprecedented political and economic crisis: two world wars, numerous political revolutions and the Great Depression (Hobsbawm 1994). The surrounding turmoil of Europe often appears eclipsed in the isolated intellectual histories of this period, which feature predominantly the relative sanctuaries of the universities of Cambridge, Oxford, Vienna and Warsaw, and eventually those of Berkeley, Harvard and Princeton too. Yet the application of logic in military, administrative and organisational contexts was to become an important factor in the funding and direction of problem solving within these as yet relatively small academic circles (Ceruzzi 2003). Some of the key figures in the emergence of the information age—Turing and von Neumann—made vital contributions, respectively, to code breaking and the construction of the atomic bomb (O’Regan 2008). Notoriously, the Nazis used ever more efficient information systems for cataloguing concentration camp prisoners—recently, for instance, it has been claimed that this use of punch-card tabulators involved lucrative agreements and ongoing business with IBM subsidiaries (Black 2002). But on a more general level, systems for tabulating and calculating at high speeds for academic, governmental, commercial or military purposes meant that there was significant curiosity about the otherwise arcane results emerging from this form of theoretical enterprise, even if it did not hold the public attention in the way that, for example, theoretical physics did from the First World War onwards.

The following sections outline some of the salient developmental steps in the construction of both computers generally and knowledge systems in particular.

1910s—Mathematical principles

In Principia Mathematica, Whitehead and Russell (1910) endeavoured to refound the entirety of mathematics on the new ‘meta-mathematics’ of formal logic. This work built on Frege’s system, and was to prove instrumental in inspiring the austere brand of philosophy developed by the Vienna Circle in the 1920s, known as logical positivism (van Heijenoort 1967). Principia Mathematica, more than any other work, was responsible for directing Anglo-American philosophy away from metaphysics, idealism and the naive empiricism of the nineteenth century, and towards an empiricism instead founded on the precise use of a language resolutely committed to describing facts—a language epitomised in the severe codes of symbolic logic. The impetus behind Russell and Whitehead’s project remained that of Leibniz’ dream, but phrased now in tones less of wishful thinking and more of matter-of-fact inevitability. The task of logical analysis, in Russell’s telling introduction to Wittgenstein’s work, is to show ‘how traditional philosophy and traditional solutions arise out of ignorance of the principles of Symbolism and out of misuse of language’ (Wittgenstein 1921). At the heart of this vision, in a reductionist form, is the idea that, once the appropriate logical vocabulary is supplied, and the concepts of a field made sufficiently clear, all knowledge can be reduced to empirical observation and data collection.

1920s—‘Thereof one must be silent’

This vision, in the various imaginings of Frege, Russell and the logical positivists, receives its most incisive and forceful articulation in Wittgenstein’s Tractatus Logico-Philosophicus (1921). Not a work on logic in the usual sense—it uses relatively little symbolism, almost no mathematics, and has none of the standard hallmarks of logical papers or textbooks—it nevertheless had great influence, and has continued to be read long after Principia Mathematica was relegated to the relative obscurity of the history of logic. Indicative of the new putative function of philosophy, the Tractatus took aim at the entire history of philosophy, portraying it as a discipline awash with metaphysical confusion. The following exemplifies this critique, but also suggests the manner in which philosophy ought henceforth to proceed—with surgical precision:

3.323 In the language of everyday life it very often happens that the same word signifies in two different ways—and therefore belongs to two different symbols—or that two words, which signify in different ways, are apparently applied in the same way in the proposition…

3.324 Thus there easily arise the most fundamental confusions (of which the whole of philosophy is full).

3.325 In order to avoid these errors, we must employ a symbolism which excludes them, by not applying the same sign in different symbols and by not applying signs in the same way which signify in different ways. A symbolism, that is to say, which obeys the rules of logical grammar—of logical syntax.

(The logical symbolism of Frege and Russell is such a language, which, however, does still not exclude all errors.) (Wittgenstein 1921, p. 41).

Wittgenstein’s specific technical contribution to the discipline of logic in the Tractatus was limited to the construction of truth tables—a device for determining the truth function of a proposition given the truth values of its atomic parts. The broader influence of the Tractatus in philosophy, though, is inestimable—it completed the exercise instigated by Frege and Russell, of placing logic and linguistic analysis at the centre of contemporary philosophical discourse (Rorty 1992). Just as significantly, once sufficiently interpreted and translated by Carnap, Ayer, Popper and others, it emphasised just how the factual propositions constituting scientific knowledge, specifically, were to be articulated—as a system of concepts, relations and properties. This was to form the basis for how modern knowledge systems would develop. Of anything which could not be subsumed directly within this system, logical positivists, following Wittgenstein, might declare: ‘thereof one must be silent’ (Wittgenstein 1921).
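The device is mechanical enough to be generated by a few lines of code. The sketch below (an illustration in Python, not a historical reconstruction) enumerates every assignment of truth values to the atomic parts of a compound proposition and evaluates the whole under each:

```python
from itertools import product

# A compound proposition built from two atomic parts, p and q.
def formula(p, q):
    return (p and q) or (not p)

# A truth table: every assignment of truth values to the atoms,
# together with the truth value the compound takes under it.
for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5} | formula={formula(p, q)}")
```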

1930s—Completeness, truth, decidability and computations

The 1930s witnessed a furious explosion of theoretical work in logic, and the first emergence of its practical application. In the early years of the decade, two of the most significant figures in the history of logic, Tarski and Gödel, published vitally important results for the future evolution of knowledge systems. Neither logician’s work is easily assimilable into a historical synopsis of this sort, so this section focuses only on two of their more significant results—published in Gödel’s ‘On Formally Undecidable Propositions of Principia Mathematica and Related Systems’ ([1931] 1962) and Tarski’s ‘The Semantic Conception of Truth’ (1944), respectively. These results were to have near-immediate impact on the development of computing—around the middle of the decade, Church and Turing delivered models of computation, and by the decade’s end, quite independently, the first computer had been developed by Zuse in Germany (Lee 1995). Thereafter, the war was both to disrupt and to furnish new opportunities for research in aid of the war effort, and, inadvertently, to establish a realignment of technological prowess with the diaspora of mathematical talent from the old world to the new.

In 1931 Gödel resolved the problem of providing a sufficient axiomatisation of the foundations of mathematics, albeit with a negative result. In his incompleteness theorem, he demonstrated that no logical system expressive enough to describe arithmetic could be both complete and consistent (Gödel [1931] 1962): such a system could prove every true theorem of arithmetic only by also admitting contradictory theorems. This result was to have vital implications—in Turing’s recasting of it several years later, the ‘halting’ problem showed that no general procedure can decide, for an arbitrary Turing machine and its input, whether the machine will terminate or run forever. This insight, vital to the general notion of computability, would also eventually mature into different classes of logic-based knowledge systems, depending on the trade-off between expressivity—what kinds of facts such systems can represent and reason over—and tractability—what guarantees of performance and termination reasoning algorithms can offer (Nagel and Newman 1959).
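The flavour of Turing’s argument can be rendered as a short, deliberately paradoxical sketch. In the Python below, halts is a hypothetical oracle, not a function anyone can write; the contradiction that follows from assuming it exists is the point:

```python
def halts(program, argument):
    """Hypothetical oracle: return True iff program(argument) terminates.
    Turing's argument shows no such total, correct function can exist."""
    raise NotImplementedError  # placeholder for the impossible decider

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about self-application.
    if halts(program, program):
        while True:      # loop forever if predicted to halt
            pass
    return None          # halt if predicted to loop

# diagonal(diagonal) halts iff halts(diagonal, diagonal) is False,
# i.e. iff it does not halt -- a contradiction, so halts cannot exist.
```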

Tarski’s work on formal languages has greater application still for the development of formal knowledge systems, and indeed has been highly influential in the philosophy of natural language. Although most of his work related to various fields in mathematics, several significant papers on logic published originally in Polish in the 1930s focus on the concept of truth in formal languages (Tarski 1957). These papers form the basis for the development of model theory, which aims to describe how linguistic models (expressed in either formal or natural languages) can be adequately interpreted in a truth-functional sense. Tarski’s relevant work from this period is ‘The Concept of Truth in Formalized Languages’. The aim of the paper is to ‘construct—with reference to a given language—a materially adequate and formally correct definition of the term “true sentence”’ (Tarski 1957, p. 152, original emphasis). Tarski is careful not to offer a definition of truth itself—that task, he states, belongs to epistemology. Rather he is interested in answering the question: what is it for a sentence constructed in a given (formal) language to be true? An important step towards this result is the introduction of recursive languages, in which one language (a metalanguage) can be used to give the truth conditions of sentences in another (an object language). The metalanguage cannot, however, state the truth conditions of its own sentences—these conditions must be stated in yet another metalanguage. For Tarski, a definition of truth is adequate just in case it entails, for each sentence of the object language, a biconditional stating that the sentence is true if and only if the condition given by its metalanguage translation obtains. Tarski frames this requirement as a criterion known as Convention T. This semantic conception was eventually elaborated into model theory in the 1950s, and gave the precise semantic determination required for the construction of highly expressive formalisms, of which semantic web ontologies are a contemporary example.
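The recursive character of Tarski’s definition can be imitated in miniature, with Python serving as the metalanguage for a toy object language of propositional formulas (the tuple encoding is invented for illustration). The truth conditions of compound sentences are stated, in the metalanguage, in terms of the truth conditions of their parts:

```python
# Object language: formulas are nested tuples, e.g. ("and", "p", ("not", "q")).
# The metalanguage (Python) states their truth conditions recursively.

def true_in(formula, valuation):
    """Tarski-style recursive truth definition for a toy language."""
    if isinstance(formula, str):              # atomic sentence
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not true_in(formula[1], valuation)
    if op == "and":
        return true_in(formula[1], valuation) and true_in(formula[2], valuation)
    if op == "or":
        return true_in(formula[1], valuation) or true_in(formula[2], valuation)
    raise ValueError(f"unknown connective: {op}")

# 'p and not q' is true in a model where p is true and q is false.
print(true_in(("and", "p", ("not", "q")), {"p": True, "q": False}))  # True
```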

Turing was another key figure who emerged in this period, building, as did Church, on Gödel’s critical insights. Although Great Britain had a diminished role in the development of computing in the post-war period, Turing remained a significant and iconoclastic influence in this development until his death in 1954 (Hodges 2009). A member of the fervent Cambridge intellectual scene in the 1930s, Turing produced work that was to have greater practical consequence in the second half of the century than that of any of his contemporaries. The Turing machine, elaborated in an effort to solve the problem of mathematical decidability, was to form the basis of all modern computers. Its key insight is that a machine can encode not only information, but also the very instructions for processing that information. This involved a virtualisation of the machine: from a predefined instruction set built into the machine itself, to one stored instead in ‘software’—an erasable, manipulable tape or script which now programmed the machine. Critically, these instructions could then be modified, so that the underlying physical machine would effectively be reprogrammed using new algorithms to replicate any number of different ‘logical’ machines. In the same paper, Turing also applied Gödel’s incompleteness theorem to demonstrate the undecidability of certain algorithmic classes. Just as the broad vista of the computer age was sketched out in intricate detail via a decidedly old-world metaphor of marked tape, ironically its theoretical horizon—the limits of computability—was also being discovered and announced.
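The stored-instruction idea can be demonstrated directly: one and the same simulator, fed a different transition table, behaves as a different ‘logical’ machine. The following is a minimal, conventional rendering in Python (the bit-flipping machine is an invented example), with the ‘program’ supplied as data rather than built into the mechanism:

```python
def run_turing_machine(table, tape, state="start", blank="_", steps=1000):
    """Simulate a Turing machine given as a transition table.
    table maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):            # bounded, since halting is undecidable
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        symbol_out, move, state = table[(state, symbol)]
        tape[head] = symbol_out
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# The 'program' is data: this table flips every bit, then halts at the blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_bits, "10110"))  # prints 01001_
```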

1940s—Towards a computational world

The work of Whitehead, Russell, Wittgenstein, Gödel, Tarski, Church and Turing, among many others, was to have substantial implications for modern computing applications, but in the 1930s these remained confined to the small communities of mathematicians and logicians. The first computers were developed in the late 1930s and the early 1940s, in Germany and in Great Britain respectively (Metropolis, Howlett and Rota 1980), and theoretical work in this field was at first only tentatively applied to practical problems. Although Great Britain and Germany continued to produce the predominant logicians of this period, the prominence of Tarski and others attested to the broader interest in logic across the European continent, and increasingly in the United States (Ceruzzi 2003; Davis 2001).

The advent of the Second World War had two significant effects on the further development of logic and its application. First, it brought greater attention and funding to a range of theoretical disciplines, which suddenly appeared to have tremendous military application. The most conspicuous example was the development of the atomic bomb, possible only because of the recently realised theoretical feasibility of splitting atoms. But there was also significant development of computing applications, notably in Britain and the United States—Turing’s enduring fame is partly a result of his code-breaking work on the Enigma project (Metropolis, Howlett and Rota 1980). Germany could also have entered the computer age in this period; Zuse, an enterprising young engineer, built the first functioning computer in 1938, but ironically could not obtain funding from the Nazi party to further its development (Lee 1995; Metropolis, Howlett and Rota 1980). Second, with less immediate but equally great long-term effect, the rise of Nazism and the war also stimulated the enormous migration of Jewish intellectuals to the United States in the 1930s and the early years of the war (Davis 2001). This included, along with many others, Gödel and Tarski, as well as von Neumann, a leading logician and economist, and also creator of the basic hardware architecture of the modern computer (Lee 1995). This influx of considerable talent provided an enormous stimulus to American universities, notably at Princeton, Berkeley and Harvard. The preponderance of these intellectuals led, in the post-war period, to a generation of students who had been trained and influenced within the relatively sedate academic climate of the wealthy American university system. In the context of the Cold War and the burgeoning economic conditions of the 1950s, highly trained mathematicians and physicists were to be increasingly in demand in both industry and government. The United States, and to a much lesser extent Russia, were well placed, then, to capitalise on the influx of significant intellectuals like Gödel and Tarski—although their influence was equally likely to be felt indirectly, through the work of their students, and the subsequent dissemination of their results via translation.

Although they form only a small part of the broader work conducted in mathematical logic, or as it was then termed ‘metamathematics’, the logicians introduced here still stand out as singularly responsible for the disciplinary orientations of philosophy and mathematics, and for setting the foundations of the extraordinary rise of the computing industry in subsequent years. In the latter part of the twentieth century, this quantitative growth itself led to significant qualitative specialisation in computing science, to the extent that the number of conferences, papers and results has long since been impractical to survey single-handedly. Even this appreciable academic activity pales in comparison, however, to the extraordinary industrial investment in computing applications. The next section aims to chart the direction of this development in the latter part of the twentieth century, especially in the context of the rise of knowledge systems. Of these, the relational database has had the most spectacular rise, to the extent that it is now a pervasive part of any modern-day organisational infrastructure. The semantic web represents an effort to develop an alternative architecture for representing knowledge, offering more expressive features than the set-theoretic models of relational databases. Collectively, however, both represent species of a broader common genus, a family of formal knowledge systems which aimed to realise, with various inflections, the letter if not quite the spirit of Leibniz’ much earlier utopian dreams.

References

Adorno, T.W. The Culture Industry: Selected Essays on Mass Culture. Abingdon: Routledge; 2001.

Berners-Lee, T., Hendler, J., Lassila, O. The Semantic Web. Scientific American. 2001; 284:34–43.

Black, E. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishing Group; 2002.

Bochenski, J.M. A History of Formal Logic. I. Thomas (tr.). Notre Dame: University of Notre Dame Press; 1961.

Boole, G. An Investigation of the Laws of Thought. New York: Cosimo Classics; 2007.

Ceruzzi, P.E. A History of Modern Computing. Cambridge, MA: MIT Press; 2003.

Davis, M. Engines of Logic: Mathematicians and the Origin of the Computer. New York: W.W. Norton & Company; 2001.

Foucault, M. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books; 1970.

Gödel, K. On Formally Undecidable Propositions of Principia Mathematica and Related Systems. New York: Basic Books; [1931] 1962.

Habermas, J. The Structural Transformation of the Public Sphere. T. Burger (tr.). Cambridge, MA: MIT Press; 1989.

Hegel, G.W.F. Science of Logic. A.V. Miller (tr.). Abingdon: Routledge; 2004.

Hobsbawm, E.J. The Age of Capital. London: Weidenfeld & Nicolson; 1975.

Hobsbawm, E.J. The Age of Empire 1875–1914. London: Weidenfeld & Nicolson; 1987.

Hobsbawm, E.J. The Age of Extremes: The Short Twentieth Century 1914–1991. London: Michael Joseph and Pelham Books; 1994.

Hodges, A. Alan Turing. In: Zalta E.N., ed. The Stanford Encyclopedia of Philosophy. Stanford, CA: Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University, 2009.

IBM Corporation. IBM Highlights, 1885–1969. http://www-03.ibm.com/ibm/history/documents/pdf/1885-1969.pdf, 2009 (accessed 19 January 2010).

Kneale, W., Kneale, M. The Development of Logic. Oxford: Oxford University Press; 1984.

Lee, J.A.N. Computer Pioneers. Los Alamitos: IEEE Computer Society Press; 1995.

Lenzen, W. Leibniz’s Logic. In: Handbook of the History of Logic, vol. 3: The Rise of Modern Logic from Leibniz to Frege. Amsterdam: North-Holland Publishing Company; 2004.

Metropolis, N.C., Howlett, J., Rota, G.C. A History of Computing in the Twentieth Century. Orlando: Harcourt Brace Jovanovich; 1980.

Nagel, E., Newman, J.R. Gödel’s Proof. London: Routledge & Kegan Paul; 1959.

O’Regan, G. A Brief History of Computing. London: Springer; 2008.

Reiter, R. On Closed World Data Bases. In: Ginsberg M.L., ed. Readings in Nonmonotonic Reasoning. San Francisco: Morgan Kaufmann Publishers, 1987.

Rorty, R. The Linguistic Turn: Essays in Philosophical Method. Chicago, IL: University of Chicago Press; 1992.

Smith, B. Basic Concepts of Formal Ontology. In: FOIS 1998: Proceedings of Formal Ontology in Information Systems. Amsterdam: IOS Press; 1998:19–28.

Sowa, J.F. Knowledge Representation: Logical, Philosophical, and Computational Foundations. Cambridge, MA: MIT Press; 2000.

Tarski, A. The Semantic Conception of Truth and the Foundations of Semantics. Philosophy and Phenomenological Research. 1944; 4.

Tarski, A. The Concept of Truth in Formalized Languages. In: Logic, Semantics, Metamathematics. Indianapolis, IN: Hackett Publishing Company; 1957.

van Heijenoort, J. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Cambridge, MA: Harvard University Press; 1967.

Whitehead, A.N., Russell, B. Principia Mathematica. Cambridge, UK: Cambridge University Press; 1910.

Wittgenstein, L. Tractatus Logico-Philosophicus. Mineola, NY: Dover Publications; 1921.
