3

The meaning of meaning: alternative disciplinary perspectives

Liam Magee

The sentence ‘Snow is white’ is true if, and only if, snow is white (Tarski 1957, p. 190).

A guiding thread through discussions of the semantic web is the very general notion of meaning—as something which can be variously reasoned over computationally, generated or processed cognitively, expressed linguistically and transmitted socially. In order to understand how this broad thematic can be conceived in relation to the specifically technological construct of the semantic web, here it is classified and discussed under the following ‘semantic’ rubrics or frames:

• linguistic semantics

• cognitive semantics

• social semantics

• computational semantics.

This chapter develops the introduction to the semantic web given in the previous chapter by reviewing these frames of how meaning is variously construed; it further serves as a preliminary schematisation of the kinds of variables used to organise, cluster and describe knowledge systems generally.

The first of the disciplinary frames surveyed below considers semantics as a subsidiary discipline of linguistics—as the study of meaning as it is expressed in language. A general discussion outlines some of the major conceptual distinctions in this field. Given the reliance of knowledge systems on formal languages of different kinds, work in the area of formal semantics is discussed specifically. Other kinds of research have been directed towards the interpretation and use of ordinary everyday language; theories in the related fields of hermeneutics and pragmatics are also reviewed briefly.

Another, closely related, frame concerns recent work conducted in cognitive science and psychology on concept formation, categorisation and classification. Examining recent models of cognition can provide clues as to possible causes and locations of incommensurability between conceptualisations made explicit in ontologies. Several recent theories have developed explicitly spatial models of mind and cognition, which provide helpful metaphorical support, at least, for a discussion of commensurability. A review of recent research in these fields is developed in the section on cognitive semantics.

As well as being amenable to algorithmic analysis, and being representations of cognitive phenomena and linguistic artefacts, ontologies are also social products—they are things produced and consumed within a broader marketplace of communicative practices. It is then useful also to look at social theoretic models of communication generally, to see how these bear on the specific concerns of ontology commensurability. Chapter 11, ‘On commensurability’, examines several social theorists in more detail, but here it is useful to survey a range of theoretical and empirical research conducted under the broad umbrella of the social sciences. Specifically, research in key fields—sociology of knowledge, studies of technology and science, knowledge management, IT standardisation and cross-cultural anthropology—helps to introduce certain concepts which emerge again in the development of the commensurability framework in Chapter 12, ‘A framework for commensurability’. A review of these fields is provided in the section on social semantics below.

Extensive research has been undertaken in the field of computer science, notably in the area of ontology matching, but also in related areas of ontology and database modelling and design. Much of this research focuses on developing improved algorithms for concept translation between ontologies; as noted in the introduction, relatively little attention has been paid to using background knowledge as a heuristic tool for augmenting ontology translation efforts. The section on computational semantics, below, surveys work in ontology matching, and also discusses related studies looking at ontology metrics and collaboration.

Finally, considerable work in philosophy of mind and language has been oriented towards problems of conceptual schemes, translatability and interpretation. However, this field is much too broad to survey even schematically here; Chapter 11, ‘On commensurability’, provides a further review of this tradition, within the specific context of outlining a theoretical background for a framework of commensurability.

Linguistic semantics

Semantics in language

As a subsidiary domain of linguistics, semantics is, as a textbook puts it, ‘the study of the systematic ways in which languages structure meaning’ (Besnier et al. 1992). Early in the history of linguistics, Ferdinand de Saussure established several foundational semantic distinctions: between signifier (a spoken or written symbol) and signified (a mental concept); and between sign (the combination of signifier and signified) and referent (the thing referred to by the sign) (Saussure 1986). Bloomfieldian research in the 1930s and 1940s emphasised structural, comparative and descriptive rather than semantic features of language; ironically, it was the advent of Chomskyan generative grammar in the 1950s which, in spite of emphasising syntax, again paved the way for a more explicit focus on semantics in the 1960s (Harris 1993). Since then, numerous kinds, branches and theories of semantics have emerged: generative semantics, formal semantics (applied to natural languages), lexical semantics, componential analysis, prototype and metaphor theories, ‘universal’ semantics, cognitive semantics, hermeneutics, pragmatics and various theories of translation—not to mention the general interest in semantic computer applications and platforms such as the semantic web.

Linguistic meaning can be studied through several different lexical units or levels: words, sentences, groups of sentences, discourse or text, and a corpus of texts (Besnier et al. 1992). At each level, different types of meaning can also be distinguished. In the classical essay ‘On Sense and Reference’, Frege ([1892] 1925) distinguishes what objects in the world words refer to—their extensional or denotative meaning—from how those words are defined by other words—their intensional or connotative meaning. More recent analyses of meaning build on this primary distinction; for example, Chierchia and McConnell-Ginet (2000) distinguish denotational (or referential) from psychologistic (or mentalistic) and social (or pragmatic) theories of meaning, while Leech (1981) proposes a total of seven types of meaning: conceptual meaning, connotative meaning, social meaning, affective meaning, reflected meaning, collocative meaning and thematic meaning. Denotational or conceptual meaning is regarded as primary in most mainstream semantic accounts; since this referring capacity of language is essential for other types of meaning to be possible, ‘it can be shown to be integral to the essential functioning of language in a way that other types of meaning are not’ (Leech 1981, p. 9).

Approaches to understanding natural language meaning even in a denotational sense vary considerably. Common approaches include componential analysis, where semantic units—typically, words—are given positive or negative markers against a set of ‘components’ or ‘dimensions’ of meaning (Burling 1964; Leech 1981), and lexical analysis, where relations of units to each other are defined according to a set of rules: synonymy/antonymy (units with the same/opposite meanings to other units); hypernymy/hyponymy (units which are superordinate/subordinate in meaning to other units); holonymy/meronymy (units which stand in relation of wholes/parts to other units); and homonymy/polysemy (units which have one/multiple definitions) (Besnier et al. 1992; Cann 1993).
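These lexical relations lend themselves naturally to a graph representation. The sketch below is a minimal, self-contained illustration of that idea; the vocabulary and relation names are invented for the example, not drawn from any particular lexicon such as WordNet:

```python
# Illustrative sketch: encoding the lexical relations named above as a
# small labelled graph over lexical units.
from collections import defaultdict

lexicon = defaultdict(lambda: defaultdict(set))

def relate(unit, relation, other):
    lexicon[unit][relation].add(other)

def relate_symmetric(unit, relation, other):
    # Synonymy and antonymy hold in both directions.
    relate(unit, relation, other)
    relate(other, relation, unit)

def relate_inverse(unit, relation, other, inverse):
    # Hypernymy/hyponymy and holonymy/meronymy mirror each other.
    relate(unit, relation, other)
    relate(other, inverse, unit)

relate_symmetric('big', 'synonym', 'large')
relate_symmetric('big', 'antonym', 'small')
relate_inverse('animal', 'hyponym', 'dog', inverse='hypernym')  # a dog is a kind of animal
relate_inverse('car', 'meronym', 'wheel', inverse='holonym')    # a wheel is a part of a car

print(lexicon['dog']['hypernym'])   # {'animal'}
print(lexicon['wheel']['holonym'])  # {'car'}
```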

As an early critic of componential analysis noted, underlying ‘dimensions’ of meaning are not immediately obvious—they need to be explicitly theorised (Burling 1964). One effort to develop a core set of shared concepts which underpin all languages is Goddard and Wierzbicka’s ‘natural semantic metalanguage’ (NSM). The ‘metalanguage’ proposes a highly abstracted lexical inventory of ‘semantic primes’ from which all lexical units in any language can be derived (Goddard 2002)—an idea related to Rosch’s ‘basic’ categories, discussed below. Generation of such primes requires a ‘trial-and-error’ approach of postulating prime candidates (Goddard 2002), and mapping their derivation from the metalanguage into various natural language forms (Goddard and Wierzbicka 2002). As the authors suggest, the process is time-consuming and highly speculative; yet, brought to fruition, it would provide a powerful device for, among other things, specifying unambiguous rules for the translation of concepts across natural languages (Wierzbicka 1980). As the case-study on upper-level ontologies shows, the effort to develop a metalanguage for natural languages has its direct analogue in equivalent efforts to develop a set of foundational or core concepts for formal knowledge representations—with much the same difficulties and trade-offs.

The remainder of this review of linguistic approaches to meaning moves in three directions, which roughly mirror the division suggested by Chierchia and McConnell-Ginet (2000): towards formal semantics, which seeks to describe meaning within a logic-based framework; towards hermeneutics, which understands meaning in a holistic and subjectively inflected way; and towards pragmatics, which understands meaning as a kind of social practice. Each of these approaches has important implications for the theory of commensurability developed here; while they may appear mutually contradictory, the aim here is instead to demonstrate broad lines of complementarity. As Chierchia and McConnell-Ginet (2000, p. 54) emphasise, in a related context:

We believe that these three perspectives are by no means incompatible. On the contrary, meaning has all three aspects (namely, the denotational, representational, and pragmatic aspects). Any theory that ignores any of them will deprive itself of a source of insight and is ultimately likely to prove unsatisfactory.

What is ‘representational’ here is also given more expansive treatment in the section ‘Cognitive semantics’, and what is termed ‘pragmatic’ is also discussed further in the section ‘Social semantics’. However, the hermeneutic and pragmatic traditions covered here provide the means for extending out from language towards those cognitive and social domains, and consequently provide important building blocks in the development of the theory of commensurability.

Formal semantics

A significant strain of semantic research arose from work conducted in logic and foundational mathematics in the early twentieth century—a tradition touched on in more, albeit still schematic, detail in Chapter 7, ‘An historical introduction to formal knowledge systems’. Within this tradition, ‘semantics’ is interpreted truth-conditionally—a statement’s meaning is given by the conditions under which the proposition it expresses is true. Formal semantics arose generally through the interests of logical positivism, but specifically through an ingenious response to the logical paradoxes which had beset the preceding generation of logicians in the early twentieth century. Tarski’s semantic conception of truth, first published in 1933, provided a ‘formally correct and materially adequate’ basis for describing the truth conditions of a proposition (Hodges 2008). One of Tarski’s innovations was to impose a condition on a formal language, L, that it cannot construct a sentence based on an existing sentence and the predicate ‘is true’—such a sentence can only be constructed in a metalanguage, M, which contains all of the sentences of L and the additional ‘is true’ predicate. Consequently, a paradoxical statement like ‘this sentence is false’ becomes nonsensical—to make sense, it must be split into two sentences, the first of which contains the sentence under consideration in the object language L, and the second of which defines the truth value of the first in the metalanguage M (Cann 1993; Hodges 2008). For Tarski, at least, this conception could apply to the kinds of statements common to the sciences:

At the present time the only languages with a specified structure are the formalized languages of various systems of deductive logic, possibly enriched by the introduction of certain non-logical terms. However, the field of application of these languages is rather comprehensive; we are able, theoretically, to develop in them various branches of science, for instance, mathematics and theoretical physics (Tarski 1957, p. 8).
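Schematically, the resulting ‘Convention T’ can be stated as follows. This is a standard textbook rendering rather than Tarski’s own 1933 notation; each instance of the schema belongs to the metalanguage M, while the named sentence belongs to the object language L:

```latex
% Convention T, as a schema: for each sentence \phi of the object
% language L, the metalanguage M must yield the corresponding instance.
\mathrm{True}_{L}(\ulcorner \phi \urcorner) \;\longleftrightarrow\; \phi
% Canonical instance: 'Snow is white' is true in L if, and only if,
% snow is white.
```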

The formal semantic conception of truth strongly influenced Quine, Davidson and Popper, among others. Although directed specifically towards formal languages, Tarski’s Convention T was first applied to natural languages by Davidson (2006). More systematic accounts of natural language as a derivative of formal logic, where sentential parts sit as truth-bearing components within a sentential whole, were developed by Tarski’s student Montague (1974), followed by Dowty (1979), Partee (2004) and Kao (2004). The guiding insight of formal semantics was the ‘principle of compositionality’: ‘The meaning of an expression is a monotonic function of the meaning of its parts and the way they are put together’ (Cann 1993, p. 4).
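A toy sketch can make the principle concrete: given fixed word denotations and a single combination rule, the truth value of a sentence is computed entirely from its parts. The miniature model below (a predicate extension and a subject-copula-adjective grammar) is invented purely for illustration:

```python
# Toy compositional semantics: the meaning (here, a truth value in a fixed
# model) of a sentence is computed only from the denotations of its parts
# and their mode of combination.

white_things = {'snow'}  # the model: which entities 'white' holds of

denotation = {
    'snow': 'snow',                        # nouns denote entities
    'coal': 'coal',
    'white': lambda x: x in white_things,  # adjectives denote predicates
    'is': lambda subj, pred: pred(subj),   # 'is' applies predicate to subject
}

def interpret(sentence):
    """Compose word meanings by a fixed Subject-'is'-Adjective rule."""
    subject, copula, adjective = sentence.split()
    return denotation[copula](denotation[subject], denotation[adjective])

print(interpret('snow is white'))  # True
print(interpret('coal is white'))  # False
```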

While Chomsky’s generative grammar demonstrated how sentences could be syntactically ‘put together’, by the end of the 1960s rival generative (Lakoff, McCawley, Postal and Katz) and interpretivist (Jackendoff and Chomsky) semantic movements had as yet yielded no prevailing paradigm for explaining how the meaning of sentential parts—individual words, as well as noun and verb phrases, for example—could determine the meaning of the sentence as a whole (Harris 1993). Montague grammar sought to provide a unified theory for the syntax and semantics of both natural and artificial languages (Kao 2004). In ‘The Proper Treatment of Quantification in Ordinary English’ (1974), the seminal account of such a theory, Montague presents a syntax of a fragment of English, a form of ‘tensed intensional logic’ derived from Kripkean possible world semantics, and, finally, rules for translating a subset of English sentences into the intensional logic. The role of the intensional logic is to handle certain classes of ‘complex intensional locutions’ using ‘intensional verbs’. For example, the truth value of a sentence containing the verb ‘seeks’ can vary depending on the verb complement—resolving the truth value means knowing the state of affairs which pertains within a possible world at a particular point in time (Forbes 2008). By demonstrating how it was possible to translate a large group of natural language sentences into disambiguated propositions, analysable into parts with truth conditions and able to stand as premises in logical inferences, Montague opened rich possibilities for further research in formal semantics (Kracht 2008; Partee 2004).

Formal semantics inspired by Tarski’s model theory has also been used in the construction of syntactically well-formed and semantically interpretable artificial languages for knowledge representation, including the languages of the semantic web, the Resource Description Framework (RDF) and the Web Ontology Language (OWL) (Hayes 2004; Hayes, Patel-Schneider and Horrocks 2004). The somewhat arcane origins of this ‘semantic’ epithet—by way of the abstractions of model theory and description logics—have led, perversely, to several varying interpretations of the semantic web itself: as a process of incremental technological adaptation, or as a wholesale revolution in how knowledge is produced, represented, disseminated and managed. Both the development and subsequent interpretations of the semantic web are described in more detail in Chapter 7, ‘An historical introduction to formal knowledge systems’.
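The flavour of this model-theoretic machinery can be suggested with a minimal sketch. RDF reduces all data to subject-predicate-object triples, and the RDF Schema semantics licenses entailments such as the transitivity of rdfs:subClassOf; the few lines below illustrate just that one rule (the example classes are invented, and this is not an implementation of the full specification):

```python
# Minimal sketch of one RDFS entailment rule: rdfs:subClassOf is
# transitive, so new triples can be derived until a fixed point is reached.
SUBCLASS = 'rdfs:subClassOf'

triples = {
    ('ex:Dog', SUBCLASS, 'ex:Mammal'),
    ('ex:Mammal', SUBCLASS, 'ex:Animal'),
    ('ex:Animal', SUBCLASS, 'ex:LivingThing'),
}

def rdfs_subclass_closure(graph):
    """Apply the transitivity rule until no new triples are derivable."""
    graph = set(graph)
    while True:
        derived = {
            (a, SUBCLASS, d)
            for (a, p1, b) in graph if p1 == SUBCLASS
            for (c, p2, d) in graph if p2 == SUBCLASS and b == c
        }
        if derived <= graph:
            return graph
        graph |= derived

closure = rdfs_subclass_closure(triples)
assert ('ex:Dog', SUBCLASS, 'ex:LivingThing') in closure  # entailed, not asserted
```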

Hermeneutics and semantics

Hermeneutics predates the scientific study of semantics described above by some historical distance, originating in German Enlightenment philosophy in the eighteenth and early nineteenth centuries (Mueller-Vollmer 1988). Etymologically derived from the Greek for ‘translate’ or ‘interpret’, it is similarly concerned with meaning in a very general sense. In its earliest incarnations, the aims of hermeneutics were broadly sympathetic with later waves of the epistemologically ambitious programs of logical positivism and the semantic web:

Finally, with the desire of Enlightenment philosophers to proceed everywhere from certain principles and to systematize all human knowledge, hermeneutics became a province of philosophy. Following the example of Aristotle… Enlightenment philosophers viewed hermeneutics and its problems as belonging to the domain of logic (Mueller-Vollmer 1988, p. 3).

In the nineteenth and twentieth centuries, under the various influences of Romanticism, secularism, materialism, vitalism and phenomenology, hermeneutic studies became oriented towards psychological, historical and subjective aspects of interpretation. Although treatment of hermeneutics differs from author to author, it can be distinguished from semantics in being:

• oriented more towards the holistic meaning of texts, rather than the individual meaning of smaller linguistic units such as sentences or words

• focused on historical and humanist explanations of interpretation rather than scientific and objective, truth-functional accounts

• more closely connected with traditional approaches to language—rhetoric, grammar, biblical exegesis and language genealogy—than semantics (Leech 1981; Mueller-Vollmer 1988)

• directed towards the internal rather than external ‘side of our use of signs’—towards how signs are understood, rather than, conversely, how concepts can be signified (Gadamer 2004)

• interested in disruptive semantic features of meaning—ambiguity, paradox and contradiction are not features to be ‘explained away’, but rather are intrinsic characteristics of an account of meaning.

Twentieth-century philosophers working in the hermeneutic tradition have also pointed to a necessary structural relationship between holistic understanding and atomistic interpretation, depicted by the ‘hermeneutic circle’ (Gadamer 1975; Heidegger 1962). It describes a virtuous rather than vicious circular pattern of learning in relation to a text, discourse or tradition—as individual parts are interpreted, so the understanding of the whole becomes clearer; and as the text as a whole is better understood, so new parts are better able to be interpreted. This broad structure describes, writ large, the kind of complementarity discussed earlier between two approaches to aligning ontologies: ontology matching—translating atomic concepts found within them—and assessing commensurability—comparing holistically the conceptual schemes underlying them. Similarly, where atomic interpretation works from the explicit features of the ontologies themselves, developing a holistic understanding also engages the implicit assumptions and commitments held by those who design and use them.

Although typically directed towards historical, literary or philosophical texts, then, hermeneutics makes several distinctions and claims which can equally be applied to the interpretation of ontologies and other information systems—as texts of a different sort. Whereas ambiguity in poetical texts is often intentional, it is an unhelpful side-effect of tacit assumptions in the case of ontologies—and a ‘broad-brush’ hermeneutic orientation, which seeks to examine texts against the background of historical and contextual conditions, can be useful in making such assumptions explicit.

Pragmatic semantics

Linguistic pragmatics considers meaning a function of how words and phrases are used, a view famously captured in the Wittgensteinian dictum that ‘the meaning of a word is its use in the language’ (Wittgenstein 1967). According to this view, language, as well as being a repository of lexical items arranged according to various syntactic rules, primarily functions as a tool in the hands of linguistically capable agents. Utterances can be understood as acts, and are best analysed in terms of their effects. The meaning of words cannot be abstracted from their embeddedness in utterances, in the broader situational context in which those utterances are made, and in the practical effects they produce. Assertoric statements—of the kind analysed by formal semantics, whose semantic import could be judged by the truth value of their propositional contents—are an unremarkable region in the broader landscape of linguistic utterances, which can be analysed against several other functional vectors (Austin [1955] 1975). Unlike formal semantics, pragmatics focuses on a broader class of linguistic phenomena; unlike hermeneutics, this focus is less directed towards subjective interpretation, and more towards the social and intersubjective aspects of language use: speech acts, rules, functions, effects, games, commitments, and so on.

In the 1950s, Austin ([1955] 1975), Wittgenstein (1967), Quine ([1953] 1980) and Sellars ([1956] 1997) collectively mounted a series of what can best be described as pragmatically inflected critiques against the naive empiricism embedded within a logical positivist view of meaning, a view which can still be traced in the formal semantics tradition. Through a subsequent generation of philosophers, linguists and cognitive scientists, these critiques presented a range of new perspectives for understanding how semantic concepts are organised and used within a broader landscape of social practice.

In his landmark text How to do Things with Words, Austin ([1955] 1975) discusses sentences whose functional role in discourse is distinct from that of assertional statements—that is, sentences which are not directly decomposable into propositional form. Austin directs attention particularly towards ‘performatives’—utterances which do things. Unlike descriptive statements, which can be analysed in terms of truth content, such performative sentences have to be assessed against different criteria: whether they are successful in their execution, or in Austin’s vocabulary, ‘felicitous’ ([1955] 1975). He eventually introduces a trichotomous schema to characterise how a sentence functions:

• as a locutionary act—’uttering a certain sentence with a certain sense and reference’

• as an illocutionary act—’informing, ordering, undertaking, &c., i.e. utterances which have a certain (conventional) force’

• as a perlocutionary act—’what we bring about or achieve by saying something, such as convincing, persuading, deterring…’.

Searle (1969) elaborates a further, extended and systematic account of various kinds of such speech acts, as well as various means for understanding their function and effects in discourse.

Wittgenstein’s account represents, in turn, a yet more radical departure from a view which sees statement making as the canonical and primary function of language—a view which was moreover emphatically outlined in his own earlier work (Wittgenstein 1921). Understanding language meaning, again, means understanding how it is used in practice. Wittgenstein introduces the idea of language ‘games’ to direct attention to the role utterances play as quasi-‘moves’ within them. Such games need not have winners, losers or even outcomes—what is distinctive about any linguistic situation is that it conforms to rules understood—at least partially—by its interlocutors. Despite its presentation in an elliptical and short text, Wittgenstein’s later work has been immensely influential. As well as motivating Rosch’s studies of prototypes and family resemblances (Lakoff 1987; Rosch 1975), his work has also been influential in a range of intersecting disciplines: Toulmin’s analysis of rhetoric and argumentation; Geertz’s phenomenological anthropology, with an attentiveness to ‘thick’ description and language games (Geertz 2005); and different strains of French philosophy and social theory, stretching across Bourdieu (1990), Lyotard (1984) and Latour (2004).

Sellars’ critique broadly echoes those of Austin, Wittgenstein and Quine, but targets empiricism more explicitly and directly (Sellars [1956] 1997). The ‘Myth of the Given’, like the ‘Two Dogmas of Empiricism’, forms part of the backdrop to the more recent pragmatist philosophy of Rorty, McDowell and Brandom. Sentential or propositional meaning is not ignored in these accounts. In Making It Explicit, for example, Brandom develops a monumental theoretical apparatus which connects a fine-grained analysis of assertions—the ground left behind by previous pragmatist accounts—with the social game of, as he puts it, ‘giving and asking for reasons’ (Brandom 1994). His brand of ‘analytic pragmatism’ provides one of the foundations for the ontology commensurability framework presented further on, and is discussed in more detail in Chapter 11, ‘On commensurability’.

Several implications for the commensurability framework developed here can be drawn from a general pragmatist orientation towards language. First, the main unit of pragmatic analysis is the sentential utterance rather than the word—a focus on the whole rather than the part. Second, a pragmatic treatment of meaning needs to be attentive not only to the utterance itself, but also to the situational context in which it is made—who is speaking, who is listening, what conventions are in operation, in what sequence utterances are made, what effects utterances produce, and so on. Third, definitional meanings can be understood not only as compositions of atomic parts—a series of denotations—but also as codifications of convention—a bundle of connotations, associations and cultural practices. Fourth, an utterance can be understood simultaneously at variegated, multi-dimensional levels of abstraction—as a direct assertion, a move in a dialogical sequence, a tactical move in a language game, or an act which conforms to understood social norms and conventions.

At first glance, a pragmatist orientation might appear irrelevant to the interpretation of ontologies and other information schemes. After all, knowledge systems are developed in formal languages precisely to sidestep the kinds of ambiguities and ‘infelicities’ which plague natural language. While systems are prima facie expressions of definitions and assertions, however, they also serve other discursive roles—to persuade, convince, insinuate and facilitate further negotiation. Moreover they are used in quite specific social language games—as ‘tokens’ in various kinds of political and economic games, for example. Interpreting knowledge systems pragmatically therefore means understanding not only their explicit commitments, but also the roles they play in these extended language games. What are they developed and used for? What motivates their construction—as very particular kinds of utterance? How are they positioned relative to other systems—what role do they play in the kinds of games played out between organisational, governmental, corporate and departmental actors, for example? Pragmatism therefore provides a useful ‘step up’ from viewing ontologies as representations of conceptual schemes to viewing them as social products—’speech acts’, ‘utterances’ and ‘moves’ in very broadly conceived language games. It also underlines the contextual relevance of commensurability assessments themselves—that interpretation and translation are also linguistic acts, performed for particular purposes and goals.

Cognitive semantics

Theories of categorisation

One way of considering knowledge systems is as formal mechanisms for classifying and categorising objects. Graphically, a typical ontology resembles a hierarchical taxonomy—though, technically, it is a directed acyclic graph, meaning that concepts can have more than a single ‘parent’ as well as multiple ‘siblings’ and ‘children’. (Ontologies can also support other sorts of conceptual relations, but the relationship of subsumption, like several other relations, is axiomatised directly into the semantics of OWL.) In such systems, concept application relies on objects meeting necessary and sufficient conditions for class membership. This general model accords well with the broad tradition of category application stretching back to Aristotle. However, ontologies are intended to be machine-oriented representations of conceptualisations, with only an analogical relation to mental cognitive models. What, then, can be gleaned from contemporary theories of categorisation?

Since the 1960s, alternative models have been proposed for how mental concepts are organised and applied. Like ontologies, semantic networks, pioneered by Quillian (1967), model cognitive conceptual networks as directed graphs, with concepts connected by one-way associative links. Unlike ontologies, these links do not imply any logical (or other) kind of relation between the concepts—only that a general association exists. Semantic networks were adapted for early knowledge representation systems, such as frame systems, which utilise the same graphic structure of conceptual nodes and links: ‘We can think of a frame as a network of nodes and relations’ (Minsky 1974). Minsky also explicitly notes the similarity between frame systems and Kuhnian paradigms—what results from the construction of a frame system is a viewpoint on a slice of the world. By extension, semantic networks can be viewed as proto-paradigms in the Kuhnian sense, though it is not clear what the limits between one network and another might be—this analogy should not, then, be over-strained.

A feature of semantic networks is the lack of underlying logical formalism. While Minskian frame systems and other analogues in the 1970s were ‘updated’ with formal semantic layers, notably through the development of description logics in the 1980s, according to Minsky the lack of formal apparatus is a ‘feature’ rather than a ‘bug’—the imposition of checks on consistency, for example, imposes an unrealistic constraint on attempts to represent human kinds of knowledge, precisely because humans are rarely consistent in their use of concepts (Minsky 1974). At best they are required to be consistent across a localised portion of their cognitive semantic network, relevant to a given problem at hand, and the associated concepts and reasoning required to handle it. Similarly, the authors of semantic network models note the difficulty in assuming neatly structured graphs model mental conceptual organisation: ‘Dictionary definitions are not very orderly and we doubt that human memory, which is far richer, is even as orderly as a dictionary’ (Collins and Quillian 1969). Semantic networks represent an early—and enduring—model of cognition, which continues to be influential in updated models such as neural networks and parallel distributed processing (Rogers and McClelland 2004). Such networks also exhibit two features of relevance to the theory adopted here: first, the emphasis on structural, connectionist models of cognition—that concepts are not merely accumulated quantitatively as entries in a cognitive dictionary, but are also interconnected, so that the addition of new concepts makes a qualitative difference in how existing concepts are applied; and second, the implied coherence of networks, which suggests concepts are not merely arranged haphazardly but form coherent and explanatory schemes or structures.
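The associative (rather than logical) character of such links can be made concrete with a spreading-activation pass over a small association graph, loosely in the spirit of Quillian's model; the graph, decay factor and threshold below are invented for illustration:

```python
# Illustrative spreading activation over an associative network: links
# carry no logical force, only an association along which activation decays.
network = {
    'canary': ['bird', 'yellow', 'sings'],
    'bird':   ['canary', 'wings', 'animal'],
    'animal': ['bird', 'skin'],
}

def spread(source, decay=0.5, threshold=0.1):
    """Propagate activation outward from a source concept."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour in network.get(node, []):
                a = activation[node] * decay
                if a > threshold and a > activation.get(neighbour, 0.0):
                    activation[neighbour] = a
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

print(spread('canary'))
# {'canary': 1.0, 'bird': 0.5, 'yellow': 0.5, 'sings': 0.5,
#  'wings': 0.25, 'animal': 0.25, 'skin': 0.125}
```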

In the mid-1970s prototype theory, another cognitive model, was proposed for describing concept use. Building on Wittgenstein’s development of ‘language games’ (Wittgenstein 1967), Rosch (1975) demonstrated through a series of empirical experiments that the process of classifying objects under conceptual labels is generally not undertaken by looking for necessary and sufficient conditions for concept-hood. Rather, concepts are applied based on similarities between a perceived object and a conceptual ‘prototype’—a typical or exemplary instance of a concept. Possession of necessary and sufficient attributes is a weaker indicator for object inclusion within a category than the proximity of the values of particularly salient attributes—markers of family resemblance—to those of the ideal category member. For example, a candidate dog might be classified so by virtue of the proximity of key perceptual attributes to those of an ideal ‘dog’ in the mind of the perceiver—fur, number of legs, size, shape of head, and so on. Applying categories on the basis of family resemblances rather than criterial attributes suggests that, at least in everyday circumstances, concept application is a vague and error-prone affair, guided by fuzzy heuristics rather than strict adherence to definitional conditions. Also, by implication, concept application is part of learning—repeated use of concepts results in prototypes which are more consistent with those used by other concept users. This would suggest a strong normative and consensual dimension to concept use. Finally, Rosch (1975) postulated that there exist ‘basic level semantic categories’, containing concepts most proximate to human experience and cognition. Superordinate categories have fewer common features, while subordinate categories have fewer contrastive features—hence basic categories tend to be those with more clearly identifiable prototypical instances, and so tend to be privileged in concept learning and use.
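Computationally, Rosch's proposal can be caricatured as graded similarity to a stored prototype rather than a boolean test of criterial attributes. The features, salience weights and cut-off below are invented purely for illustration:

```python
# Illustrative prototype-based categorisation: membership is graded
# similarity to an exemplary instance, not a necessary-and-sufficient test.
import math

# A prototypical 'dog', with salient perceptual attributes and weights.
DOG_PROTOTYPE = {'legs': 4, 'height_cm': 50, 'furriness': 0.9}
SALIENCE = {'legs': 3.0, 'height_cm': 0.02, 'furriness': 2.0}

def similarity(candidate, prototype=DOG_PROTOTYPE, salience=SALIENCE):
    """Exponentially decaying similarity in weighted feature distance."""
    distance = sum(
        salience[f] * abs(candidate[f] - prototype[f]) for f in prototype
    )
    return math.exp(-distance)

def is_dog(candidate, cutoff=0.3):
    return similarity(candidate) >= cutoff

spaniel = {'legs': 4, 'height_cm': 45, 'furriness': 0.95}
heron   = {'legs': 2, 'height_cm': 90, 'furriness': 0.0}

print(similarity(spaniel), is_dog(spaniel))  # ~0.82 -> True
print(similarity(heron), is_dog(heron))      # ~0.0002 -> False
```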

While semantic network and prototype models provide evocative descriptive theories that seem to capture more intuitive features of categorisation, they provide relatively little causal explanation of how particular clusters of concepts come to be organised cognitively. Several new theories were developed in the 1980s with a stronger explanatory emphasis (Komatsu 1992). Medin and Schaffer (1978), for example, propose an exemplar-based ‘context’ theory as a rival to prototype theory, which eschews the inherent naturalism of ‘basic level’ categorial identification for a more active role of cognition in devising ‘strategies and hypotheses’ when retrieving memorised category exemplar candidates. Concept use, then, involves agents not merely navigating a conceptual hierarchy or observing perceptual family resemblances when they apply concepts; they are also actively formulating theories derived from the present context, and drawing on associative connections between concept candidates and other associated concepts. In this model, concept use resembles scientific theorising; in later variants, the model becomes ‘theory theory’ (Medin 1989). As one proponent puts it:

In particular, children develop abstract, coherent systems of entities and rules, particularly causal entities and rules. That is, they develop theories. These theories enable children to make predictions about new evidence, to interpret evidence, and to explain evidence. Children actively experiment with and explore the world, testing the predictions of the theory and gathering relevant evidence. Some counter-evidence to the theory is simply reinterpreted in terms of the theory. Eventually, however, when many predictions of the theory are falsified, the child begins to seek alternative theories. If the alternative does a better job of predicting and explaining the evidence it replaces the existing theory (Gopnik 2003, p. 240).

Empirical research on cognitive development in children (Gopnik 2003) and cross-cultural comparisons of conceptual organisation and preference (Atran et al. 1999; Medin et al. 2006; Ross and Medin 2005) have shown strong support for ‘theory theory’ accounts. Quine’s view of science as ‘self-conscious common sense’ provides a further form of philosophical endorsement of this view.

For the purposes of this study, a strength of the ‘theory theory’ account is its orientation towards conceptual holism and schematism—concepts do not merely relate to objects in the world, according to this view (although assuredly they do this too); they also stand within a dynamic, explanatory apparatus, with other concepts, relations and rules. Moreover, theories are used by agents not only to explain phenomena to themselves, but also to others; concept use thus has a role both in one’s own sense making of the world, and in how one describes, explains, justifies and communicates with others. In short, concepts are understood as standing not only in relation to objects in the world, as a correspondence theory would have it; they stand in relation to one another, to form at least locally coherent mental explanations; and they also bind together participating users into communities and cultures. The account presented here similarly draws on supplemental coherentist and consensual notions of truth to explain commensurability.

Semantics and the embodied mind

Various other influential cognitive models have also been proposed. Drawing together several diverse theoretical strains—generative semantics, phenomenology and Rosch’s earlier work—Lakoff and Johnson (1980) suggest that analogical and associative processes of metaphorisation are central to describing concept use and organisation. Eschewing logically derived models popular in the 1960s and 1970s, Lakoff and Johnson contend that conceptualisation is at least strongly influenced, if not causally determined, by the cognitive agent’s physical and cultural orientation. Rational minds are therefore subject to a kind of phenomenological embeddedness within a physical and cultural world—even the most abstract conceptualisations can be shown to ‘borrow’, in the form of metaphorical structures, from the perceptual and intersubjective worlds we inhabit. To take one of the case study examples presented later, ‘upper-level’ or ‘foundational’ ontologies are so-called because ‘upper’ refers to the head, the sky or the heavens—the phenomenological locus of conceptual abstraction—while ‘foundational’, though physically inverted, refers to structural support, substance—again, the phenomenological and etymological locus of conceptual ‘depth’. Lakoff and Johnson seek to explain not only individual concept use by this kind of metaphorical reduction, but also larger conceptual clusters, which when transposed from the immediate and physical to some more abstract field provide a means of understanding that field economically and coherently.

Lakoff and others develop tantalising glimpses of a metaphorical account of cognition, grounded in the ‘embodied mind’, in several subsequent works, notably Lakoff (1987), Dennett (1991) and Varela, Thompson and Rosch (1992). In part Lakoff’s critique—as with Rosch’s—is directed towards a mechanistic or computational theory of mind, which views cognition as a series of abstract operations which could conceivably be replicated on any suitable hardware—biological or otherwise. Implied in this view is a form of Cartesian mind–body dualism, a false dualism according to Lakoff (1987) and Dennett (1991). What can be extracted from these kinds of critique is a cautionary and corrective view that sees cognition as irretrievably bound to a physically and socially embedded agent, intent on making sense of new experience by drawing on an existing reserve of culturally shared, coherent conceptual constructs. Above all, both conceptual and physical experiences here are firmly oriented within a series of ‘cultural presuppositions’, as Lakoff and Johnson suggest:

In other words, what we call ‘direct physical experience’ is never merely a matter of having a body of a certain sort; rather, every experience takes place within a vast background of cultural presuppositions. It can be misleading, therefore, to speak of direct physical experience as though there were some core of immediate experience which we then ‘interpret’ in terms of our conceptual systems. Cultural assumptions, values, and attitudes are not a conceptual overlay which we may or may not place upon experience as we choose. It would be more correct to say that all experience is cultural through and through, that we experience our ‘world’ in such a way that our culture is already present in the very experience itself (Lakoff and Johnson 1980, p. 57).

Lakoff and Johnson’s metaphorical model, while clearly capturing some part of the way concepts are transferred over domain boundaries, nonetheless suffers from a problem of theoretical indeterminacy—it fails to account for why some metaphors are used and not others. Moreover, it arguably does not give sufficient agency to concept users—under their theorisation of cognition, it is not clear how concept users are any more than passive adopters of a shared collective cultural heritage. The creative use of metaphor—much less the range of other, non-metaphorical linguistic actions, such as various forms of deductive, inductive or abductive reasoning—is not explicitly treated in their account.

Geometries of meaning—the re-emergence of conceptual structure

In part to gather up traditional and more progressive theories of categorisation, several more recent models have been proposed. These are at once highly systematic, and tolerant of the problems of vagueness and fuzziness which had plagued older logistic approaches. Gardenfors (2000), for instance, proposes a sophisticated geometric model of cognition, which blends together more conventional cognitive elements—concepts, properties, relations, reasoning—with some of the suggestive elements proposed by Lakoff and others. Rogers and McClelland (2004) put forward what they term a ‘parallel distributed processing’ account of semantic cognition, which builds on the descriptive and explanatory strengths of ‘prototype’ and ‘theory’ theories, while attempting to remedy their defects. Goldstone and Rogosky (2002) propose an algorithmic approach to translation across what they call a ‘conceptual web’, presupposing holistic conceptual schemes and a quantifiable notion of semantic distance separating concepts within and across schemes. Although these recent accounts are themselves quite different in approach and findings, they share a greater willingness to use computational and geometrically inspired models to explore feasible modes of cognitive activity.
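The intuition behind Goldstone and Rogosky's proposal, in particular, can be sketched briefly: if each scheme locates its concepts as points in a space, concepts can be matched across two schemes purely from their intra-scheme distance patterns, without presupposing a shared vocabulary. The coordinates and brute-force search below are an illustrative simplification of their algorithm, not a reimplementation of it:

```python
# Illustrative cross-scheme alignment: match concepts between two
# conceptual 'spaces' by comparing within-scheme distance patterns alone.
from itertools import permutations

def distances(space):
    """Pairwise Euclidean distances between a scheme's concepts."""
    names = sorted(space)
    matrix = [
        [sum((a - b) ** 2 for a, b in zip(space[x], space[y])) ** 0.5
         for y in names]
        for x in names
    ]
    return names, matrix

def align(space_a, space_b):
    """Brute-force: pick the mapping that best preserves pairwise distances.
    Assumes both schemes contain the same number of concepts."""
    names_a, d_a = distances(space_a)
    names_b, d_b = distances(space_b)
    n = len(names_a)
    best, best_cost = None, float('inf')
    for perm in permutations(range(n)):
        cost = sum((d_a[i][j] - d_b[perm[i]][perm[j]]) ** 2
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_cost = cost
            best = {names_a[i]: names_b[perm[i]] for i in range(n)}
    return best

# Two schemes with different labels but a similar geometry of concepts.
scheme_1 = {'hound': (0.0, 0.0), 'feline': (1.0, 0.0), 'berry': (5.0, 5.0)}
scheme_2 = {'chien': (0.1, 0.0), 'chat': (1.1, 0.1), 'baie': (5.2, 4.9)}
print(align(scheme_1, scheme_2))
# {'berry': 'baie', 'feline': 'chat', 'hound': 'chien'}
```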

These kinds of studies are evidence of a kind of resystematisation taking place in the cognitive sciences, as (qualified) structural models once more come to the fore. Unsurprisingly, such models are also well suited to describing representations of conceptual systems in ontologies and schemas. At the same time, the model presented by Gardenfors (2000) in particular can be reconciled with the kinds of experiential phenomenology and cultural embeddedness which feature in the work of Lakoff and Johnson (1980). Chapter 11, ‘On commensurability’, employs Gardenfors’ model of cognition directly in relation to the question of commensurability, connecting it to the pragmatist-infused account of language offered by Brandom, and the more general social theory of Habermas, as the basis for generating comparative views of conceptual schemes (‘conceptual spaces’ in Gardenfors’ vocabulary) across different cognitive, linguistic and cultural tiers.

Social semantics

The following section aims to sample some of the prevailing paradigms of sociological theory and research in relation to semantics, and specifically in its intersection with technological kinds of meaning formation.

Sociology of knowledge

The semantic models put forward so far consider the creation and dissemination of meaning to be first and foremost a concern of individual rational agents, in which the influence of culture is a secondary and frequently distorting feature. The sociology of knowledge, following Marx and Nietzsche—both suspicious of knowledge’s purported independence from its conditions of production—attempts to explain how different epistemic ‘perspectives’ emerge (Mannheim [1936] 1998). A more modern rendering describes sociology of knowledge as an inquiry into how knowledge is constituted or constructed within a social or cultural frame of reference (Hacking 1999). That is, knowledge is taken within such inquiry as not only trivially social—in the sense that it typically involves more than a single actor—but also, and fundamentally, as a product of social forces and relations. While epistemological inquiry has always sought to understand the role of external influences (usually negative ones) on the development of knowledge—even Socrates’ attack on the sophists can be read in this vein—there is nevertheless a specific trajectory that can be traced across twentieth-century thought. This trajectory leads from Mannheim’s Ideology and Utopia in the 1930s (Mannheim [1936] 1998), through a revival in Kuhn, Foucault, Bloor and Latour in the 1970s (Bloor [1976] 1991; Foucault 1970; Kuhn 1970; Latour and Woolgar [1979] 1986), up to a flurry of present-day sociological interest in the natural and social sciences—most notably in the field of science and technology studies (Hacking 1999; Keller 2005).

Sociology of knowledge proponents are frequently accused of ‘relativising’ knowledge, making it little more than a circumstantial side-effect of social or cultural context (Davidson 2006; Pinker 1995). As several authors note (Bloor 1991; Hacking 1999, 2002), however, discussing social constructions of knowledge need not imply adoption of a relativising stance. Rather, it can involve understanding why particular problems of knowledge—and solutions—might present themselves at various times. Moreover, such an approach can be open to a two-way, dialectic relationship between knowledge and society—arguing that society (however broadly construed) is formed through the production of ideas, theories, facts and statements just as much as knowledge artefacts are formed by societal influences. Treating knowledge as a social construct need not therefore be a one-way descent into epistemic relativism, in which all ‘facts’ can merely be explained away as by-products of cultural forces, power relations or other social entities.

Applied to contemporary knowledge representation systems, as a general rubric, sociology of knowledge has much to recommend it. It opens a way to investigate not only which perspectives emerge in relation to some domain of knowledge, but also how those perspectives are coordinated with respect to each other—how, for instance, one perspective within a given domain can cause others to emerge as well, in opposition, or sometimes, perhaps, in an uneasy alliance. It also provides a convenient lexicon to move from technical discussions of ontologies—as knowledge artefacts—through to a general cultural field of perspectives, orientations, world views and standpoints. Moreover it suggests that it is possible, when looking at an object like a formal knowledge system through a sociological-historical lens, to see it neither in strictly idealist terms (as intellectual history pure and simple, as the production of useful theorems and theses by enlightened minds), nor in strictly materialist terms (as the net effect of particular economic, political, governmental or military forces), but rather as something like the dialectical and probabilistic outgrowth of both idealistic and materialistic forces. As the case studies demonstrate, while ascribing direct causal influence on the production of knowledge systems is notoriously difficult, it is still possible to paint a plausible—and epistemologically defensible—portrait of the complex interplay of these forces.

Swidler and Arditi (1994) note that considerable attention has been devoted to a ‘sociology of informal knowledge’. As the survey of science and technology studies below shows, there have been many studies of various kinds of ‘formal knowledge’ too. However, the term ‘sociology of formal knowledge’ is an apt epithet for the kind of approach adopted here—the study of the perspectival standpoints which underpin formal knowledge systems, where ‘formal knowledge system’ specifically means the encoding of some knowledge (a series of concepts, relations and individual objects) in a formal language. One of the claims of the study is that studying knowledge systems as social or cultural artefacts is not only a matter of interest to a sociologist of knowledge, but also provides practical guidance to a systems analyst faced with ‘day-to-day’ problems of conceptual translation across perspectival divides—indeed the claim suggests, perhaps, that these two disciplinary roles increasingly converge in ‘networked societies’ (Castells 1996) where the technological and the anthropological are inseparably intertwined.

A fitting example of one such ‘intertwining’ is the term ‘ontology’ itself. Although it is introduced in its computer science appropriation in Chapter 2, ‘Frameworks for knowledge representation’, in its modernised philosophical sense ‘ontology’ can be understood as the historical and cultural ground against which conceptualisations are developed. This view of ontology, freeing it from its metaphysical roots as the study of ‘what is’, is succinctly encapsulated by Hacking:

Generally speaking, Foucault’s archaeologies and genealogies were intended to be, among other things, histories of the present… At its boldest, historical ontology would show how to understand, act out, and resolve present problems, even when in so doing it generated new ones. At its more modest it is conceptual analysis, analyzing our concepts, but not in the timeless way for which I was educated as an undergraduate, in the finest tradition of philosophical analysis. That is because the concepts have their being in historical sites. The logical relations among them were formed in time, and they cannot be perceived correctly unless their temporal dimensions are kept in view (Hacking 2002, pp. 24–25).

In Chapter 11, ‘On commensurability’, some of the ‘sociologists of knowledge’ introduced here—in particular, Kuhn, Foucault and Hacking—are discussed in more detail. For now, this introduction is intended to demonstrate how the tradition of the sociology of knowledge strongly informs the approach adopted in this study. It also suggests that an analysis of commensurability aims, ideally, to shed light on the historical and cultural conditions—to the degree that these can be ascertained—of ontology construction and design. What at first glance appears a largely technical exercise of mapping, matching and aligning ontological concepts can, at a further remove, present itself instead as a complex task of translation across cultural conceptual schemes or, in Hacking’s phrase, ‘historical ontologies’; the aim of the framework presented here is, in part, to help analyse concepts and their schemes against a broader historical and cultural backdrop. Knowledge systems, no less than any other cultural artefact, ‘cannot be perceived correctly unless their temporal dimensions are kept in view’ (Hacking 2002).

Critical theory as a sociology of knowledge

The historical dimension to ontological standpoints is analysed further by the critical theory tradition. Developed out of the work of Marx, Weber and Lukács, critical theory dispenses with what it sees as idealistic aspects of the sociology of knowledge—or rather, reorients these towards the materialist conditions of knowledge production (Popper and Adorno 1976). Different perspectival orientations, in more extreme variants of this view, are the apparent epiphenomena which develop out of the structural character of the economy at given moments in history (Horkheimer and Adorno 2002). While differences of opinion might always be free to circulate in any society, fundamentally incommensurable world views, irreconcilable through rational discourse, are the product of the alienating forces of modern capitalism, which rigidifies human relations in terms of class structure.

Habermas sought to free analyses of knowledge and communication from the more deterministic aspects of critical theory, while retaining its materialist foundations. His theory of communicative action points to several complex overlapping dimensions in post-Enlightenment society. For Habermas, the fundamental rift between objective system and subjective lifeworld can be attenuated through the intersubjective sphere of communication and discourse (Habermas 1987). Within this orbit, different knowledge formations are free to circulate, with the potential to reconfigure structural inadequacies in the growing systematisation of individual lifeworlds enacted by capitalism.

Luhmann provides a related frame of reference via systems theory (Arnoldi 2001; Baecker 2001). Not at all assimilable to critical theory, Luhmannian systems nevertheless provide some elaboration on the objectivist, ‘system’ side of the critical theoretical coin. For Luhmann, systems are ‘autopoietic’—they engender their own frames of meaning around a critical ‘distinction’. For economic systems, for example, the motivating distinction is the presence or absence of money. The distinction then structures the cluster of concepts which inform how those in the system operate. Luhmann’s views have some analogies with the model of culture put forward in Chapter 12, ‘A framework for commensurability’; for reasons of parsimony, however, this study instead takes its theoretical cues from Habermas’ admittedly less developed account of the complex overlays of systems in contemporary society, which provides a plausible, generative and sufficiently abstract account of the divisions that fissure through contemporary knowledge systems. I return to Habermas in more detail in Chapter 11, ‘On commensurability’, where his theoretical apparatus is woven in among more fine-grained analyses of language and cognition.

Globalisation and technologies of knowledge

Related forms of social theory and research have investigated the rise of information technology and the correlative phenomenon of globalisation. Castells (1996), for instance, documents exhaustively the emergence of the ‘network society’, in which traditional forms of labour, organisation, urban planning, travel, markets, culture, communication and, finally, even human subjectivity are transformed by the ‘network’— a metonymic substitute for various kinds of physical and information networks that parallel the growth of globalisation and late or ‘hyper’ capitalism at the turn of the millennium. According to Castells, the ontological ‘horizon’ of modern times is qualitatively different partly as a result of quantitatively expansive affordances offered by network effects or externalities. This results not in a simple homogenising of cultural differences; rather, echoing the Frankfurt School and Habermas, the ‘global network of instrumental exchanges’ ‘follows a fundamental split between abstract, universal instrumentalism, and historically rooted, particularistic identities. Our societies are increasingly structured around a bipolar opposition between the Net and the Self’ (Castells 1996, p. 3).

Just as, for Habermas, radical ontological incommensurability arises between the system and the lifeworld, so Castells sees a similar structural schism between ‘the Net and the Self’, and its various conceptual analogues—culture and nature, society and community, function and meaning. The rise of the network society therefore produces incommensurability as an ‘unintended consequence’ precisely because of its globalising, standardising and homogenising character; it creates localised resistances in the fissures or lacunae of its network. However, for Castells as for Habermas, these differences can always be negotiated by the proselytising force of communication itself, with ambiguous effects:

Because of the convergence of historical evolution and technological change we have entered a purely cultural pattern of social interaction and social organization. This is why information is the key ingredient of our social organization and why flows of messages and images between networks constitute the basic thread of our social structure (Castells 1996, p. 508).

Studies on technology and science

Many of the preoccupations of the ‘sociology of knowledge’ have been inherited by more recently emergent disciplines, such as science and technology studies. Largely inaugurated through Latour and Woolgar’s seminal anthropological study of a scientific laboratory (Latour and Woolgar [1979] 1986)—though equally influenced by earlier ‘structural’ histories of science and technology—science and technology studies work for the most part by examining the sites and practices of science and technology. A common feature of science and technology studies research generally, and of research inspired by the closely aligned actor-network theory (ANT) in particular, is the desire to show how clear-cut conceptual boundaries—even those apparently fundamental, between subject and object, nature and culture, and active and passive agents—become blurred in scientific practice (Latour 1993; Law 1992, 2004). Not a ‘theory’ in the usual sense, owing more to Geertz’s ‘thick’ methodological approach to ethnography than to explanatory sociological models (Latour 2004), ANT has nonetheless provided a broad rubric and vocabulary for researchers attempting to analyse how different knowledge formations are constructed socially, or, as Law puts it, how ‘scientific knowledge is shaped by the social’ (Law 2004). And as information technology has begun to play an important role in many scientific disciplines, science and technology studies have increasingly paid attention to the social construction of computational systems of classification.

Bowker and Star (1999), for instance, examine how active political and ethical choices become invisible once encoded within classificatory schemes in a variety of bureaucratic contexts: medical, health and governmental demography. Adopting the term ‘information infrastructures’ to describe how such schemes facilitate organisational practices just as physical infrastructure does, the authors develop their own set of distinguishing—and inter-related—typological features. Classification systems can be:

image Formal/scientific or informal/folk—Formal systems are used in ‘information science, biology and statistics, among other places’, while informal systems are folk, vernacular and ‘ethno-classifications’.

image Pragmatic or idealistic—Pragmatic systems tend to be oriented towards a limited set of contemporary goals; idealistic systems are future-oriented, trying to anticipate future uses.

image Backwards-compatible or future-oriented (related to the previous distinction)—Backwards-compatible systems endeavour to harmonise categories with pre-existing schemes and data-sets; future-oriented systems are developed from relatively new principles or methods.

image Practical or theoretical—Practical systems tend to evolve to meet the needs of new users and applications, and may lose original motivating principles; theoretical systems tend to retain such principles as endemic to their operation.

image Precise or prototypical—Prototypical taxonomies provide ‘good enough’ descriptive labels, rather than rigorous necessary and sufficient conditions, recalling Rosch’s distinction outlined above (Rosch 1975).

image Parsimonious or comprehensive—Parsimonious systems capture only a limited number of fields; comprehensive systems aim to gather as much information as possible.

image Univocal or multivocal—Univocal systems tend to reflect a singular, authoritative perspective; multivocal systems tend instead to reflect multiple interests.

image Standardised or eccentric—Standardised systems reflect a mainstream view of a field or domain; eccentric systems adopt idiosyncratic, unique or otherwise alternative organisations of concepts.

image Loosely or tightly regulated—Loosely regulated classification systems develop ad hoc—through trial and error, and incremental revision; tightly regulated systems tend to have formal review processes and versioning systems. (Adapted from Bowker and Star 1999.)

As the authors make clear, many of these distinctions are of degree rather than kind—systems may be more or less formal in the above sense, for example. The distinctions motivate several of the second-order dimensions introduced in Chapter 12, ‘A framework for commensurability’; there they are applied as a means of classifying classification systems themselves.

In a series of further case studies, Bowker and Star (1999) also demonstrate the complexity of factors that motivate particular categorial distinctions behind classification systems. They highlight the inherent fuzziness and historical residues that accrue to systems over time, showing how, for instance, political and ethical values become embedded long past their historical valency. Such critical impulses can also be found in a number of more recent studies—Smart et al. (2008), for instance, look at how racial and ethnic categories become homogenised within scientific classification systems. Other studies have also described the complications arising from overlapping or conflicting methodological approaches to classification. Sommerlund (2006) demonstrates how conflicting genotypical and phenotypical methodological approaches impact on research practices in molecular biology. In another study, Almklov (2008) has shown how formal classification systems are supplemented in practice by informal heuristics, as ‘singular situations’ need to be interpreted against potentially incommensurable ‘standardised’ conceptualisations and data sets. The negotiated process of meaning making involved in devising classification systems between diverse disciplinary experts has also recently received attention, for example, in a study by Hine (2006) of ‘geneticists and computer engineers’ devising a mouse DNA database.

The desire to renovate classification systems—either through development of new systems, or refinements to existing ones—can, then, be motivated by numerous extrinsic social factors. As these various studies demonstrate, individual classification systems can be shown to exhibit conflicting tensions and practical trade-offs in their design, construction and application. Seeking to understand the commensurability of multiple systems amplifies the potential noise generated by these tensions; tracing them in turn relies on examining each system against a matrix of interrelated dimensions—including both the kinds of distinctions outlined above, and also a further series of contextually determined and more or less implied distinguishing elements: political and ethical beliefs, disciplinary methodologies, theoretical–practical overlays, and vocational orientations. The resulting profiles can in turn be used for comparing and contrasting the systems concerned—or, in the language adopted here, for assessing their commensurability.

IT standardisation

Central to the rise of globalisation has been the phenomenal growth of standardisation, a process of negotiated agreement across many social layers—from common legislative frameworks, economic agreements, political affiliations and linguistic consensus, through to a myriad of ratified protocols, specifications and standards for mechanical, electrical, engineering, communications and information technology instruments. Considerable research has been directed towards standardisation in the information technology sector specifically, much of it published through a journal dedicated to the field, the Journal of IT Standards and Standardization Research. Unlike the preceding science studies, which for the most part adopt an anthropological orientation towards the production of technical artefacts, this research views standardisation as a predominantly economic phenomenon—though with important political, legal and cultural bearings. Where an anthropological view is useful in bringing out the internal perspectival character of knowledge systems, describing how and why these systems became widely used and adopted often requires a broader scope—looking at the complex interplay between individual actors, corporations, governments and multinational consortia, well beyond the laboratory or workplace setting—and commensurately, employing different research methods, examining the motivations and processes of standardisation primarily through historical and documentary evidence, rather than first-hand observation. From the point of view of developing a set of descriptive criteria or dimensions for describing the cultures responsible for knowledge systems, studies of standardisation provide a valuable supplementary source.

Standards exist for a wide range of technical formats, protocols, processes and applications, and studies of IT standardisation have been accordingly eclectic—covering style languages (Germonprez, Avital and Srinivasan 2006), e-catalogue standards (Schmitz and Leukel 2005), e-commerce (Choi and Whinston 2000), mobile platforms (Tarnacha and Maitland 2008), operating systems (Hurd and Isaak 2005; Isaak 2006; Shen 2005), project management (Crawford and Pollack 2008) and software engineering processes (Fuller and Vertinsky 2006). The term ‘standard’ itself is notoriously difficult to define. Kurihara (2008) points to its French etymological origins as a military ‘rallying point’; it has since been co-opted into economic parlance as being ‘required for communication between the labeled product and its user in order to win the fullest confidence of the market’ (Kurihara 2008). Blum (2005) suggests standards can be divided into ‘public’ and ‘industrial’ types; ‘public standards’ can be further distinguished into national and sector-specific varieties, while ‘industrial standards’ can be either company or consortia-based. Blum also suggests further criteria for considering standardisation processes: the ‘speed of the process’; ‘outcomes’, in terms of the competitive conditions of the market; ‘legal status’; and the ‘nature of the economic goods created’—whether they be closed or open, public or private.

Several motivations have been identified for the development and use of standards. Most commonly, authors point to one or more economic rationales. For example, Krechmer (2006) identifies three economic beneficiaries of standardisation: governments and market regulators looking to promote competition; lowered production and distribution costs for standards implementers; and user or consumer benefits brought about by a standard’s adoption. Hurd and Isaak (2005) add a fourth group: individuals, usually professionals, benefitting from certain kinds of standards certification and professionalisation. In addition, they note that standardisation benefits companies at all stages of a product’s lifecycle: by accelerating the initial rate of technology adoption and product diffusion; by expanding the functionality of a product as it matures; and by extending the lifetime of a product as it becomes obsolete or redundant in the face of new, emergent products. The personal motivation which accrues to individuals through their involvement in standards development and certification processes is further noted by Isaak (2006) and Crawford and Pollack (2008). Similarly, standardised quality processes accompanied by internationally recognised certification can help differentiate a company in a crowded market-place—as Fuller and Vertinsky (2006) observe, certification can, in some cases, even be a good market indicator of ‘improved future revenues’. Moreover, numerous authors have emphasised the direct and indirect benefits of ‘network externalities’ which technology process, product and platform standardisation bring to users (Katz and Shapiro 1985; Park 2004; Parthasarathy and Srinivasan 2008; Tarnacha and Maitland 2008; Zhao 2004).

The by-products of standardisation are not, however, always beneficial. Van Wegberg (2004) discusses the trade-offs between ‘speed and compatibility’ in the development of standards, focusing particularly on the problematics of competing standardisation efforts instigated by rival industrial consortia. The fractious effects of multiple standards are further studied by Tarnacha and Maitland (2008) and Schmitz and Leukel (2005)—though authors are divided as to whether such problems arise from excessive competition, over-regulation, or are in fact intrinsic side-effects of market dynamics. The costs of standards compliance and certification processes can also produce negative unintended consequences, operating as market barriers to entry, and limiting rather than fostering market competition (Tarnacha and Maitland 2008). Furthermore, consequences can be culturally discriminatory: as one study has shown, on a national level standardisation can lead to adverse effects for ‘indigenous technology developments’ (Shen 2005), as the dispersion of proprietary, closed standards, in particular, can inhibit local training, innovation and industrial development. Analysis of the rivalry between Microsoft and Linux operating systems in China points to potential negative normative and even imperialist implications of purely market-driven standardisation, if unattended by adroit legal and political policy. Even relatively benign professional bodies, with no direct economic or political mandate, can, in developing standards, implicitly elevate national or regional agendas into global ones—at the potential risk of marginalising those with fewer resources or less authority (Crawford and Pollack 2008; Parthasarathy and Srinivasan 2008). Moreover, corporations have become experts at ‘gaming’ the standards process, both by overt political and economic influence, and by covert ‘patent ambush’, in which ‘submarine’ patents are submitted as part of otherwise ‘open’ or ‘fair and reasonable’ technological provisions to standards, only to resurface at the corporate donor’s leisure, if new products or technologies inadvertently infringe on the patents (Hemphill 2005). Market dynamics also often foster so-called ‘standards wars’, in which companies form competing consortia promoting rival standards—a process which fragments the market and dilutes the network externalities of standards, at least until a dominant candidate emerges (Parthasarathy and Srinivasan 2008).

Consequently, while many studies note the generally beneficial nature of standardisation, such processes—and the technical artefacts they produce—can be seen as part of a social negotiation between different kinds of co-operative and competitive agents, engaged in a series of complex trade-offs. In the case of ontologies and schemas, standardisation is often their very raison d’être—broad diffusion and adoption being key elements of their promise to deliver interoperability. Understanding the commensurability of ontologies, then, can often involve understanding the methods and means by which their authors endeavour to make them standards. Of the studies surveyed, only Schmitz and Leukel (2005) offer something of a typology of distinguishing features of standards; they propose the following criteria for the purpose of choosing e-catalogue standards:

image General:

– What is the market penetration of the standard—current and future?

– What is the quality of the standard?

image Specific:

– Standard organisation—How long will it remain effective? What level of power and international exposure does it have? What is the level of user involvement?

– Methodology—Is the underlying language of the standard highly formalised? Machine-readable? Sufficiently expressive?

– Standard content—Is the standard at the right level? What objects are categorised? Is the coverage appropriate and sufficient?

Some of these specific features are picked up and reworked as descriptive dimensions later in the framework presented in Chapter 12, ‘A framework for commensurability’. More generally, this review of standardisation studies has extracted a number of dimensions which can be applied to knowledge systems, particularly to the areas of process and motivation of system design. These dimensions include:

image open versus closed process

image levels of de facto and de jure standardisation

image size and levels of community activity around standards

image adoption rates, industry support, levels of satisfaction with standards

image differing motivations—economic, political, legal, social and technical.

Knowledge management

Knowledge systems can also be studied through yet another disciplinary lens, that of knowledge management. Knowledge management approaches tend to discuss ontologies less as kinds of classification systems or standards, and more as a kind of intangible organisational asset (Volkov and Garanina 2007). This perspectival shift brings about yet further distinctions which can be used to compare and contrast ontologies. Moreover, the literature review now moves closer to dealing with ontologies as a subject proper—increasingly, knowledge management has co-opted ontologies as an exemplary kind of knowledge representation, with numerous studies in knowledge management journals explicitly proposing or examining frameworks, processes and systems for handling ontologies (Bosser 2005; Härtwig and Böhm 2006; Lanzenberger et al. 2008; Lausen et al. 2005; Macris, Papadimitriou and Vassilacopoulos 2008; Okafor and Osuagwu 2007).

Much attention in knowledge management studies is devoted to describing the relationship between tacit and explicit knowledge in an organisational context. Nonaka and Takeuchi (1995) put forward a widely adopted model for describing this relationship, which follows a four-step process of ‘socialization’ (tacit-to-tacit), ‘externalization’ (tacit-to-explicit), ‘combination’ (explicit-to-explicit) and ‘internalization’ (explicit-to-tacit). Hafeez and Alghatas (2007) study how this model can be applied to learning and knowledge management in a virtual community of practice devoted to system dynamics. They also demonstrate how discourse analysis of online forums can be employed to trace a process of knowledge transfer between participants—a method increasingly used to capture features of ‘virtual’ communities generally. These communities are an increasingly prevalent cultural setting for knowledge dissemination, as Restler and Woolis (2007) show; similarly, discourse analysis is used in several of the case studies in this work. Other studies extend similar knowledge diffusion models to the whole organisation life cycle (Mietlewski and Walkowiak 2007), or examine the application of such models as specific interventions into organisations, in the form of an action research program aimed at improving knowledge elicitation processes (Garcia-Perez and Mitra 2007). Al-Sayed and Ahmad (2003) show how expert knowledge exchange and transfer is facilitated within organisations by ‘special languages’—limited and controlled vocabularies—which represent ‘key concepts within a group of diverse interests’. As the authors point out, while use of such languages can serve to further the political aims of a specialised group within an organisation, the primary aim is one of parsimony, ‘for reducing ambiguity and increasing precision’ within a professional context (Al-Sayed and Ahmad 2003). Such ‘languages for special purposes’ can serve to reify a given set of lexical items into discourse, giving rise to particular conceptualisations within a knowledgeable community of practice. In turn, these are frequently codified into knowledge systems; understanding the practical generative conditions of such languages is one way, then, towards understanding and describing the assumptions behind these systems.

Several authors (Detlor et al. 2006; Hughes and Jackson 2004; Loyola 2007; Soley and Pandya 2003) have sought to analyse the specific roles played by context and culture—two notoriously ill-defined concepts—in the formation, elicitation and management of knowledge. Acknowledging the resistance of the term ‘culture’ to easy definition, much less quantification, Soley and Pandya (2003) suggest a working definition: culture is a ‘shared system of perceptions and values, or a group who share a certain system of perceptions and values’, which would include ‘sub-groups, shared beliefs and basic assumptions deriving from a group’. This working definition arguably ignores an important dimension of shared or collective practice which, following Bourdieu (1990), would seem constitutive of any culture. Nonetheless the authors point to important ways in which various cultural attributes—technical proficiency, economic wealth, as well as linguistic, educational and ethical characteristics—impact on knowledge sharing, and suggest, anticipating some of the same points made in this study, that a certain degree of sensitivity and comprehension of culture has material consequences—although, in their case, these consequences are subject to the overall ‘game’ of corporate competition.

Both Detlor et al. (2006) and Loyola (2007) seek to understand the role that a similarly vexed concept, context, plays in knowledge management. Detlor et al. (2006) provide a structural account of the relationship between a ‘knowledge management environment’ and organisational and personal information behavioural patterns, using a survey-driven approach to show that indeed a strong causal relationship exists. In their analysis, four survey items relating to ‘environment’ (used interchangeably here with ‘context’) reference terms like ‘culture’, ‘organisation’, ‘work practices, lessons learned and knowledgeable persons’ and ‘information technology’—as well as ‘knowledge’ and ‘information’—which suggests the notion of ‘context’ here is synonymous with the modern organisational bureaucracy. Loyola (2007), on the other hand, surveys approaches which seek to formalise context as a more abstract ‘feature’ of knowledge descriptions. Building on earlier work in this area (Akman and Surav 1996; Bouquet et al. 2003), Loyola (2007) argues these approaches strive to describe context either as part of a logical language, or as part of a data, programming or ontological model. Recognising that context is frequently tacit in knowledge representations—that it ‘characterises common language, shared meanings and recognition of individual knowledge domains’—Loyola examines attempts to make it explicit as a kind of knowledge representation itself. After a comparative review, he concludes that an ontology developed by Strang, Linnhoff-Popien and Frank (2003) is best suited to describing context, and sees the explicitation of context as itself a vital part of facilitating interoperability between conceptual, informational and social divides.

While no studies address the specific question posed here about the commensurability of multiple ontologies, the relationships sketched in this literature between knowledge assets, on the one hand, and cultures, contexts and processes of knowledge management, on the other, constitute a useful conceptual rubric for the model of commensurability presented in Chapter 12, ‘A framework for commensurability’. Moreover, these studies bring forward several further salient dimensions which can be applied to ontologies:

image whether the ontology represents a relatively small and insular, or large and variegated ‘community of practice’

image whether the ontology uses ‘expert’ or ‘lay’ vocabulary

image what sorts of cultural beliefs, values, assumptions and practices impact on an ontology’s design

image what sorts of contextual factors can impact on an ontology’s design, and how those factors can be best rendered explicit.

As the literature review moves from an engagement with various forms of understanding social semantics towards examining computational approaches to representing and reasoning with meaning—in particular, how to align different systems of meaning—the following complaint, ostensibly concerning the cognitive dissonance between ontology and broader knowledge management processes, provides a convenient segue into the challenges at the intersection of these two fields:

Currently, none of the ontology management tools support social agreement between stakeholders, or ontology engineers. They most often assume one single ontology engineer is undertaking the alignment, and no agreement is therefore necessary. However, the whole point in ontology alignment is that we bring together, or align, ontologies that may have been created by different user communities with quite different interpretations of the domain. Social quality describes the relationship among varying ontology interpretations of the social actors. Means to achieve social quality are presentations of the alignment results in such a way that the different alignment types are explicitly distinguished and the location of the alignments from… a detailed and global perspective are highlighted (Lanzenberger et al. 2008, p. 109).

Computational semantics

The question of meaning is foundational for semantic web and broader computational research; indeed, the problems of how to represent and reason with concepts have been central preoccupations since the earliest days of research in artificial intelligence (Norberg 1989). Considerable attention has been devoted to the formal, logical mechanisms for representing meaning generally and to the construction of ontologies for representing meaning in specific fields or domains. Chapter 7, ‘An historical introduction to formal knowledge systems’, which examines different knowledge representation mechanisms, surveys some of these discussions. In the decentralised world of the semantic web, with no governing authority dictating which ontologies are useful, a corollary challenge of inter-ontology translation has led to the development of specific algorithmic techniques for automating the production of conceptual matches between ontologies. This field of ontology matching is briefly surveyed below. Related work in ontology metrics and collaboration is also relevant to the general approach and framework proposed here, and some recent findings are presented as well.

Matching ontologies

Ontology matching aims to find relationships between ontologies via algorithmic means, where no (or few) explicit relationships exist between them; according to a recent survey of ontology matching approaches, it ‘finds correspondences between semantically related entities of ontologies’ and the fundamental problem faced by ontology matching is one of ‘semantic heterogeneity’ (Shvaiko and Euzenat 2008). As Halevy notes:

When database schemas for the same domain are developed by independent parties, they will almost always be quite different from each other. These differences are referred to as semantic heterogeneity. Semantic heterogeneity also appears in the presence of multiple XML documents, web services and ontologies—or more broadly, whenever there is more than one way to structure a body of data (Halevy 2005, p. 50).

When dealing with one-to-one semantic mappings between databases within an organisation—a familiar system integration scenario—semantic heterogeneity is typically met with round-table discussions between experts and stakeholders, who endeavour to engineer appropriate conceptual translations between schemas. This takes time: ‘In a typical data integration scenario, over half of the effort (and sometimes up to 80 per cent) is spent on creating the mappings, and the process is labor intensive and error prone’ (Halevy 2005). These twin motives—time and quality—have spawned fervent searches for highly precise automatic approaches to mappings. Moreover, in the open world of the semantic web, where collaboration by ontology authors is often impossible, and at any rate where mappings need to be many-to-many, humanly engineered translations may be necessary, but invariably are insufficient (Gal and Shvaiko 2009).

Ontology matching algorithms typically take two ontologies (and possibly external data sources) as inputs, and generate a series of matches as output. A match consists of a tuple ⟨id, e, e′, n, R⟩, where id is the identifier of the match, e and e′ are the two concepts from the two respective ontologies, n is the (optional) level of confidence in the match, and R is the relationship (one of conceptual equivalence, subsumption or disjointness) (Shvaiko and Euzenat 2005, 2008). The resulting match series is termed an alignment. Evaluation of algorithms, given the plethora of possible inputs and evaluative dimensions, is a notably difficult task (Do, Melnik and Rahm 2003). Since 2004, an annual competition has been held to rate algorithms’ outputs against expert humanly engineered alignments across a range of fields (Shvaiko and Euzenat 2009). Some of these approaches have demonstrated impressive precision and recall results against humanly engineered mappings (Lauser et al. 2008; Marie and Gal 2008), although, as Shvaiko and Euzenat (2008) note, no stand-out candidate approach has yet emerged.
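To make the shape of this output concrete, the following is a minimal sketch, in Python, of the match tuple and alignment just described; the class and field names are illustrative assumptions, not the API of any particular matching tool.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Relation(Enum):
    """The relationship R asserted between two matched concepts."""
    EQUIVALENCE = "="
    SUBSUMPTION = "<"
    DISJOINTNESS = "!"

@dataclass
class Match:
    """A single correspondence <id, e, e', n, R> between two ontologies."""
    id: str                 # identifier of the match
    e: str                  # concept from the first ontology
    e_prime: str            # concept from the second ontology
    n: Optional[float]      # optional confidence level, e.g. in [0.0, 1.0]
    relation: Relation      # equivalence, subsumption or disjointness

# An alignment is simply the series of matches output by an algorithm.
alignment: List[Match] = [
    Match("m1", "tree", "arbre", 0.95, Relation.EQUIVALENCE),
    Match("m2", "leaf", "arbre", 0.80, Relation.SUBSUMPTION),
    Match("m3", "tree", "animal", 0.99, Relation.DISJOINTNESS),
]
```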

In order to generate alignments, various approaches exploit the different syntactic, structural and semantic properties of ontologies. Several surveys of ontology and schema matching approaches have been conducted (Choi, Song and Han 2006; Do, Melnik and Rahm 2003; Halevy 2005; Noy 2004; Rahm and Bernstein 2001; Shvaiko and Euzenat 2005). Shvaiko and Euzenat (2008) provide a useful set of distinctions for grouping different approaches and methods. As with the metrics below, some of these distinctions resurface in the presentation of dimensions in Chapter 12, ‘A framework for commensurability’; hence it is useful to summarise these distinctions briefly here (adapted from Shvaiko and Euzenat 2008):

image Element versus structure—Element-based comparison is the comparison of individual concepts. An element-based comparison might be expected to find any of the following results: that ‘tree’ matches ‘tree’; that ‘tree’ also matches its French equivalent, ‘arbre’; that ‘leaf’ is a part of ‘tree’; that ‘tree’ and ‘animal’ are disjoint; and so on. A structural comparison, on the other hand, might instead compare the overall ontology graphs, or sub-graphs. Instead of relying on individual element matches, element relations are also analysed. A ‘tree → leaf’ relation might be found to be equivalent to an ‘arbre → feuille’ relation, for example. (A minimal element-level comparison is sketched in code after this list.)

image Syntactic versus external versus semantic—Both syntactic and semantic techniques use only the information contained in the ontologies themselves; external techniques may refer to other sources for information, for instance, a repository of previous matches in the same domain, or a structured dictionary like WordNet. Semantic techniques are further differentiated through the analysis of semantic relations between elements. In these cases the elements of each ontology are first normalised into a set of comparable logical propositions. If any two logical propositions from each ontology are found to have some valid semantic relationship (where the relation may be equivalence, disjointness, generalisation or specialisation), then a match is found. For example, ontology A may have some proposition ‘entity → organic entity → tree’ [A1] and ontology B may have some proposition ‘thing → life-form → vegetable → tree’ [B1]. By reference to some independent set of axioms, such as a dictionary (WordNet is a common choice), it can then be determined that ‘entity’ is roughly synonymous with ‘thing’; ‘organic entity’ is synonymous with ‘life-form’; and ‘tree’ is synonymous with ‘tree’. Hence the relation of equivalence is deemed to hold between concepts A1 and B1.

image Schema versus instance-based inferencing—The approaches described above refer only to the structure of the ontologies themselves, and therefore are defined as schema-based. Instance-based inferencing, in contrast, infers the correct concepts for a body of data from the contents of that data. For example, some data containing a name with the word ‘tree’—as in ‘tree #35’—might be inferred to be an instance of the tree class.

image Single versus hybrid/composite techniques—Hybrid and composite techniques use a combination of the above approaches. Frequently such approaches use various weighting schemes to preference one match over others.
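To illustrate the simplest of these distinctions concretely, the sketch below implements a purely element-level comparison: a syntactic technique (normalised string similarity) backed by an external technique (a toy synonym table standing in for a resource such as WordNet). The function names, threshold and vocabulary are illustrative assumptions, not a published algorithm.

```python
from difflib import SequenceMatcher

# Toy stand-in for an external resource such as WordNet: unordered pairs
# of labels treated as synonymous across languages or vocabularies.
SYNONYMS = {("tree", "arbre"), ("entity", "thing"), ("leaf", "feuille")}

def string_similarity(a: str, b: str) -> float:
    """Syntactic, element-level similarity between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def element_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Match two concepts if their labels are sufficiently similar
    (syntactic technique) or an external synonym pair exists (external
    technique)."""
    if string_similarity(a, b) >= threshold:
        return True
    return (a, b) in SYNONYMS or (b, a) in SYNONYMS

# Element-level comparison of two small vocabularies.
ontology_a = ["tree", "leaf", "entity"]
ontology_b = ["arbre", "feuille", "thing"]
for ca in ontology_a:
    for cb in ontology_b:
        if element_match(ca, cb):
            print(f"match: {ca} ~ {cb}")
# Prints three matches: tree ~ arbre, leaf ~ feuille, entity ~ thing
```

A structural technique would go further, comparing relations such as ‘tree → leaf’ against ‘arbre → feuille’ rather than labels alone.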

Since the approach adopted in this study is contrasted with algorithmic ones generally—as heuristic and holistically oriented, rather than deterministic and atomistic—how do these distinctions differentiate algorithms in particular? Broadly, they suggest that algorithms can themselves be plotted on a spectrum of ‘atomism–holism’, the more ‘holistic’ being those which are structural, utilise external sources, analyse semantic over syntactic relationships, and exploit a hybrid of alternative techniques (including both schema and instance-based ones). One algorithm which would rate highly against these holistic criteria is S-Match (Giunchiglia, Shvaiko and Yatskevich 2004). However, the modes of analysis and outputs remain very different from what is proposed here, which is oriented towards general cultural assumptions and beliefs, and produces a general commensurability assessment rather than specific alignments. Without prior humanly engineered mappings to go by, the application of a culturally oriented holistic framework is a helpful means of cross-checking the alignment results generated by algorithms.

All of the algorithmic approaches discussed in the surveys so far use what Noy (2004) terms ‘heuristic and machine learning approaches’. The other avenue towards semantic integration is through explicit mappings, where two ontologies share some common third ontology, typically asserting some set of generic and reusable conceptual axioms. Such ‘foundational’ or ‘upper-level’ ontologies show promise for by-passing both the time commitments and error-proneness of humanly engineered mappings, and the vagaries of algorithmically generated alignments. However, as Chapter 9, ‘Upper-level ontologies’, demonstrates, the proliferation of upper-level ontologies can create new sources of semantic heterogeneity or, in the language adopted here, incommensurability.

Why, across a given domain, are different ontologies ever produced? Relatively little attention has been given to the causes of semantic heterogeneity. Halevy (2005) offers: ‘Differing structures are a by-product of human nature—people think differently from one another even when faced with the same modeling goal.’ The resort to a kind of naturalistic individualism here underestimates socially and culturally structural distinctions—of the sorts discussed in the literature above—which also generate difference in conceptualisations in less stochastic ways. In the framework and case studies that follow, no single causal theory is provided to account for these differences. Nonetheless, in specific cases it is possible to hypothesise socially structural causal factors—distinctions in economic and political subsystems, epistemological assumptions, methodological practices, and the processes and uses to which these systems are put—which orient the categorial configurations of different ontologies one way or another, without reverting to a psychologism which suggests individual agents simply and inherently ‘think differently’. By making these factors explicit, it might be possible to plot lines of potential translation and integration—or, conversely, to recognise obstacles to translation irreducible to individual idiosyncrasies.

Ontology metrics

Several further studies have explored metrics for describing, comparing and evaluating ontologies. Use of these metrics is ‘expected to give some insight for ontology developers to help them design ontologies, improve ontology quality, anticipate and reduce future maintenance requirements, as well as help ontology users to choose the ontologies that best meet their needs’ (Yao, Orme and Etzkorn 2005). Some of these metrics are brought into the framework in order to characterise internal features of ontologies.

Tartir et al. (2005) propose a more extensive model for describing different features of ontologies, similar in principle to the framework presented here. They distinguish schema metrics, which describe only the arrangement of concepts in an ontology, from instance metrics, which describe individual objects. The authors propose the following schema metrics (restated formally after the list):

image relationship richness—‘reflects the diversity of relations’, by comparing the number of non-subsumption relations to the number of subsumption relations (which stipulate specifically that one class is a sub-class of another)

image attribute richness—shows the average number of attributes defined per class within the ontology

image inheritance richness—‘describes… the fan-out of parent classes’, in other words, whether the ontology graph is broad or deep.
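Stated formally, as a summary approximation of the definitions in Tartir et al. (2005), with notation introduced here rather than drawn from the source: let C be the set of classes, H the set of subsumption (is-a) relationships, P the set of all other relationships and att the set of attributes, with H^c denoting the direct subclasses of a class c. Then, roughly:

```latex
\mathrm{RR} = \frac{|P|}{|P| + |H|},
\qquad
\mathrm{AR} = \frac{|\mathit{att}|}{|C|},
\qquad
\mathrm{IR} = \frac{1}{|C|} \sum_{c \in C} \lvert H^{c} \rvert
```

On this reading, RR approaches 1 for a relation-rich ontology, while a high IR indicates a broad, shallow class hierarchy and a low IR a deep one.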

The instance metrics are more extensive, but generally are less relevant for the kind of ontology comparison anticipated here. One exception is ‘average population’, which describes the average number of instances or individual objects per class.

Yao, Orme and Etzkorn (2005) introduce three metrics specifically for describing the cohesion of ontologies, ‘the degree to which the elements in a module belong together’. The metrics are: the number of root classes (NoR), the number of leaf classes (NoL) and the Average Depth of Inheritance Tree of Leaf Nodes (ADIT-LN). Together these metrics provide a picture of the structure of an ontology—low numbers of root and leaf classes, relative to the total number of classes, together with a high average depth of inheritance, suggest a high overall degree of cohesion, a ‘deep’ rather than ‘broad’ lattice of concepts. These metrics, then, can be used to further refine the metric of ‘inheritance richness’ presented by Tartir et al. (2005).
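As a rough illustration of how these cohesion metrics might be computed, the following sketch derives NoR, NoL and ADIT-LN from a toy class hierarchy. It is a minimal reading of the published definitions (counting root classes at depth 1), not the authors' own implementation.

```python
from collections import deque

# Toy class hierarchy: each class maps to its list of direct subclasses.
subclasses = {
    "entity": ["organic entity", "inorganic entity"],
    "organic entity": ["tree", "animal"],
    "inorganic entity": [],
    "tree": [],
    "animal": [],
}

children = {c for subs in subclasses.values() for c in subs}
roots = [c for c in subclasses if c not in children]    # NoR: no superclass
leaves = [c for c in subclasses if not subclasses[c]]   # NoL: no subclasses

# Breadth-first traversal from the roots, recording each class's depth
# in the inheritance tree (roots at depth 1).
depths = {root: 1 for root in roots}
queue = deque(roots)
while queue:
    cls = queue.popleft()
    for sub in subclasses[cls]:
        depths[sub] = depths[cls] + 1
        queue.append(sub)

# ADIT-LN: average depth of the leaf classes.
adit_ln = sum(depths[leaf] for leaf in leaves) / len(leaves)

print(len(roots), len(leaves), round(adit_ln, 2))  # 1 3 2.67
```

Here one root and three leaves among five classes, with an average leaf depth of 2.67, describe a relatively deep, cohesive lattice of concepts.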

Other research has focused on different aspects and uses for ontology metrics. Alani and Brewster (2006), for instance, discuss four distinct measures for ranking ontologies based on their relevance to search criteria, while Vrandečić and Sure (2007) discuss how to develop semantic rather than purely structural metrics, by first normalising the structure. However, the research by Tartir et al. (2005) and Yao, Orme and Etzkorn (2005) has proven to be of greatest relevance to developing generalised features which can be used to compare, as much as to evaluate, the intrinsic features of different ontologies. These metrics correspond to a number of the dimensions of the framework presented in Chapter 12, ‘A framework for commensurability’, and can be used to supply quantitative values as part of the application of that framework.

Collaborative ontologies

As a coda to this literature survey, a recent strand of research has focused on the idea of social, collaborative ontology development and matching—an area which intersects with many of the concerns of this book. Several studies have investigated approaches and software systems for collaborative ontology development (Bao and Honavar 2004; Hayes et al. 2005; Sure et al. 2002). In a sign that researchers are increasingly aware of social dimensions of ontology development and matching, two noted contributors to the field have advocated: ‘a public approach, where any agent, namely Internet user (most importantly communities of users, opposed to individual users) or potentially programs, can match ontologies, save the alignments such that these are available to any other agents’ reuse’ (Zhdanova and Shvaiko 2006, p. 34).

As we explore further, it is vital to the future of the web as a social knowledge sharing platform that the very formal structures of knowledge are fully explicit: that representations and translations are shareable, reusable, contestable and malleable.

References

Akman, V., Surav, M. Steps Toward Formalizing Context. AI Magazine. 1996; 17:55–72.

Al-Sayed, R., Ahmad, K. Special Languages and Shared Knowledge. Electronic Journal of Knowledge Management. 2003; 1:1–16.

Alani, H., Brewster, C. Metrics for Ranking Ontologies. In EON 2006: Proceedings of the 4th International Workshop on Evaluation of Ontologies for the Web, at the 15th International World Wide Web Conference (WWW06). Association for Computing Machinery, Edinburgh, 2006.

Almklov, P.G. Standardized Data and Singular Situations. Social Studies of Science. 2008; 38:873.

Arnoldi, J. Niklas Luhmann: An Introduction. Theory, Culture & Society. 2001; 18:1.

Atran, S., Medin, D., Ross, N., Lynch, E., Coley, J., Ek, E.U., Vapnarsky, V. Folkecology and Commons Management in the Maya Lowlands. Proceedings of the National Academy of Sciences. 1999; 96:7598–7603.

Austin, J.L. How to Do Things With Words [1955]. Oxford: Clarendon Press; 1975.

Baecker, D. Why Systems? Theory, Culture & Society. 2001; 18:59–74.

Bao, J., Honavar, V. Collaborative Ontology Building with WikiOnt—a Multi-Agent Based Ontology Building Environment. Technical report, International Semantic Web Conference (ISWC) Workshop on Evaluation of Ontology-based Tools (EON). 2004.

Besnier, N., Blair, D., Collins, P., Finegan, E. Language: Its Structure and Use. Sydney: Harcourt Brace Jovanovich; 1992.

Bloor, D., Knowledge and Social Imagery [1976]. 2nd edn. University of Chicago Press, Chicago, IL, 1991.

Blum, U. Lessons from the Past: Public Standardization in the Spotlight. International Journal of IT Standards & Standardization Research. 2005; 3:1–20.

Bosser, T. Evaluating User Quality and Business Value of Applications Using Semantic Knowledge Technology. Journal of Knowledge Management. 2005; 9:50.

Bouquet, P., Giunchiglia, F., Van Harmelen, F., Serafini, L., Stuckenschmidt, H., C-OWL: Contextualizing Ontologies. In Proceedings of the International Semantic Web Conference 2003. 2003.

Bourdieu, P. The Logic of Practice. Stanford: Stanford University Press; 1990.

Bowker, G.C., Star, S.L. Sorting Things Out: Classification and its Consequences. Cambridge, MA: MIT Press; 1999.

Brandom, R. Making It Explicit. Cambridge, MA: Harvard University Press; 1994.

Burling, R. Cognition and Componential Analysis: God’s Truth or Hocus Pocus? American Anthropologist. 1964; 66:20–28.

Cann, R. Formal Semantics: An Introduction. Cambridge, UK: Cambridge University Press; 1993.

Castells, M. The Rise of the Network Society. Cambridge, MA: Blackwell; 1996.

Chierchia, G., McConnell-Ginet, S. Meaning and Grammar: An Introduction to Semantics. Cambridge, MA: MIT Press; 2000.

Choi, N., Song, I.Y., Han, H. A Survey on Ontology Mapping. SIGMOD Record. 2006; 35:34–41.

Choi, S.Y., Whinston, A.B. Benefits and Requirements for Interoperability in the Electronic Marketplace. Technology in Society. 2000; 22:33–44.

Collins, A.M., Quillian, M.R. Retrieval Time from Semantic Memory. Journal of Verbal Learning and Verbal Behavior. 1969; 8:240–247.

Crawford, L., Pollack, J. Developing a Basis for Global Reciprocity: Negotiating Between the Many Standards for Project Management. International Journal of IT Standards & Standardization Research. 2008; 6:70–84.

Davidson, D. The Essential Davidson. Oxford: Oxford University Press; 2006.

Dennett, D.C. Consciousness Explained. New York: Little, Brown and Company; 1991.

Detlor, B., Ruhi, U., Turel, O., Bergeron, P., Choo, C.W., Heaton, L., Paquette, S. The Effect of Knowledge Management Context on Knowledge Management Practices: An Empirical Investigation. Electronic Journal of Knowledge Management. 2006; 4:117–128.

Do, H.H., Melnik, S., Rahm, E., Comparison of Schema Matching Evaluations. In Revised Papers from the NODe 2002 Web and Database-Related Workshops on Web, Web-Services, and Database Systems. Springer, London, 2003:221–237.

Dowty, D.R. Word Meaning and Montague Grammar. Dordrecht: Kluwer Academic Publishers; 1979.

Forbes, G. Intensional Transitive Verbs. In: Zalta E.N., ed. The Stanford Encyclopedia of Philosophy. Stanford, CA: Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University, 2008.

Foucault, M. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books; 1970.

Frege, G. On Sense and Reference [1892]. Wikisource, The Free Library http://en.wikisource.org/w/index.php?title=On_Sense_and_Reference&oldid=1868023. 1925. [(accessed 14 August 2010)].

Fuller, G.K., Vertinsky, I. Market Response to ISO 9000 Certification of Software Engineering Processes. International Journal of IT Standards & Standardization Research. 2006; 4:43–54.

Gadamer, H.G., Truth and Method J. Weinsheimer and D.G. Marshall (trs). Continuum, London, 1975.

Gadamer, H.G., Philosophical Hermeneutics D.E. Linge (tr.). University of California Press, Berkeley, 2004.

Gal, A., Shvaiko, P. Advances in Ontology Matching. In: Dillon, T., Chang, E., Meersman, R., Sycara, K., eds. Advances in Web Semantics, vol. 1. Berlin and Heidelberg: Springer; 2009:176–198.

Garcia-Perez, A., Mitra, A. Tacit Knowledge Elicitation and Measurement in Research Organisations: A Methodological Approach. Electronic Journal of Knowledge Management. 2007; 5:373–386.

Gardenfors, P. Conceptual Spaces. Cambridge, MA: MIT Press; 2000.

Geertz, C. Deep Play: Notes on the Balinese Cockfight. Daedalus. 2005; 134:56–86.

Germonprez, M., Avital, M., Srinivasan, N. The Impacts of the Cascading Style Sheet Standard on Mobile Computing. International Journal of IT Standards & Standardization Research. 2006; 4:55–69.

Giunchiglia, F., Shvaiko, P., Yatskevich, M. S-Match: An Algorithm and an Implementation of Semantic Matching. In: Davies J., Fensel D., Bussler C., Studer R., eds. The Semantic Web: Research and Applications. Berlin and Heidelberg: Springer; 2004:61–75.

Goddard, C. NSM Semantics in Brief. http://www.une.edu.au/directory. 2002. [(accessed 23 October 2009)].

Goddard, C., Wierzbicka, A. Meaning and Universal Grammar: Theory and Empirical Findings. Amsterdam: John Benjamins Publishing Company; 2002.

Goldstone, R.L., Rogosky, B.J. Using Relations within Conceptual Systems to Translate Across Conceptual Systems. Cognition. 2002; 84:295–320.

Gopnik, A. The Theory Theory as an Alternative to the Innateness Hypothesis. In: Antony L., Hornstein N., eds. Chomsky and His Critics. New York: Wiley-Blackwell; 2003:238–254.

Habermas, J., The Theory of Communicative Action T. McCarthy (tr.). Beacon Press, Boston, 1987.

Hacking, I. The Social Construction of What?. Cambridge, MA: Harvard University Press; 1999.

Hacking, I. Historical Ontology. Cambridge, MA: Harvard University Press; 2002.

Hafeez, K., Alghatas, F. Knowledge Management in a Virtual Community of Practice using Discourse Analysis. Electronic Journal of Knowledge Management. 2007; 5:29–42.

Halevy, A. Why Your Data Won’t Mix. Queue. 2005; 3:50–58.

Harris, R.A. The Linguistic Wars. New York: Oxford University Press; 1993.

Härtwig, J., Böhm, K. A Process Framework for an Interoperable Semantic Enterprise Environment. Electronic Journal of Knowledge Management. 2006; 4:39–48.

Hayes, P. RDF Semantics. W3C recommendation, W3C http://www.w3.org/TR/rdf-mt/. 2004. [(accessed 20 January 2010)].

Hayes, P., Patel-Schneider, P.F., Horrocks, I. OWL Web Ontology Language Semantics and Abstract Syntax. W3C recommendation, W3C http://www.w3.org/TR/owl-semantics/. 2004. [(accessed 20 January 2010)].

Hayes, P., Eskridge, T.C., Saavedra, R., Reichherzer, T., Mehrotra, M., Bobrovnikoff, D., Collaborative Knowledge Capture in Ontologies. Proceedings of the 3rd International Conference on Knowledge Capture. Association for Computing Machinery, New York, 2005:99–106.

Heidegger, M., Being and Time J. Macquarrie and E. Robinson (trs). Blackwell Publishing, Oxford, 1962.

Hemphill, T.A. Technology Standards Development, Patent Ambush, and US Antitrust Policy. Technology in Society. 2005; 27:55–67.

Hine, C. Databases as Scientific Instruments and their Role in the Ordering of Scientific Work. Social Studies of Science. 2006; 36:269.

Hodges, W. Tarski’s Truth Definitions. In: Zalta E.N., ed. The Stanford Encyclopedia of Philosophy. Stanford, CA: Metaphysics Research Lab, Center for the Study of Language and Information, Stanford University, 2008.

Horkheimer, M., Adorno, T.W., Dialectic of Enlightenment: Philosophical Fragments E. Jephcott (tr.). Stanford University Press, Stanford, 2002.

Hughes, V., Jackson, P. The Influence of Technical, Social and Structural Factors on the Effective Use of Information in a Policing Environment. Electronic Journal of Knowledge Management. 2004; 2:65–76.

Hurd, J., Isaak, J. IT Standardization: The Billion Dollar Strategy. International Journal of IT Standards & Standardization Research. 2005; 3:68–74.

Isaak, J. The Role of Individuals and Social Capital in POSIX Standardization. International Journal of IT Standards & Standardization Research. 2006; 4:1–23.

Kao, A.H. Montague Grammar. http://www-personal.umich.edu/~akao/NLP_Paper.htm. 2004. [(accessed 18 January 2010)].

Katz, M.L., Shapiro, C. Network Externalities, Competition, and Compatibility. American Economic Review. 1985; 75:424–440.

Keller, R. Analysing Discourse. An Approach from the Sociology of Knowledge. Forum: Qualitative Social Research. 2005; 6:32.

Komatsu, L.K. Recent Views of Conceptual Structure. Psychological Bulletin. 1992; 112:500.

Kracht, M. Compositionality in Montague Grammar. http://www.homes.uni-bielefeld.de/mkracht/html/montague.pdf. 2008. [(accessed 26 July 2010)].

Krechmer, K. Open Standards Requirements. International Journal of IT Standards & Standardization Research. 2006; 4:43–61.

Kuhn, T.S. The Structure of Scientific Revolutions, 2nd edn. Chicago, IL: University of Chicago Press; 1970.

Kurihara, S. Foundations and Future Prospects of Standards Studies: Multidisciplinary Approach. International Journal of IT Standards & Standardization Research. 2008; 6:1–20.

Lakoff, G. Women, Fire and Dangerous Things. Chicago, IL: University of Chicago Press; 1987.

Lakoff, G., Johnson, M. Metaphors We Live By. Chicago, IL: University of Chicago Press; 1980.

Lanzenberger, M., Sampson, J.J., Rester, M., Naudet, Y., Latour, T. Visual Ontology Alignment for Knowledge Sharing and Reuse. Journal of Knowledge Management. 2008; 12:102–120.

Latour, B. We Have Never Been Modern. Cambridge, MA: Harvard University Press; 1993.

Latour, B. On Using ANT for Studying Information Systems: A (Somewhat) Socratic Dialogue. In: Avgerou C., Ciborra C., Land F., eds. The Social Study of Information and Communication Technology: Innovation, Actors and Contexts. Oxford: Oxford University Press; 2004:62–76.

Latour, B., Woolgar, S. Laboratory Life: The Construction of Scientific Facts [1979]. 2nd edn. Princeton, NJ: Princeton University Press; 1986.

Lausen, H., Ding, Y., Stollberg, M., Fensel, D., Hernandez, R.L., Han, S.K. Semantic Web Portals: State-Of-The-Art Survey. Journal of Knowledge Management. 2005; 9:40.

Lauser, B., Johannsen, G., Caracciolo, C., Keizer, J., van Hage, W.R., Mayr, P., Comparing Human and Automatic Thesaurus Mapping Approaches in the Agricultural Domain. DCMI 2008: Proceedings of the 8th International Conference on Dublin Core and Metadata Applications. Universitätsverlag Göttingen, Göttingen, 2008.

Law, J. Notes on the Theory of the Actor-Network: Ordering, Strategy, and Heterogeneity. Systemic Practice and Action Research. 1992; 5:379–393.

Law, J. Enacting Naturecultures: A Note from STS. Centre for Science Studies, Lancaster University http://www.lancs.ac.uk/fass/sociology/papers/law-enacting-naturecultures.pdf. 2004. [(accessed 9 February 2010)].

Leech, G.N. Semantics. Harmondsworth: Penguin; 1981.

Loyola, W. Comparison of Approaches toward Formalising Context: Implementation Characteristics and Capacities. Electronic Journal of Knowledge Management. 2007; 5:203–215.

Lyotard, J.F., The Postmodern Condition: A Report on Knowledge G. Bennington and B. Massumi (trs), F. Jameson (foreword). University of Minnesota Press, Minneapolis, MN, 1984.

Macris, A., Papadimitriou, E., Vassilacopoulos, G. An Ontology-Based Competency Model for Workflow Activity Assignment Policies. Journal of Knowledge Management. 2008; 12:72–88.

Mannheim, K., Ideology and Utopia [1936]. Routledge, Abingdon, 1998.

Marie, A., Gal, A., Boosting Schema Matchers. Proceedings of the OTM 2008 Confederated International Conferences. Springer, Berlin and Heidelberg, 2008:283–300.

Medin, D.L. Concepts and Conceptual Structure. American Psychologist. 1989; 44:1469–1481.

Medin, D.L., Schaffer, M.M. Context Theory of Classification Learning. Psychological Review. 1978; 85:207–238.

Medin, D.L., Ross, N.O., Atran, S., Cox, D., Coley, J., Proffitt, J.B., Blok, S. Folkbiology of Freshwater Fish. Cognition. 2006; 99:237–273.

Mietlewski, Z., Walkowiak, R. Knowledge and Life Cycle of an Organization. Electronic Journal of Knowledge Management. 2007; 5:449–452.

Minsky, M. A Framework for Representing Knowledge. http://dspace.mit.edu/handle/1721.1/6089. 1974. [(accessed 19 January 2010)].

Montague, R. The Proper Treatment of Quantification in Ordinary English. In: Portner P., Partee B.H., eds. Formal Semantics: The Essential Readings. Oxford: Blackwell Publishing, 1974.

Mueller-Vollmer, K. The Hermeneutics Reader: Texts of the German Tradition from the Enlightenment to the Present. New York: Continuum International Publishing Group; 1988.

Nonaka, I.A., Takeuchi, H.A. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford: Oxford University Press; 1995.

Norberg, A.L. Oral History Interview with Marvin L. Minsky. http://www.cbi.umn.edu/oh/display.phtml?id=107. 1989. [(accessed 19 January 2010)].

Noy, N.F. Semantic Integration: A Survey of Ontology-Based Approaches. SIGMOD Record. 2004; 33:65–70.

Okafor, E.C., Osuagwu, C.C. Issues in Structuring the Knowledge-base of Expert Systems. Electronic Journal of Knowledge Management. 2007; 5:313–322.

Park, S. Some Retrospective Thoughts of an Economist on the 3rd IEEE Conference on Standardization and Innovation in Information Technology. International Journal of IT Standards & Standardization Research. 2004; 2:76–79.

Partee, B.H. Reflections of a Formal Semanticist. In: Compositionality in Formal Semantics: Selected Papers. Malden, MA: Blackwell; 2004.

Parthasarathy, B., Srinivasan, J. How the Development of ICTs Affects ICTs for Development: Social Contestation in the Shaping of Standards for the Information Age. Science Technology & Society. 2008; 13:279–301.

Pinker, S. The Language Instinct. New York: HarperCollins; 1995.

Popper, K.R., Adorno, T.W. The Positivist Dispute in German Sociology. London: Heinemann; 1976.

Quillian, M.R. Word Concepts: A Theory and Simulation of Some Basic Semantic Capabilities. Behavioral Science. 1967; 12:410–430.

Quine, W.V.O., From a Logical Point of View [1953]. 2nd edn. Harvard University Press, Cambridge, MA, 1980.

Rahm, E., Bernstein, P.A. A Survey of Approaches to Automatic Schema Matching. VLDB Journal. 2001; 10:334–350.

Restler, S.G., Woolis, D. Actors and Factors: Virtual Communities for Social Innovation. Electronic Journal of Knowledge Management. 2007; 5:89–96.

Rogers, T.T., McClelland, J.L. Semantic Cognition: A Parallel Distributed Processing Approach. Cambridge, MA: MIT Press; 2004.

Rosch, E. Cognitive Representations of Semantic Categories. Journal of Experimental Psychology: General. 1975; 104:192–233.

Ross, N., Medin, D.L. Ethnography and Experiments: Cultural Models and Expertise Effects Elicited with Experimental Research Techniques. Field Methods. 2005; 17:131.

Saussure, F. Course in General Linguistics, R. Harris (tr.), C. Bally and A. Sechehaye (eds), with A. Riedlinger. Open Court Classics, Chicago, IL, 1986.

Schmitz, V., Leukel, J. Findings and Recommendations from a Pan-European Research Project: Comparative Analysis of E-Catalog Standards. International Journal of IT Standards & Standardization Research. 2005; 3:51–65.

Searle, J.R. Speech Acts. Cambridge, UK: Cambridge University Press; 1969.

Sellars, W., Empiricism and the Philosophy of Mind [1956]. Harvard University Press, Cambridge, MA, 1997.

Shen, X. Developing Country Perspectives on Software: Intellectual Property and Open Source. International Journal of IT Standards & Standardization Research. 2005; 3:21–43.

Shvaiko, P., Euzenat, J. A Survey of Schema-Based Matching Approaches. Journal on Data Semantics. 2005; 4:146–171.

Shvaiko, P., Euzenat, J., Ten Challenges for Ontology Matching. OTM 2008: Proceedings of the 7th International Conference on Ontologies, Databases, and Applications of Semantics (ODBASE). Springer, Berlin and Heidelberg, 2008:1164–1182.

Shvaiko, P., Euzenat, J. Ontology Alignment Evaluation Initiative. http://oaei.ontologymatching.org. 2009. [(accessed 22 February 2010)].

Smart, A., Tutton, R., Martin, P., Ellison, G.T.H., Ashcroft, R. The Standardization of Race and Ethnicity in Biomedical Science Editorials and UK Biobanks. Social Studies of Science. 2008; 38:407.

Soley, M., Pandya, K.V. Culture as an Issue in Knowledge Sharing: A Means of Competitive Advantage. Electronic Journal on Knowledge Management. 2003; 1:205–212.

Sommerlund, J. Classifying Microorganisms: The Multiplicity of Classifications and Research Practices in Molecular Microbial Ecology. Social Studies of Science. 2006; 36:909–928.

Strang, T., Linnhoff-Popien, C., Frank, K. CoOL: A Context Ontology Language to Enable Contextual Interoperability. In Distributed Applications and Interoperable Systems, vol. 2893, Berlin: Springer; 2003:236–247.

Sure, Y., Erdmann, M., Angele, J., Staab, S., Studer, R., Wenke, D., OntoEdit: Collaborative Ontology Development for the Semantic Web. ISWC 2002: Proceedings of the First International Semantic Web Conference 2002. vol. 2342. Springer, Berlin, 2002:221–235.

Swidler, A., Arditi, J. The New Sociology of Knowledge. Annual Review of Sociology. 1994; 20:305–329.

Tarnacha, A., Maitland, C. Structural Effects of Platform Certification on a Complementary Product Market: The Case of Mobile Applications. International Journal of IT Standards & Standardization Research. 2008; 6:48–65.

Tarski, A. Logic, Semantics, Metamathematics. Indianapolis, IN: Hackett Publishing Company; 1957.

Tartir, S., Arpinar, I.B., Moore, M., Sheth, A.P., Aleman-Meza, B., OntoQA: Metric-Based Ontology Quality Analysis. Proceedings of IEEE Workshop on Knowledge Acquisition from Distributed, Autonomous, Semantically Heterogeneous Data and Knowledge Sources. IEEE Computer Society, 2005:45–53.

Van Wegberg, M. Standardization and Competing Consortia: The Trade-Off between Speed and Compatibility. International Journal of IT Standards & Standardization Research. 2004; 2:18–33.

Varela, F.J., Thompson, E., Rosch, E. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press; 1992.

Volkov, D., Garanina, T. Intangible Assets: Importance in the Knowledge-Based Economy and the Role in Value Creation of a Company. Electronic Journal of Knowledge Management. 2007; 5:539–550.

Vrandečić, D., Sure, Y., How to Design Better Ontology Metrics. Proceedings of the 4th European Conference on The Semantic Web. Springer, Berlin and Heidelberg, 2007:311–325.

Wierzbicka, A. Lingua Mentalis: The Semantics of Natural Language. London: Academic Press; 1980.

Wittgenstein, L. Tractatus Logico-Philosophicus. Mineola, NY: Dover Publications; 1921.

Wittgenstein, L. Philosophical Investigations [1953]. 3rd edn. Oxford: Basil Blackwell; 1967.

Yao, H., Orme, A.M., Etzkorn, L. Cohesion Metrics for Ontology Design and Application. Journal of Computer Science. 2005; 1:107–113.

Zhao, H. ICT Standardization: Key to Economic Growth? International Journal of IT Standards & Standardization Research. 2004; 2:46–48.

Zhdanova, A.V., Shvaiko, P., Community-Driven Ontology Matching. ESWC 2006: Proceedings of the Third European Semantic Web Conference 2006. vol. 4011. Springer, Berlin and Heidelberg, 2006:34–49.
