OCHRE Ontology

A discussion of the foundational ontology that underlies the OCHRE database platform and makes it useful for any kind of research project by enabling it to model any domain-specific ontology or project taxonomy.

By David Schloen (June 2023)

CHOIR: A Comprehensive Hierarchical Ontology for Integrative Research

The OCHRE platform makes use of an innovative semistructured graph database that facilitates the integration of heterogeneous data derived from multiple sources and recorded in accordance with diverse conceptual distinctions and terminologies. XML Schema was used to specify the logical structure of the OCHRE database as consisting of a set of XML document types suitable for an XQuery database system. These XML document types correspond to basic classes of database items that implement a foundational ontology (meta-ontology), which is universal in scope and can thus model any domain-specific or project-specific ontology.

The term “ontology” in this context denotes a formal specification of concepts and relations in a particular domain of knowledge, whereas the term “schema” denotes a specification of the logical structures of a working computer system. A conceptual ontology is distinct from the logical schema of the computer system in which that ontology is implemented.

The database items in the OCHRE database (keyed-and-indexed data objects represented by XML documents) belong to a limited number of ontological classes: Project, Agent, Spatial, Temporal, Epigraphic, Discourse, Sign, Text, Lexical, Bibliographic, Resource, Concept, Attribute, Value, Query, Set, Sequence, and Hierarchy. Only the Attribute and Hierarchy classes have subclasses.

The ontological classes of OCHRE database items are sometimes called categories, especially in older documentation, but the term “category” is being used here in a non-philosophical way and does not denote an Aristotelian or Kantian category like substance, quantity, quality, relation, etc.

A network-graph specification of these ontological classes is currently being developed using the Web Ontology Language (OWL), one of the Semantic Web standards published by the World Wide Web Consortium (W3C). This will result in a formal specification of the OCHRE database structure in a widely supported format that is not dependent on the OCHRE database software and XML document types.

The OWL version of the OCHRE ontology is called CHOIR (Comprehensive Hierarchical Ontology for Integrative Research) to distinguish it from the OCHRE database implementation of the ontology. The OWL-CHOIR ontology specifies classes, subclasses, and relations that correspond to those found in the OCHRE database. It will serve to define the structure and meaning of RDF triples exported from the OCHRE database as an RDF archive or dynamically exposed on the Web as a SPARQL endpoint for other software to consume.

OWL ontologies are often used in this way to specify the semantics of a set of RDF triples that conform to a given ontology. RDF triples represent subject-predicate-object statements of knowledge. RDF triples that conform to the OWL-CHOIR ontology specification are functionally identical to the item-attribute-value triples in the OCHRE database and preserve the high degree of atomization within that database. In this way, the entities and relations stored in the OCHRE database can be exposed on the Web in a lossless fashion that retains all their nuances and distinctions while conforming to the W3C Semantic Web standards.
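To make the correspondence concrete, here is a minimal sketch, in Python, of how an OCHRE-style item-attribute-value triple might map onto an RDF-style subject-predicate-object triple. The item name, attribute name, value, and namespace URI are all invented for illustration; they are not the actual CHOIR vocabulary.

```python
# An OCHRE-style statement: (item, attribute, value).
# All identifiers here are hypothetical, not actual OCHRE or CHOIR names.
ochre_triple = ("spatial_item_042", "ceramic_ware", "red_burnished")

def to_rdf(item, attribute, value, ns="https://example.org/choir#"):
    """Map an item-attribute-value triple onto a subject-predicate-object
    triple by prefixing each term with an (invented) namespace URI."""
    return (ns + item, ns + attribute, ns + value)

subject, predicate, obj = to_rdf(*ochre_triple)
print(subject)  # https://example.org/choir#spatial_item_042
```

The mapping is one-to-one in both directions, which is why the export described above can be lossless: each item-attribute-value statement in the database becomes exactly one RDF triple.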

The OWL-CHOIR ontology specification will be released at some point with accompanying annotations and examples of RDF triples that conform to it. In the meantime, the OCHRE database page of this website describes the ontological classes and subclasses of database items that correspond to the OWL-CHOIR classes and subclasses.

Philosophical Considerations

I am grateful to my colleague in Philosophy at the University of Chicago, Malte Willer, for his contributions to this section while absolving him of any errors it may contain.—DS

Ever since Aristotle’s Organon, scholars have recognized the central importance of ontologies as instruments for structuring and systematizing human knowledge (the Greek word “organon” means instrument or tool). Their role in the digital era is to help us manage the vast amount of data accumulating in every domain of inquiry. Researchers in many fields face the challenge of combining heterogeneous data from multiple sources to answer questions by means of comprehensive automated querying and analysis. A foundational ontology can facilitate data integration by accommodating multiple domain ontologies (each relevant to a particular domain of knowledge) or project ontologies (each specific to a particular research project) within a larger ontological structure.

The philosophical considerations underlying the OCHRE ontology are discussed here by way of comparison to other foundational ontologies (also called “top-level” or “upper” ontologies) designed to integrate heterogeneous data recorded in accordance with diverse domain ontologies or project-specific ontologies. Examples of foundational ontologies are:

    • BFO (Basic Formal Ontology; Arp, Smith, and Spear 2015)
    • Cyc (Lenat and Guha 1990; Lenat 1995)
    • DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering; Gangemi et al. 2002)
    • SUMO (Suggested Upper Merged Ontology; Pease et al. 2002)

The following are examples of domain ontologies that are not foundational ontologies but serve a similar integrative purpose for broad domains in the humanities:

    • CIDOC CRM (Conceptual Reference Model of the International Committee for Documentation and the International Council of Museums; an ontology for cultural heritage data)
    • EDM (Europeana Data Model; another ontology for cultural heritage data)
    • TEI (Text Encoding Initiative; an XML markup scheme widely used in the humanities that is not formalized as an ontology but expresses a general ontology of the structure and contents of texts)

Computational Challenges for Data Integration

Three kinds of heterogeneity create computational challenges when integrating data drawn from many sources, recorded at different times and places by different people using diverse terminologies and digital formats: (1) syntactic heterogeneity, (2) schematic heterogeneity, and (3) semantic heterogeneity.

1. Syntactic Heterogeneity

Syntactic heterogeneity was formerly a major challenge, but this problem has been largely solved by the widespread adoption of the Unicode character-encoding standard and the Extensible Markup Language (XML) tagged-text format. These standards provide a common mechanism, built into all modern operating systems, for exchanging any kind of data on the Internet across all computing platforms and devices, including both relatively unstructured documents and highly structured data of the kind found in database systems.

2. Schematic Heterogeneity

Schematic heterogeneity arises from the use of different data structures to represent similar information (e.g., tables with different columns or document-markup tags that specify hierarchies with different names and different levels of nesting). This kind of heterogeneity can be overcome by using a domain ontology that specifies a set of classes and relations to be used in a given domain of knowledge. The information contained in diverse data structures (tables and marked-up character strings) can be mapped onto common classes and relations in the domain ontology and so made amenable to automated comparison.
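The mapping step described above can be sketched as follows. Two projects record the same information under different column names, and each supplies a mapping from its local schema to a shared set of attribute names drawn from a common domain ontology; the record layouts, field names, and shared attribute names are all hypothetical:

```python
# Two hypothetical projects describing artifacts with different table schemas.
project_a_record = {"obj_id": "P1.217", "material": "ceramic", "found_at": "Area B"}
project_b_record = {"artifact": "ZH-88", "fabric": "ceramic", "locus": "L203"}

# Each project maps its local field names onto the shared ontology's attributes.
mapping_a = {"obj_id": "identifier", "material": "material", "found_at": "findspot"}
mapping_b = {"artifact": "identifier", "fabric": "material", "locus": "findspot"}

def normalize(record, mapping):
    """Rename a record's local fields to the shared ontology's attribute names."""
    return {mapping[field]: value for field, value in record.items()}

a = normalize(project_a_record, mapping_a)
b = normalize(project_b_record, mapping_b)

# Once normalized, the two records are comparable attribute by attribute.
assert a["material"] == b["material"]
```

The mappings themselves must be authored by someone who understands both schemas; what the ontology contributes is a single target vocabulary so that each project maps once rather than pairwise to every other project.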

An example of a domain ontology is SNOMED CT, which specifies medical concepts (classes and relations) and is widely used in biomedical informatics. A domain ontology that is widely used in the humanities is the TEI textual markup scheme, which reflects a general ontology of textual structures and contents and can be used to integrate data from many different projects.

An ontology is implemented computationally using a description logic, which is “a subset of first-order logic that is restricted to unary relations (called concepts or classes) and binary relations (called roles or properties). The concepts describe sets of individuals in the domain, and the roles describe binary relationships between individuals” (Doan et al. 2012: 328; see also Baader et al. 2017). Description logics are intended to support both knowledge representation and automated reasoning, although they often run into difficulties with the latter.
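The quoted definition can be illustrated directly: concepts are unary relations (sets of individuals in the domain) and roles are binary relations (sets of pairs of individuals). The toy domain below is invented for illustration:

```python
# Concepts: unary relations, i.e., sets of individuals in the domain.
concepts = {
    "Text": {"tablet_1", "tablet_2"},
    "Agent": {"alice"},
}

# Roles: binary relations, i.e., sets of pairs of individuals.
roles = {
    "authored": {("alice", "tablet_1")},
}

# A concept describes a set of individuals ...
assert "tablet_1" in concepts["Text"]
# ... and a role describes binary relationships between individuals.
assert ("alice", "tablet_1") in roles["authored"]
```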

The Web Ontology Language (OWL) we are using for the OWL-CHOIR specification of the OCHRE ontology is an adaptation of description logics suitable for use on the Web. It is a product of the Semantic Web initiative of the World Wide Web Consortium. This non-profit consortium is responsible for the core technical standards that underlie the Web (e.g., HTTP, HTML, and XML) and has published additional technical standards such as OWL to encourage data integration.

3. Semantic Heterogeneity

In addition to syntactic heterogeneity and schematic heterogeneity, for which computational solutions are available, we must reckon with the semantic heterogeneity of computer systems: the fact that the ontological classes and relations specified in different domain ontologies, and implemented in the logical schemas of different systems, reflect different human situations, sets of concerns, and views of the world. Semantic heterogeneity is found in the sciences as well as the humanities. It does not necessarily imply relativism or anti-realism; it is compatible with the metaphysical realism still commonly assumed by many scientific researchers, insofar as there are different ways to describe the same reality. Attempts have been made to overcome semantic heterogeneity by using foundational ontologies defined at a high level of abstraction (see the list above), which are designed to accommodate many different domain-specific and project-specific ontologies.

Semi-automated Semantic Integration

Breathless claims about the possibility of a purely automated mechanism for overcoming semantic heterogeneity amount to the claim of having achieved general, domain-free artificial intelligence (AI). Such claims should be treated with skepticism because they run up against the background-relevance problem repeatedly encountered in “first-wave” symbolic AI, which is discussed below. Nonetheless, there is considerable practical benefit in using an appropriate foundational ontology to enable semi-automated semantic integration, even though the computer system will remain reliant at key points on human intervention.

In the case of OCHRE, semi-automated semantic mappings can be made from one project’s taxonomy to another’s or to widely used domain ontologies that have been published on the Web, which provide a lingua franca between projects (e.g., the Getty Vocabularies for cultural information). But these semantic mappings must be confirmed by humanly embodied researchers themselves, who remain responsible for the semantics of the system. Non-symbolic “connectionist” AI based on deep neural networks can now analyze and generate natural language with remarkable accuracy and can be used to propose semantic matches for ontology alignment. However, these matches must still be confirmed by a human user to ensure that they make sense in the context of the research being done — assuming that the goal is to preserve in the database precise scholarly descriptions that honor the careful conceptual distinctions researchers routinely make in the course of their research.
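The division of labor described above can be sketched as follows. A trivial string-similarity heuristic (the Python standard library's SequenceMatcher) stands in for the AI model that proposes matches, and every candidate is flagged as unconfirmed until a human reviews it. The term lists and threshold are invented for illustration:

```python
from difflib import SequenceMatcher

def propose_matches(terms_a, terms_b, threshold=0.6):
    """Propose candidate alignments between two vocabularies.
    Each candidate carries confirmed=False: it becomes part of the
    semantic mapping only after a human researcher approves it."""
    candidates = []
    for a in terms_a:
        for b in terms_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                candidates.append({"from": a, "to": b,
                                   "score": round(score, 2),
                                   "confirmed": False})  # awaiting human review
    return candidates

matches = propose_matches(["Burial", "Ceramic Vessel"],
                          ["burials", "ceramic vessels"])
```

A deep-learning model would propose better candidates than string similarity, but the architecture is the same: the machine proposes, and the researcher who is responsible for the semantics disposes.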

The very fact that deep-learning AI can assist human experts but still cannot entirely replace the human activity of constructing ontologies and aligning them semantically across domains of knowledge sends us back to a conception of ontology, not as a universal view from nowhere, but as an organon or instrument for achieving particular human purposes in particular contexts. We must strike a balance between the ineradicable semantic role of the embodied human users of a computer system, whose concern is to extract meaning from the system according to their own purposes, and the powerful allure of formalization and automation. Computers can greatly aid researchers in their work of integrating and querying heterogeneous data to answer questions they care about, but computers cannot replace the human work of interpretation.

An Ontology of Practice

The foundational ontology of OCHRE is quite simple in comparison to other top-level ontologies and is more modest in its ambitions. Unlike most other foundational ontologies, it does not try to classify reality itself but merely classifies the kinds of statements scholars may make about what they study, with emphasis on statements about social agents (individual or collective), spatial units of observation (spatial locations and material objects), temporal periods and events, units of textual inscription (epigraphic units) and linguistic discourse, writing systems and the signs of which they consist, lexical units (including dictionary meanings and sub-meanings), and also taxonomic entities (attributes and values) and the taxonomic hierarchies in which they are organized. It is an ontology of the practice of singling out items of interest and relating them to one another and making statements about them.

Social agents constitute a basic class in the OCHRE database and the OWL-CHOIR ontology. Every database item is linked to a social agent (a person or a group of people) who is credited as the observer, creator, author, or editor of that item, usually with the date and time when the item was observed or created and with the proviso that the same item can be described in different ways by different people at different times. In other words, it is assumed that each piece of information in the database consists of a statement uttered by someone about something at a particular time, and it is assumed that it is essential for modern methods of critical research that we be able to identify who said what, and when they said it, about a phenomenon of interest.

Because OCHRE’s foundational ontology is focused on scholars making statements about phenomena of interest to them, it is not committed either to metaphysical realism or to anti-realism. It is not trying to model reality directly but is simply modeling the linguistic practices of spatially and temporally situated agents as they make statements about the world, without regard to whether those statements refer to mind-independent entities.

This rather simple approach, which side-steps many of the semantic goals of other foundational ontologies and makes fewer assumptions, nonetheless provides practical benefits in facilitating the computational integration of heterogeneous data. Data is organized in OCHRE’s semistructured graph database in a way that is similar to an RDF network-graph database (“triple store”) consisting of subject-predicate-object statements. These are called item-attribute-value statements in OCHRE, a triple-structure and terminology that was developed in the early 1990s, before the introduction of RDF in 1999. However, such statements are semantically constrained in OCHRE in a way that enables semantically rich querying and matching because items of interest are classified as belonging to one of a small number of basic ontological classes: Project, Agent, Spatial, Temporal, Epigraphic, Discourse, Sign, Text, Lexical, Bibliographic, Resource, Concept, Attribute, Value, Query, Set, Sequence, and Hierarchy.

Two crucial classes of OCHRE database items that make all the difference are Attribute items and Value items (used to represent qualitative nominal or ordinal values of attributes). A project’s taxonomy or domain-specific ontology is represented by Attribute items and Value items recursively nested in alternate levels of a taxonomic hierarchy, specifying the allowable values of each attribute and the semantic inheritance relations among taxonomic entities. Once the taxonomy of a project has been built in the database (or borrowed from another project), a statement about a property of an item in any class can be made by linking the item to an Attribute item that is linked in turn to a Value item (or itself contains a value, in the case of a numeric attribute). This creates an item-attribute-value triple statement. Relational Attribute items can be used to make a statement about how an item is related to another item, establishing a named relation, supplementing the inter-item relations represented by configurational items in the Set, Sequence, and Hierarchy classes.
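A minimal sketch of this structure, with an invented taxonomy: Attribute items and Value items alternate down the hierarchy (under a Value may sit further Attributes that apply only when that Value has been chosen), and a statement about an item is accepted only if its value is allowed under the given attribute:

```python
# A toy taxonomy with alternating Attribute/Value levels.
# "Paint Color" is an Attribute nested under the Value "Painted",
# so it applies only to items whose Ware is Painted.
taxonomy = {
    "Ware": {                 # Attribute item
        "Plain": {},          # Value item (no nested Attributes)
        "Painted": {          # Value item with a nested Attribute
            "Paint Color": {"Red": {}, "Black": {}},
        },
    },
}

def allowed_values(attribute, tree=taxonomy):
    """Recursively find the Value items allowed under an Attribute item."""
    for attr, values in tree.items():
        if attr == attribute:
            return set(values)
        for subtree in values.values():
            found = allowed_values(attribute, subtree)
            if found is not None:
                return found
    return None

def make_statement(item, attribute, value):
    """Create an item-attribute-value triple, constrained by the taxonomy."""
    vals = allowed_values(attribute)
    if vals is None or value not in vals:
        raise ValueError(f"{value!r} is not an allowed value of {attribute!r}")
    return (item, attribute, value)

stmt = make_statement("vessel_017", "Paint Color", "Red")
```

This is what distinguishes the approach from a generic triple store: the taxonomy constrains which triples can be asserted, which is what makes semantically rich querying and matching possible.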

Each statement in OCHRE is attributed to a named person or group of people represented by an Agent item. Multiple statements about the same items can be made by different persons at different times or by the same person at different times. Thus the database can represent the full range of interpretations of the same phenomenon without privileging any one interpretation and it can attribute each interpretation to its author.

Furthermore, unlike most graph databases, OCHRE recursively nests items of interest within hierarchical tree structures that make statements about part-whole (parthood) relations, semantic class-subclass relations, or grouping (associational) relations among items, depending on the item class. The extensive use of hierarchical tree structures provides a predictable database structure that makes querying much more efficient by exploiting the power of an optimized XQuery processor in a native-XML database.

Moreover, hierarchies enable the use of recursion and allow a high degree of modular re-use in the OCHRE software because the same recursive function, written at the right level of abstraction, can be applied to many different hierarchies consisting of items of different classes. For example, the very same code operates over parthood hierarchies regardless of the item class, be it textual, linguistic, spatial, or temporal. Recursion and modularity make the software code much more compact and easier to maintain.
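The reuse point can be illustrated with a single recursive traversal applied, unchanged, to hierarchies of different item classes; the node structure and sample hierarchies are invented for illustration:

```python
def walk(node, depth=0, out=None):
    """Depth-first traversal of a {name, parts} tree, producing an indented
    outline. Nothing here depends on the class of the items in the tree."""
    if out is None:
        out = []
    out.append("  " * depth + node["name"])
    for part in node.get("parts", []):
        walk(part, depth + 1, out)
    return out

# A spatial parthood hierarchy ...
site = {"name": "Site", "parts": [
    {"name": "Area A", "parts": [{"name": "Locus 1"}, {"name": "Locus 2"}]},
]}

# ... and a textual one, traversed by the very same function.
text = {"name": "Tablet", "parts": [
    {"name": "Obverse", "parts": [{"name": "Line 1"}]},
]}

spatial_outline = walk(site)
textual_outline = walk(text)
```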

The key point here is that ontological classifications of phenomena are not defined in the OCHRE database beyond the basic ontological classes of Project items, Agent items, Spatial items, Temporal items, Epigraphic items, Discourse items, Sign items, Text items, Lexical items, Bibliographic items, Resource items, and Concept items, plus taxonomic Attribute and Value items and the configurational concepts represented by Query, Set, Sequence, and Hierarchy items. All other classifications are project-defined, not ontologically predetermined, being regarded as the product of historically contingent statements made by spatially and temporally (and linguistically and culturally) situated agents about phenomena of interest to them.

Accordingly, most of the classes and relations specified in other foundational ontologies and in domain ontologies are regarded by OCHRE, not as universal or predetermined, but as contingent statements made by particular agents at particular times — in this case, by the creators of the ontology. The classes and relations of other foundational ontologies would be treated as “data” in the OCHRE database and represented by Attribute items and Value items. Likewise, any domain ontology or project-specific ontology can be represented within the OCHRE database, if a research project finds it useful to do so. For example, the BFO foundational ontology could be used for scientific data, the EDM domain ontology for cultural heritage data, and the TEI markup scheme for textual studies. They can all be represented as Attribute items (including the subclass of Relational Attribute items) and Value items within the OCHRE database and so be included within a project’s taxonomic hierarchy.

Finally, it should be said that the built-in OCHRE classes were themselves chosen pragmatically in terms of current scholarly practices and are not assumed to be timeless and universal structures of thought, like Kantian categories. Having said that, these few predefined classes have proved to be appropriate and sufficient for every research project we have encountered or can imagine encountering.

Realism, Anti-realism, and Hermeneutic Plural Realism

It is useful to compare the ontology of OCHRE to Basic Formal Ontology (BFO), a philosophically sophisticated foundational ontology described in the book Building Ontologies with Basic Formal Ontology by Robert Arp, Barry Smith, and Andrew D. Spear (Cambridge, Mass.: MIT Press, 2015). BFO builds on the work of Barry Smith, a professor of philosophy at SUNY Buffalo. Following Edmund Husserl’s Logical Investigations, Smith refers to foundational ontologies as “formal” ontologies in contrast to domain-specific ontologies, which he calls “material” ontologies. A formal ontology is defined as “a representational artifact, comprising a taxonomy as proper part, whose representations are intended to designate some combination of universals, defined classes, and certain relations between them” (Arp, Smith, and Spear 2015).

Smith and his co-authors are philosophically committed to scientific realism and defend the existence of universals. They have constructed BFO accordingly. Thus, BFO has a clear bias toward the natural sciences and its ontological subclasses are geared toward scientific understandings of reality. It is difficult to see how it would assist data integration in the humanities, and even in some sciences, where there is a great deal of ontological heterogeneity. For research in those domains, we need an ontology that helps us keep track of disagreements about what constitutes an object of study, let alone how to describe it, and does not assume that such objects enjoy the same metaphysical status as the objects of natural science. We need an ontology in which the agents making statements about objects of study are first-class members and the focus is on the practice of making such statements.

BFO is an impressive achievement on its own terms and reflects the most philosophically sophisticated approach to ontology design today. However, many philosophers reject universals and subscribe to some form of nominalism and so would not distinguish universals from other kinds of classes when constructing ontologies. It is worth asking whether a useful foundational ontology can be devised that accommodates all the varieties of realism and anti-realism propounded by philosophers and domain specialists across the sciences and the humanities.

The OCHRE ontology is intended to be just such a metaphysically agnostic ontology. To understand the approach it takes, it is useful to revisit the philosophical debate concerning realism and anti-realism, and especially the discussion in the late 1970s between Hubert Dreyfus, Charles Taylor, and Richard Rorty on the difference between the natural and human sciences (Review of Metaphysics 34 [1980]: 3–55). This debate has received a fresh impetus from Dreyfus and Taylor’s recent book Retrieving Realism (Cambridge, Mass.: Harvard University Press, 2015).

Dreyfus and Taylor argue, from the perspective of the hermeneutic phenomenology pioneered by Heidegger and Merleau-Ponty, that a robust form of realism is still philosophically viable despite the rejection by Rorty and others of the possibility of a “view from nowhere” (e.g., in Rorty’s 1979 book Philosophy and the Mirror of Nature). Rorty’s epistemological critique has been very influential in the humanities and social sciences. It is now common to hear scholars speak of the concepts used in the natural sciences as themselves socially constructed and culturally contingent, as opposed to being rooted in a mind-independent realm.

Dreyfus and Taylor, however, are not so ready to dismiss scientific realism. They opt for what they call a “pluralistic robust realism,” in which: “there may be (1) multiple ways of interrogating reality . . . which nevertheless (2) reveal truths independent of us, that is, truths that require us to revise and adjust our thinking to grasp them . . . and where (3) all attempts fail to bring the different ways of interrogating reality into a single mode of questioning that yields a unified picture or theory” (Dreyfus and Taylor 2015: 154).

The philosophical debate between realists and anti-realists is by no means settled. For this reason, in contrast to BFO, OCHRE avoids any claim to realism or anti-realism and aims to classify, not the subject matter of some type of inquiry, but the inquisitive practices themselves. This provides us with a quite simple ontology that can be efficiently implemented in a working computer system and serve as an effective means of semi-automated data integration.

To sum up: the OCHRE ontology does not presume to prescribe the content of what may be stated, in terms of universals or other predefined classes, but more humbly and pragmatically prescribes the structure of agents-making-statements-about-items-of-interest, including potentially conflicting statements about which classes exist and which individuals are members of a given class, as well as statements about the relations among classes and among individuals. Of course, there is still an ontological starting point in terms of OCHRE’s predetermined classes of agency, space, time, discourse, etc. But here we can argue on phenomenological and transcendental grounds à la Heidegger that the spatially and temporally situated utterances of embodied linguistic agents are the inescapable horizon within which any ontological discussion is able to make sense.

Foundational Ontologies and Artificial Intelligence

The ontological approach taken here has the air of modesty in that it does not pretend to carve reality at the joints. It is then natural to ask further how a foundational ontology like that of OCHRE might respond to another famous plea for modesty from the philosophical literature — the one that flows from Hubert Dreyfus’s skepticism about the possibility of modeling all relevant aspects of human knowledge in a purely symbolic system. This skepticism was powerfully articulated in his 1972 book What Computers Can’t Do (Dreyfus 1972; third edition 1992) and stands in tension with the stated ambitions of prominent ontology projects such as Douglas Lenat’s Cyc project, which aims to represent a vast amount of common-sense knowledge to enable automated reasoning in the spirit of symbolic AI (Lenat and Guha 1990; Lenat 1995; cf. Dreyfus 1992: xvi–xxx on the futility of Lenat’s project).

Statements of knowledge and the agents who make them are represented in a digital computer using formal symbols defined according to standardized conventions and combinable according to certain rules. With knowledge of the relevant conventions and rules, one can automate the syntactic and schematic integration of heterogeneous data derived from multiple sources. Beyond this, many people have been enamored of the idea that semantic integration across domains of knowledge could also be achieved by automated reasoning using symbols and rules. This was the focus of early research in symbolic AI from the 1960s to the 1980s. This research led to description logics, which are the basis of modern computational implementations of ontologies, as in the Web Ontology Language, but it did not achieve automated reasoning except in circumscribed domains.

Despite considerable effort and investment over many years, symbolic AI is now considered by many to have failed in practice. This has drawn attention to the long-ignored philosophical criticisms voiced by Dreyfus and others, who argued that symbolic AI is unworkable in principle. In my view, the most effective philosophical critique of symbolic AI is from the perspective of the hermeneutic (“existential”) phenomenology of Heidegger and Merleau-Ponty, who argued that human intelligence depends on the human form of embodiment. This Heideggerian critique of AI was advanced by Dreyfus (1992 [1972]), Charles Taylor (1985), and John Haugeland (1985, 1998). From a very different perspective but arriving at a similar conclusion, John Searle (1980, 1984) argued via his Chinese Room Argument that symbolic AI is impossible because semantics cannot be derived from syntax.

To be sure, symbolic AI can work well in narrowly defined domains, but it fails in open-ended reasoning tasks that depend in unpredictable ways on background knowledge (“common sense”). As Dreyfus (1991: 117–119) pointed out, it is often impossible to determine algorithmically which background knowledge is significant and relevant. The problem of selecting relevant facts with which to reason cannot be solved simply by representing more and more facts inside the computer system because the required kind of relevance can never be captured in a formal calculus but emerges from our physical embodiment as agents engaged moment-by-moment in a “world” of involvements. Relevance is a function of a mode of being that first of all entails preconceptual skillful coping in the physical, social, and cultural situations in which we are embedded, and about which we care, and only secondarily and derivatively entails rational calculation. A computer is not in a situation; hence, as Haugeland (1998: 47) put it: “The trouble with artificial intelligence is that computers don’t give a damn.”

The OCHRE ontology takes into account the limitations of symbolic knowledge representation, description logics, and automated reasoning. It does not try to solve the intractable problem of generalized (domain-free) automated reasoning using formal symbols and rules, on which early AI researchers spent so much effort. This effort is what Haugeland called GOFAI, “Good Old-Fashioned AI,” in contrast to non-symbolic connectionist AI based on neural networks. This more recent form of AI relies on statistical machine learning to do pattern matching and prediction after training on vast amounts of data.

In reaction to GOFAI’s failure in practice, and also (in some circles) in response to Dreyfus’s philosophical critique, work in AI since the 1980s has largely abandoned symbolic knowledge representation and automated reasoning based on predetermined rules. Much of what is today called AI is based on the very different, non-symbolic paradigm of machine learning, which does not rely on rules-based programming but is “a [probabilistic] set of [algorithmic] methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future data, or to perform other kinds of decision-making under uncertainty” (Murphy 2012: 1).

Some philosophically attuned computer scientists and cognitive scientists have made explicit attempts to apply machine learning to the design of computational systems in what they take to be a Heideggerian fashion (e.g., Winograd and Flores 1986; Winograd 1995; Agre 1997; Wheeler 2005; cf. Brooks 1991). The most recent AI work using many-layered deep neural networks and large language models has yielded impressive results, not just in machine translation and image recognition but also in the generation of natural language texts (e.g., ChatGPT) and the generation of images (e.g., DALL-E) from verbal prompts. But deep-learning AI has not overcome the semantic problems encountered by GOFAI in its attempt to simulate generalized intelligence (see Brian Cantwell Smith 2019).

Dreyfus and Taylor would say that this is because AI researchers have not yet abandoned the old rationalist idea — vigorously attacked by Heidegger and his intellectual heirs — that intelligence must somehow be based on having an inner representation of an outer world as opposed to resting fundamentally on our unmediated preconceptual contact with reality as embodied agents inextricably engaged and coping with a “world” of commitments (Dreyfus and Taylor 2015: 71–101). In other words, human intelligence needs a human-like body and deep-learning AI is just as disembodied as old-fashioned symbolic AI. (See the essays in Schear [ed.] 2013 on the debate between Hubert Dreyfus and John McDowell concerning the possibility of a preconceptual embodied understanding of the world as opposed to the view espoused by McDowell and others in the Kantian and analytic tradition that human experience is permeated with conceptual rationality.)

We can see, then, that the semantic integration of data presents a problem of a different order than syntactic or schematic integration. It is not simply an engineering problem but is bound up with fundamental philosophical debates about whether meaning depends on particular social, cultural, and bodily contexts. Nor is it simply a matter of pitting the realism endemic in natural science against the relativism prevalent in the humanities: rejecting the possibility of a view from nowhere, as many scholars in the humanities would, may still be compatible with some form of scientific realism, as Dreyfus and Taylor (2015) have argued in defending a pluralistic robust realism against Richard Rorty.

The OCHRE ontology does not try to decide these difficult questions but merely assumes that, regardless of one’s philosophical stance, a foundational ontology that is computationally useful for semantic integration across projects can represent knowledge in the form of agents making statements in particular contexts. But this means that agency and discourse must themselves be basic classes in the ontology, along with space and time, and it means that the ontology must be able to embrace multiple conflicting statements about the same things.
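The design principle described here, representing knowledge as attributed statements rather than as context-free facts, can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the class names and fields below are invented for the example and do not reproduce OCHRE's actual XML schema or class definitions.

```python
from dataclasses import dataclass

# A hypothetical sketch of "agents making statements in particular contexts."
# Class and field names are illustrative, not OCHRE's actual schema.

@dataclass(frozen=True)
class Agent:
    name: str

@dataclass(frozen=True)
class Statement:
    agent: Agent     # who asserts the property (agency is a basic class)
    subject: str     # the item being described
    attribute: str
    value: str
    context: str     # the discursive context of the assertion, e.g. a project

# Because every assertion carries its agent and context, conflicting
# statements about the same item can coexist in the same database.
statements = [
    Statement(Agent("Researcher A"), "vessel-42", "ware", "Mycenaean", "Project X"),
    Statement(Agent("Researcher B"), "vessel-42", "ware", "Minoan", "Project Y"),
]

def claims_about(subject, stmts):
    """Retrieve every assertion about an item, preserving attribution."""
    return [s for s in stmts if s.subject == subject]

for s in claims_about("vessel-42", statements):
    print(f"{s.agent.name} ({s.context}): {s.attribute} = {s.value}")
```

The point of the sketch is that neither statement overwrites the other; a query returns both claims with their attributions, leaving the conflict of interpretations visible rather than silently resolved.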

This approach conforms to long-standing scholarly practices of critical autonomy and individual attribution and citation: it eschews the uncritical acceptance of anonymous or institutional semantic authority and embraces a productive conflict of interpretations. And it does so within a computationally tractable ontological framework that can accommodate competing interpretations, and competing philosophical stances with regard to realism and relativism, without deciding the question in advance. In this way, we can ensure that ontology plays its proper role as an organon for inquiry, not a replacement for inquiry.

References

Agre, Philip E. 1997. Computation and Human Experience. Cambridge: Cambridge University Press.

Arp, Robert, Barry Smith, and Andrew D. Spear. 2015. Building Ontologies with Basic Formal Ontology. Cambridge, Mass.: MIT Press.

Baader, Franz, Ian Horrocks, Carsten Lutz, and Uli Sattler. 2017. An Introduction to Description Logic. Cambridge: Cambridge University Press.

Brooks, Rodney A. 1991. “Intelligence without Representation.” Artificial Intelligence 47: 139–159.

Doan, AnHai, Alon Halevy, and Zachary Ives. 2012. Principles of Data Integration. Waltham, Mass.: Morgan Kauffman/Elsevier.

Dreyfus, Hubert L. 1991. Being-in-the-World: A Commentary on Heidegger’s Being and Time, Division I. Cambridge, Mass.: MIT Press.

Dreyfus, Hubert L. 1992. What Computers Still Can’t Do: A Critique of Artificial Reason. 3d ed. [orig. ed. 1972] Cambridge, Mass.: MIT Press.

Dreyfus, Hubert L., and Charles Taylor. 2015. Retrieving Realism. Cambridge, Mass.: Harvard University Press.

Gangemi, Aldo, Nicola Guarino, Claudio Masolo, Alessandro Oltramari, and Luc Schneider. 2002. “Sweetening Ontologies with DOLCE.” Pp. 166–181 in A. Gómez-Pérez and V. R. Benjamins (eds.), Knowledge Engineering and Knowledge Management: Ontologies and the Semantic Web. Berlin: Springer.

Haugeland, John. 1985. Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press.

Haugeland, John. 1998. Having Thought: Essays in the Metaphysics of Mind. Cambridge, Mass.: Harvard University Press.

Lenat, Douglas B. 1995. “CYC: A Large-Scale Investment in Knowledge Infrastructure.” Communications of the ACM 38/11: 33–38.

Lenat, Douglas B., and R. V. Guha. 1990. Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Boston: Addison-Wesley.

Murphy, Kevin P. 2012. Machine Learning: A Probabilistic Perspective. Cambridge, Mass.: MIT Press.

Pease, Adam, Ian Niles, and John Li. 2002. “The Suggested Upper Merged Ontology: A Large Ontology for the Semantic Web and Its Applications.” AAAI Technical Report WS-02-11.

Schear, Joseph K., ed. 2013. Mind, Reason, and Being-in-the-World: The McDowell-Dreyfus Debate. New York: Routledge.

Searle, John. 1980. “Minds, Brains and Programs.” Behavioral and Brain Sciences 3: 417–457.

Searle, John. 1984. Minds, Brains and Science. Cambridge, Mass.: Harvard University Press.

Smith, Barry. 2000. “Logic and Formal Ontology.” Manuscrito 23: 275–323.

Smith, Barry. 2003. “Ontology.” Pp. 155–166 in L. Floridi (ed.), Blackwell Guide to the Philosophy of Computing and Information. Oxford: Blackwell.

Smith, Brian Cantwell. 2019. The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge, Mass.: MIT Press.

Taylor, Charles. 1985. “Cognitive Psychology.” Pp. 187–212 in Human Agency and Language: Philosophical Papers, vol. 1. Cambridge: Cambridge University Press.

Wheeler, Michael. 2005. Reconstructing the Cognitive World: The Next Step. Cambridge, Mass.: MIT Press.

Winograd, Terry. 1995. “Heidegger and the Design of Computer Systems.” Pp. 108–127 in A. Feenberg and A. Hannay (eds.), Technology and the Politics of Knowledge. Bloomington: Indiana University Press.

Winograd, Terry, and Fernando Flores. 1986. Understanding Computers and Cognition: A New Foundation for Design. Norwood, N.J.: Ablex.
