The Synergy of NLP and Computational Lexicography Tasks

Ken Litkowski, CL Research, 9208 Gue Road, Damascus, Maryland 20872,


Working on machine-readable dictionaries for NLP applications concurrently with lexicographic tasks for dictionary publishers and lexicographers provides a synergistic environment with benefits for both. We describe several lexicographic tasks and the use of machine-tractable dictionaries and automatically created semantic networks for word sense disambiguation, question-answering, and information extraction, and we show how each has benefited from this relationship. We also describe new machine-readable dictionaries available to the academic and commercial research communities.


We report several novel activities in investigating machine-readable dictionaries (MRDs), using dictionaries not previously available to the research community. This work is noteworthy because of the close interaction with lexicographers of these publishers and the focus on using these MRDs for specific NLP tasks, particularly word-sense disambiguation, question-answering, and information extraction. We describe our efforts in: (1) creating machine-tractable dictionaries (MTDs) from the MRDs, particularly linguistically relevant information not usually captured, (2) definition parsing and pattern matching for creating semantic network links, and (3) consistency analysis, including mapping definitions among dictionaries, primitive-finding analysis of definitions in thesaurus categories, and augmenting sense descriptions with semantic slots (for word sense disambiguation and information extraction).

1 Creating Machine-Tractable Dictionaries

To perform this work, we have used the DIMAP dictionary creation and maintenance programs with (The Macquarie Dictionary, 1997) and (The New Oxford Dictionary of English, 1998) (NODE).(1) Both dictionaries exist in databases with their own markup tags. The first step in gaining NLP control over the dictionaries is putting them into a general format where the dictionary data can be readily accessed. This involved uploading the dictionary data into DIMAP, where the data can be manipulated and analyzed in a GUI format. Although the data could probably have been put directly into the proper format, we wrote conversion programs (about 2 person-weeks of effort for each dictionary, spread out over a longer period).

The conversion program generally extracts the dictionary data into explicit fields that match the tags used by the publishers. We describe the general principles guiding the creation of the MTD and then some specific mechanisms for phrases and collocations and other syntactic and semantic information which can be identified during the conversion process.

1.1 Lexical Redundancy

In addition to attempting to capture all the data, we have followed a general principle of lexical redundancy (see (Jackendoff 1975)) of trying to create entries for all variations of a lexeme, repeating all the information, but keeping links to the root entry and definitions. Thus, we keep cross-references from the variant to the root, entering all the data from the root entry, but then creating a varOf link to the root inside the variant. This is in addition to the dictionaries’ own systems of cross-references, for which propagation of data would take place after the data are uploaded into DIMAP.

We also maintain explicit links between runons (undefined words “run on” to an entry, the result of some suffixation process) and their root entries. We note that these derivations are “derived from” a root form. We have used this information directly in question-answering: when we look for “who assassinated McKinley”, we note that assassination is “derived from” assassinate. We later populate the entry for assassination with a definition by matching suffixes to create the act of assassinating.
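The suffix-matching step can be sketched as follows. The rule table and the function name are our own illustrations of the idea, not the actual DIMAP mechanism, and a real rule set would be far larger:

```python
# Sketch: generating a definition for a runon from its root form by
# suffix matching. Rules and names here are illustrative only.

SUFFIX_RULES = [
    # (runon suffix, root suffix, definition template)
    ("ation", "ate", "the act of {root}ing"),   # assassination <- assassinate
    ("ment",  "",    "the act of {root}ing"),   # establishment <- establish
    ("er",    "",    "one who {root}s"),        # singer <- sing
]

def derive_definition(runon: str, root: str):
    """Return a generated definition for `runon`, or None if no rule fits."""
    for run_suf, root_suf, template in SUFFIX_RULES:
        if runon.endswith(run_suf) and runon[: -len(run_suf)] + root_suf == root:
            # Drop a final 'e' before '-ing' (assassinate -> assassinating)
            stem = root[:-1] if root.endswith("e") else root
            return template.format(root=stem)
    return None
```

Applied to the example above, `derive_definition("assassination", "assassinate")` yields “the act of assassinating”.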

1.2 Regular Expressions for Phrases and Collocations

A considerable amount of language use is captured in set phrases, which have been captured by lexicographers in idioms, phrases, and multiword main entries. In Senseval (Kilgarriff & Palmer, 2000), recognition of these units was an important disambiguation tool. The Senseval dictionary data introduced the notion of a kind, corresponding to hyponyms of an entry like band, such as brass band and silver band.

(Litkowski, 2000a) reported the development of kind equations for his lexical database in Senseval, handcrafting phrases into regular expressions centered on their headword. We automated this process to handle phrasal entries in the two dictionaries. For example, in NODE, under happy, the phrase (as) happy as a sandboy (or Brit. Larry or N. Amer. a clam) yielded the kind equation

(as) ~ as (a sandboy|Larry|a clam)

with ~ marking the target word or entry (i.e., happy) under which the phrase appears. But we also generated 6 headwords containing all the variations (i.e., with and without the leading as and with each of the other alternatives), each containing the definition for the phrase. Frequently, such phrases contain elements like one’s, which we convert into [prpos] in the equations, indicating that any possessive pronoun will satisfy this element. In processing these phrases from the Macquarie dictionary and NODE, we specifically print out all instances of variant phrases. We estimate our success in producing these equations as almost perfect.
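The expansion of a kind equation into its variant headwords can be sketched as follows. This is a simplification that assumes the equation syntax shown above (parenthesized groups are optional elements or |-separated alternatives); `expand` is a hypothetical name, not a DIMAP function:

```python
import itertools
import re

def expand(equation: str, target: str):
    """Enumerate every surface variant generated by a kind equation."""
    # Tokenize into parenthesized groups and plain words
    parts = re.findall(r"\([^)]*\)|\S+", equation)
    slots = []
    for part in parts:
        if part == "~":
            slots.append([target])                 # the target headword
        elif part.startswith("("):
            inner = part[1:-1]
            if "|" in inner:
                slots.append(inner.split("|"))     # required alternatives
            else:
                slots.append(["", inner])          # optional element
        else:
            slots.append([part])                   # literal word
    return [" ".join(w for w in combo if w)
            for combo in itertools.product(*slots)]
```

For the happy example, `expand("(as) ~ as (a sandboy|Larry|a clam)", "happy")` produces exactly the 6 variant phrases noted above, from “as happy as a sandboy” to “happy as a clam”.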

We were able to take this even further with the NODE data, which often identifies particle usage in an example. For one sense of hand (“an active role in influencing something”, in the set phrase a hand), one example is “he had a big hand in organizing the event”. By noting the emphasized words in the example, we were able to create a clue

had [a big] ~ in

where the bracketed phrase represents either a literal or a lexical preference.

1.3 Miscellaneous Data

Many definitions begin with parenthesized expressions such as handle (“(of a vehicle) respond in a specified manner when being driven or controlled”), habit-forming (“(of a drug or activity) addictive”), or hamza (“(in Arabic script) a symbol representing a glottal stop”). For verbs, the expression is extracted as a TypicalSubject; for adjectives, as a TypicalModificand; and for “in” phrases, as a specialized domain (in addition to the dictionaries’ explicit mechanisms for identifying subject areas). These constitute lexical preferences from the dictionary data.

A unique feature of NODE is the notion of a “core sense”: the definitions are arranged with an attempt to capture the most important senses in current usage and then to characterize subsenses which extend the meaning of the core sense. We capture this hierarchical structure by explicitly identifying that a sense is so related; this will be used to analyze the ways in which the subsenses add components of meaning.

Many entries come with various forms of “usage notes” specifying other words (typically particles) or accompanying phrase types (e.g., adverbials of direction). These are extracted into specific features that can then be used as disambiguating devices.

2 Definition parsing and pattern matching for semantic network creation

The principal functionality of DIMAP is to parse and analyze definitions to create semantic relations between entries. This functionality is an end in itself, but also serves lexicographic needs (described in the next section on definition consistency analysis) and allows bootstrapping more lexical information (first noted by (Richardson, 1994)).

Identification of semantic links takes place in two ways in DIMAP: (1) hard-coded routines that place definitions into sentence frames parsed into constituent structures and then analyzed to extract links and (2) regular expression patterns containing literals or parts of speech, added as supplemental “parts of speech” (“defining patterns”) to the parsing dictionary.

While definitions generally correspond to constituents of sentences (such as NPs for noun definitions, and infinitive phrases for verb definitions), there are several nuances that may provide misleading results and that make it difficult to parse them directly. Transitive verb definitions frequently contain a parenthesized expression specifying lexical preferences for the object of the verb; the parentheses (but not the contents) need to be removed, while remembering that what is contained in the parentheses should be extracted as the sense’s TypicalObject. Many transitive verb senses have no object (e.g., hurry, “cause to move or proceed with haste”), where a placeholder “something” should be inserted after “cause”.

Many definitions contain words (such as “especially”) that need to be treated differently. For habit (“an addictive practice, especially one of taking drugs”), there are really two definitions present. The first is “an addictive practice” and the second should be transformed into “the addictive practice of taking drugs”.

Further, as described in the previous section, we may know the typical subject of a verb or the typical modificand of an adjective. These can be inserted into sentence frames. Thus, for the example of habit-forming (which has the definition “addictive”), where we identified the typical modificand “a drug or activity”, we would want to parse a sentence “This is a drug or activity that is addictive”. Analyzing such a definition would then give rise to a qualia structure for the adjective habit-forming as modifying the formal quale of drug or activity (as outlined in (Pustejovsky, 1995)).

After creating and parsing the sentences, the parse output (a parse tree showing the sentence constituents, with the words in the sentence as leaf nodes of the tree) is analyzed to identify some key semantic relations, most notably the hypernym(s) to be associated with the sense.

Defining patterns in definitions are also significant sources of semantic relations (Ahlswede & Evens, 1988). They were also used extensively in creating EuroWordNet (Vossen, 1998). Analysis of definition parse output is a significant source of semantic relations identified in MindNet (Montemagni & Vanderwende, 1993). In addition to using these previous findings, interactions with lexicographers at Macquarie and Oxford are assisting with further identification of appropriate patterns.(2)

For noun, verb, and preposition definitions, we seek the leading NP (and its head noun(s)), the main verb(s), and the final preposition (or verb, if none) as the hypernym. For nouns, we examine whether the head of the first NP is “empty” (e.g., the phrase “a kind of”), where we would then look for the head noun of the following PP as the hypernym(s). We search the parse tree for “manner” PPs, extracting the adjective modifying “manner” to provide a Manner relation (usually for a verb definition).
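The empty-head heuristic for noun definitions can be sketched as follows. A real implementation works over the full parse tree; this version works over raw tokens purely for illustration, and the word lists are our own assumptions:

```python
# Sketch: hypernym extraction for noun definitions. Take the head of the
# leading NP; if that head is "empty" (e.g. "kind of"), descend into the
# following of-PP. Token-based simplification of a parse-tree analysis.

PREPS = {"of", "in", "for", "with", "by", "to", "from", "on"}
EMPTY_HEADS = {"kind", "type", "sort", "variety", "form"}

def noun_hypernym(definition: str):
    # Keep only the first comma-delimited segment ("an addictive practice, ...")
    words = definition.lower().split(",")[0].split()
    # NP head approximated as the last word before the first preposition
    np = []
    for w in words:
        if w in PREPS:
            break
        np.append(w)
    if not np:
        return None
    head = np[-1]
    if head in EMPTY_HEADS and "of" in words:
        rest = words[words.index("of") + 1:]
        pp = []                       # head of the following PP, same rule
        for w in rest:
            if w in PREPS:
                break
            pp.append(w)
        if pp:
            return pp[-1]
    return head
```

Thus a definition like “a kind of musical instrument” yields instrument rather than kind as the hypernym.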

In examining the parse tree, we look particularly for prepositional phrases and whether the preposition has an associated “defining pattern” as a part of speech. For of, we have associated several such patterns, including

(1) made ~ rep01(det) rep0n(adj) noun sr(has-constituent)

(2) adj(nbrth) noun ~ rep01(det(0)) rep0n(adj) noun sr(mem-of)

(3) purpose ~ adj noun sr(purpose)

where ~ corresponds to the target preposition (i.e., of), specific words correspond to literals to be matched, and parts of speech correspond to matching any word with that part of speech. The rep01, rep0n, and rep1n correspond to matching 0 or 1, 0 to n, or 1 to n occurrences of the given part of speech. The final sr identifies the type of relation that is created. All material on the right of the ~ is extracted as the link from the semantic relation, except that parts of speech with a following 0 are not included.
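The matching behavior of this notation can be sketched against part-of-speech-tagged tokens as follows. This is our own simplified reading of the notation (it omits the (0)-suppression and rep1n cases), not the DIMAP matcher:

```python
import re

# Sketch: matching a defining pattern against (word, tag) pairs.
# Literals match themselves, bare parts of speech match any word with
# that tag, rep01(x)/rep0n(x) match 0-1 or 0-n words tagged x, and ~
# matches the target preposition. Only words to the right of ~ are
# collected, as in the extraction rule described above.

def match(pattern, tagged):
    """Return the words bound to the right of ~, or None on failure."""
    def rec(pi, ti, after, bound):
        if pi == len(pattern):
            return bound
        tok = pattern[pi]
        m = re.fullmatch(r"rep0([1n])\((\w+)\)", tok)
        if m:
            limit = 1 if m.group(1) == "1" else len(tagged) - ti
            for n in range(limit, -1, -1):       # greedy, then backtrack
                span = tagged[ti:ti + n]
                if all(t == m.group(2) for _, t in span):
                    extra = [w for w, _ in span] if after else []
                    r = rec(pi + 1, ti + n, after, bound + extra)
                    if r is not None:
                        return r
            return None
        if ti >= len(tagged):
            return None
        word, tag = tagged[ti]
        if tok == "~":
            return rec(pi + 1, ti + 1, True, bound)
        if tok == word or tok == tag:            # literal or part of speech
            return rec(pi + 1, ti + 1, after, bound + ([word] if after else []))
        return None
    return rec(0, 0, False, [])
```

Against a fragment like “made of a strong alloy”, pattern (1) binds “a strong alloy” as the link for the has-constituent relation.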

The result of the definition parsing and pattern matching is an augmentation of the dictionaries with semantic relation links. The resultant semantic network, particularly when viewed through its hypernymic links, is, in effect, an ontology; the other types of links provide slots and values for various semantic components of the individual senses. A considerable number of synonym links are also established, so that the dictionaries are similar to WordNet as well.

In parsing the entire Macquarie dictionary (130,000 entries, 281,000 definitions, taking 17 hours), about 223,000 semantic relations were identified automatically.(3) In a limited assessment of the hypernyms, lexicographers found an agreement rate of about 75 percent for the nouns and verbs.

The Macquarie lexical database (including a thesaurus) was successfully used in the TREC9 question-answering track (Litkowski, 2000b). For “where” questions, the database was able to use location components in definitions in judging whether there was a match between a proposed answer’s definition and the specifications in the question. For questions involving “size” determinations, in potential answers containing numbers modifying some noun, it was possible to examine the hypernym of the noun to determine if it is a “unit”. For questions like “which city”, it was possible to combine the dictionary and the thesaurus data, since Macquarie provides direct links to its thesaurus from individual definitions. Thus, for example, where “Shanghai” has “municipality” as its hypernym, it was possible to compare the thesaurus categories of “municipality” and “city” and make the judgment that “Shanghai” was a viable answer.

3 Definition consistency analysis

An important lexicographic task is that of maintaining consistency throughout a single dictionary and across dictionaries. We describe three such tasks: (1) maintaining consistency among several dictionaries from a single publisher, (2) examining the integrity of a thesaurus, and (3) populating dictionary entries with a superset of information from definitions with similar information. We also identify how these tasks have benefits for NLP tasks.

3.1 Mapping multiple dictionaries

In Senseval (Kilgarriff & Rosenzweig, 2000), the issue of using dictionary definitions arose in two contexts, one concerned with mapping between two dictionary sources (WordNet and the Hector dictionary provided for Senseval) and the other with providing a baseline for disambiguation using definition text and example sentences. The mapping was considered “not altogether satisfactory” and may have contributed to some reductions in performance. The baseline was important as a mechanism for using a straightforward statistical analysis for disambiguation.

The baseline technique followed (Lesk, 1986) in using dictionary definitions and examples as the basis for computing an inverse document frequency and matching surrounding context in the Senseval sentences. (Litkowski, 1999) reported using a variation of this technique for mapping definitions between WordNet and Hector, but found a relatively low success rate (36.1%) when measured against lexicographer mappings.

The issue of mapping senses is important to dictionary publishers who may have many dictionaries (such as thumbnail, children’s, learners’, collegiate, and unabridged). Macquarie publishes 15 such dictionaries, in principle all derived from the unabridged dictionary. However, as reported to us, these dictionaries have been developed, each using slightly different editorial policies (based on the type of dictionary), over a period of time when the unabridged dictionary underwent three major editions. The lexicographic task was to map each of these dictionaries into the unabridged dictionary as a step for maintaining definitional consistency. Given the fact that the dictionaries had at least been published by one publisher, it was expected that the mapping problem would not be as difficult as that of mapping between dictionaries of different publishers.

The Lesk-style word overlap reported in (Litkowski, 1999) was not quite satisfactory and was modified to include labels (such as register, geographic coverage, and subject domain) attached to definitions. In general, the technique used content words only (i.e., using a stop list to eliminate function words) as a percentage of the definitions in the unabridged dictionary. The method also used certain syntactic information (e.g., restriction to same part of speech and within part of speech, to senses having identical syntactic properties such as verb transitivity).

After experimenting with various options (e.g., with and without a stop list), extensive samples for several of the dictionaries were examined by lexicographers to assess the success of the mappings. The agreement rates for the several dictionaries ranged from 90 to 95 percent. Many of the failures could be attributed to the presence of similar wording in several definitions of an entry (and hence, indicative of defining issues that require the lexicographers’ attention). Many definitions were not mapped, indicating the presence of completely different wording and perhaps that a smaller dictionary had senses not included in the unabridged dictionary. The agreement rates were judged satisfactory and full mappings were undertaken for the 15 smaller dictionaries. All dictionaries were uploaded into DIMAP (approximately 20 hours per dictionary) and the mappings performed (a few hours per dictionary).

The mapping results are now being used to link the smaller dictionaries to the unabridged dictionary. Corrections to the mapping will be recorded. This will enable the results to be used as a “gold standard” which can then be used to examine other mapping techniques (such as the componential analysis described in (Litkowski, 1999)). Not only will such techniques improve the quality of the mappings, but they would then be available for application across dictionaries and for use in more general word sense disambiguation. Further, the linkages between the smaller and the unabridged dictionaries are available for further analyses, as described in the next two sections.

3.2 Analysis of primitives in a thesaurus category

As indicated above, the Macquarie dictionary is unique in having sense by sense linkages to thesaurus entries.(4) Such linkages have been used directly in question-answering and would likely have considerable benefit in many other NLP applications, since, when combined with the DIMAP enhancements of adding hypernym links, the resultant lexical database has many structural similarities to WordNet. As described in (Fellbaum, 1998), there are many potential applications for such lexical databases.

The thesaurus links also provide an opportunity for serving additional lexicographic tasks, particularly in improving consistency within both the dictionary and the thesaurus. The Roget-style thesaurus is organized into more than 800 sections (e.g., 038 – Approval), with each section broken down into paragraphs, perhaps several for each part of speech, and subparagraphs separated by semicolons with one word highlighted as the key concept of each semicolon-delimited group. (Kilgarriff & Yallop, 2000) noted that the semicolon-delimited groups are similar to WordNet synsets, with the key concept being either synonymous or acting as a hypernym for the other words in the group. They noted that the paragraphs and subparagraphs are frequently related by simple linguistic operations (such as morphological derivations or scope), but also other kinds of semantic relations. For Nature, the main category of terms for nature was surrounded by noun groups for balance of nature, study of nature, person who studies nature, and adjective groups pertaining to humans, animals, and humans derogatorily. They concluded that the compilers had made use of implicit categorization schemes, possibly with inconsistencies, but not made explicit.

The linkages to the unabridged dictionary make it possible to examine the thesaurus groups in a more principled way. With DIMAP, it is possible to create a subdictionary consisting only of definitions linked to a single thesaurus group. While this is convenient for visually examining a set of definitions, the functionality of DIMAP allows for a more rigorous analysis. DIMAP can analyze the dictionary digraph as established by the hypernym links for a set of definitions. This analysis identifies “non-primitive” words (defined only by words within the group, but not themselves used in defining words in the group), definitional cycles (leading to identification of strong components within the set), and primitive definitions (those used in the formation of the core concepts in the group). (See (Litkowski, 1978) and (Litkowski, 1980) for details of modeling the semantic structure of a dictionary with labelled directed graphs.)
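The digraph analysis can be sketched as follows. The tiny data and function names are illustrative, and the naive pairwise reachability stands in for a proper strong-component algorithm, which is adequate only for small groups:

```python
from collections import defaultdict

# Sketch: analyzing the hypernym digraph of a thesaurus group. An edge
# w -> h means h is a defining (hypernym) word of w within the group.

def analyze(hypernyms):
    """hypernyms: dict word -> set of defining words within the group.
    Returns (non-primitives, strong-component pairs, primitives)."""
    defines = defaultdict(set)          # inverse: word -> words it defines
    for w, hs in hypernyms.items():
        for h in hs:
            defines[h].add(w)
    nodes = set(hypernyms) | set(defines)
    # Non-primitive: defined within the group, but defines nothing in it
    non_primitive = {w for w in nodes if not defines[w]}
    def reach(start):                   # all words reachable via hypernym links
        seen, stack = set(), [start]
        while stack:
            for h in hypernyms.get(stack.pop(), ()):
                if h not in seen:
                    seen.add(h)
                    stack.append(h)
        return seen
    # Definitional cycles: pairs that each reach the other
    cycles = {frozenset({a, b}) for a in nodes for b in nodes
              if a != b and b in reach(a) and a in reach(b)}
    # Primitive: used in defining group members, but not defined in the group
    primitives = {w for w in nodes if w not in hypernyms and defines[w]}
    return non_primitive, cycles, primitives
```

On a miniature approval group, this reproduces the observations below: approbate falls out as non-primitive, approve and sanction form a strong component, and a word like treat surfaces as primitive.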

For verbs of approval, we are able to eliminate as non-primitive such dictionary entries as approbate, advocate, and hold a brief for. We see that approve and sanction are in the same strong component (defining one another and hence not adding to the meaning of either). We see that verbs such as treat, accord, take, think, and receive are used to define the core concepts of approve. We also notice that the verbs confirm and ratify are used synonymously in defining the core concepts of approve, but are not present in the thesaurus category.

These few observations, produced automatically after parsing the definitions to identify hypernyms, provide the beginning of a “rationalization” of the thesaurus category. Examination of the other semantic relations produced by the definition parsing provides further information that can be used to identify meaning components in the concept approval. This process provides a firmer basis on which to (1) group the synonyms in subparagraphs, (2) make the relationships between subparagraphs and paragraphs explicit, and (3) make changes to the underlying definitions for this group of words, so that they are more consistent and are phrased in the simplest terms possible to highlight their meaning.

At a higher level, this type of analysis provides a more consistent dictionary from which a more complete and more accurate semantic network can be created and used for NLP applications. We are currently working to integrate these methods into the analysis of the thesaurus.

3.3 Automatic template and slot generation

Most of the preceding discussion focused on dictionary data as used for syntactic and semantic analysis (i.e., word sense disambiguation). In general, this is what (Allen, 1995) terms the syntactic pattern. However, just as important is the representation of meaning, the logical form, that is to be used in creating a meaning of the text within which a particular sense appears. Several lexicographic tasks move in this direction.

NODE frequently indicates that a verb requires an adverbial of direction. Such an adverbial can be expressed by an adverb or an adverbial prepositional phrase (overtly, in the opposite direction or indirectly, to Montreal). Many definitions of verbs so labelled contain the phrase in a particular direction (e.g., herd “to move in a particular direction”). However, many definitions containing similar phrases (e.g., hand “hold the hand of (someone) in order to help them move in the specified direction”) do not. Identifying such instances assists in bringing greater consistency. In any event, this suggests that adverbs containing such phrases can usefully be labelled with a feature direction. Most importantly, such definitions legitimize the creation of a slot labelled direction. For definitions that indicate a particular direction, the value of the slot has been lexicalized.

More generally, the appearance in a definition of a word like specified, particular, or certain indicates the presence of a slot that must be filled by the context. Thus, for hail (“acclaim enthusiastically as being a specified thing”), the object of the verb must be characterized in some way (here either through an as prepositional phrase or modified by a relative clause). This can even occur in definitions of nouns, such as half-life “the time taken for the radioactivity of a specified isotope to fall to half its original value”, which indicates not only that an isotope must be present in the context, but also that isotope can have a property half-life (something not indicated in the definition of isotope).

More commonly, a slot is predicted by the words someone and something. Frequently, these occur in a verb definition in parentheses and serve as general placeholders for the object of the verb, a very general lexical preference. When they occur in other positions, they provide both semantic and syntactic information. For example, halo (“a circle or ring of something resembling a halo”) creates a slot for the something and attaches properties to the slot that it has shape circle or ring and a relation resembling halo.
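The detection of slot-indicating words can be sketched as follows. The trigger list follows the discussion above, but the function name is hypothetical and a real analysis would work over parsed definitions rather than raw text:

```python
import re

# Sketch: scanning definition text for words that predict an open slot
# to be filled by context ("specified", "particular", "someone", ...).

TRIGGERS = re.compile(r"specified|particular|certain|someone|something")

def predict_slots(definition):
    """Return (trigger word, following word) pairs suggesting open slots."""
    slots = []
    words = definition.replace(",", " ").split()
    for i, w in enumerate(words):
        if TRIGGERS.fullmatch(w):
            nxt = words[i + 1] if i + 1 < len(words) else None
            slots.append((w, nxt))
    return slots
```

For the hail definition, this flags (“specified”, “thing”); for half-life, (“specified”, “isotope”), pointing at the isotope slot discussed above.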

In FrameNet (Baker, et al., 1998), with its finer granularity of semantic roles (cf. (Fillmore, 1968)), the preceding considerations provide some methods for automatic generation of frame elements and frame element groups, with some indication of their required syntactic and semantic contexts. Further analysis of definitions can lead to an even richer identification of frame elements. At SIGLEX99, Fillmore noted that an utterance implicitly contained many nested frames. Using the example of approval (“the act of approving”) cited earlier, the appearance of the word in context implicitly requires filling a slot approver and approval-object to instantiate an approve event.(5) These methods bear some similarity to those described in (Collier, 1998) for automatic template creation, but switch the relative importance of corpus and dictionary evidence. Thus, improving definitional consistency will contribute greatly to the goal of automatic template generation.

Another lexicographic task will benefit the characterization of templates. Given the general interest in collocations, examination of adjective-noun and noun-noun collocations present in a dictionary will provide additional co-compositional characterizations (Pustejovsky, 1995). In MindNet (Richardson, 1997), such collocations present in both definitions and examples were an important source (i.e., a corpus) of statistical associations established in the dictionary entries. We have just begun efforts to extract such collocations for detailed lexicographic analysis (such as lexical preferences) rather than simply statistical associations. Such characterizations are likely to consist not only of overt taxonomic categories, but also covert categories (Cruse, 1986).

This collocational evidence is extremely important in another respect. In developing MindNet (Richardson, 1997), it was found that analysis of definitions and examples under a particular entry could only partially characterize the sense. Such analysis could only reveal direct semantic relations (semrels). Of even greater importance was the identification of “inverted” semrels. Thus, the entry for car provides only limited information of its parts, while there are many entries that indicate they are parts of cars.

The inverted semrels emphasize the importance of the lexicographic objective of trying to bring together all the relevant information pertaining to an entry at one point. Thus, we have begun steps to back-propagate information developed through a forward analysis of definitions, that is, to create inverse links whenever forward links have been created. This will facilitate creation of templates. Thus, in analyzing the definition for pen, we will first create a slot for ink and then note in the definition of indelible that it is a modifier of ink or a pen, and hence will be able to license the value indelible for the ink slot.
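The back-propagation step can be sketched as follows. The relation names and the inverse-naming convention are our own illustrations, not the actual inventory used:

```python
from collections import defaultdict

# Sketch: inverting forward semantic-relation triples so that each
# entry gathers the relations pointed at it from other entries.

INVERSE = {"part-of": "has-part",
           "hypernym": "hyponym",
           "modifies": "modified-by"}

def invert(forward):
    """forward: list of (source, relation, target) triples.
    Returns a dict target -> list of (inverse relation, source)."""
    inverse = defaultdict(list)
    for src, rel, tgt in forward:
        inv = INVERSE.get(rel, rel + "-of")   # fallback naming convention
        inverse[tgt].append((inv, src))
    return dict(inverse)
```

So a forward triple such as (wheel, part-of, car) contributes (has-part, wheel) to the entry for car, and (indelible, modifies, ink) licenses the indelible value for the ink slot as in the pen example.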


We have described two machine-readable dictionaries newly available to the research community and several actions we have performed in making them machine-tractable. Further, we have outlined several novel perspectives for analyzing the dictionary data, frequently building from past work on MRDs. The novelty stems from the synergy of meeting lexicographic objectives and of dealing with problems in real-world NLP applications. At the working level, this has meant a continuing relationship with lexicographers, with different perspectives leading into expanding areas of research (many of them reflected in this paper). We hope that we have added to the value of continued collaboration between dictionary publishers and NLPers (as described in (Kilgarriff, 2000)).

At the same time, it is clear that the efforts described here only scratch the surface. There has been considerable research on lexical issues in NLP and one of our objectives is to make use of such results for analyzing the dictionary. But, in addition, it is valuable to consider how this research can improve the quality and content of dictionaries, in paper form as well as in MRDs.


Ahlswede, T., & Evens, M. (1988). Parsing Vs. Text Processing in the Analysis of Dictionary Definitions. 26th Annual Meeting of the Association for Computational Linguistics. Buffalo, NY: Association for Computational Linguistics.

Allen, J. (1995). Natural language understanding (2nd). Redwood City, CA: The Benjamin/Cummings Publishing Company, Inc.

Baker, C. F., Fillmore, C. J., & Lowe, J. B. (1998). The Berkeley FrameNet Project. 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics.

Collier, R. (1998). Automatic Template Creation for Information Extraction [Diss], Sheffield, England: University of Sheffield.

Cruse, D. A. (1986). Lexical Semantics. Cambridge: Cambridge University Press.

Fellbaum, C. (1998). WordNet: An electronic lexical database. Cambridge, Massachusetts: MIT Press.

Fillmore, C. J. (1968). The case for case. In E. Bach & R. Harms (Eds.), Universals in linguistic theory (pp. 1-90). New York: Holt, Rinehart, and Winston.

Jackendoff, R. (1975). Morphological and semantic regularities in the lexicon. Language, 51(3), 639-671.

Kilgarriff, A. (2000). Business Models for Dictionaries and NLP. International Journal of Lexicography, 13(2), 107-118.

Kilgarriff, A., & Palmer, M. (2000). Introduction to the Special Issue on SENSEVAL. Computers and the Humanities, 34(1-2), 1-13.

Kilgarriff, A., & Rosenzweig, J. (2000). Framework and Results for English SENSEVAL. Computers and the Humanities, 34(1-2), 15-48.

Kilgarriff, A., & Yallop, C. (2000). What's in a thesaurus. Second Conf on Language Resources and Evaluation. Athens.

Lesk, M. (1986). Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone. Proceedings of SIGDOC.

Litkowski, K. C. (1978). Models of the semantic structure of dictionaries. American Journal of Computational Linguistics, Mf.81, 25-74.

Litkowski, K. C. (1980). Requirements of text processing lexicons. 18th Annual Meeting of the Association for Computational Linguistics. Philadelphia, PA: Association for Computational Linguistics.

Litkowski, K. C. (1999). Towards a Meaning-Full Comparison of Lexical Resources. Association for Computational Linguistics Special Interest Group on the Lexicon Workshop. College Park, MD.

Litkowski, K. C. (2000a). SENSEVAL: The CL Research Experience. Computers and the Humanities, 34(1-2), 153-158.

Litkowski, K. C. (2000b). Syntactic Clues and Lexical Resources in Question-Answering. CL Research. Damascus, MD.

The Macquarie Dictionary (A. Delbridge, J. R. L. Bernard, D. Blair, S. Butler, P. Peters, & C. Yallop, Eds.) (3rd). (1997). Australia: The Macquarie Library Pty Ltd.

Montemagni, S., & Vanderwende, L. (1993). Structural patterns versus string patterns for extracting semantic information from dictionaries. In K. Jensen, G. Heidorn & S. Richardson (Eds.), Natural language processing: The PLNLP approach (pp. 149-159). Boston, MA: Kluwer Academic Publishers.

The New Oxford Dictionary of English (J. Pearsall, Ed.). (1998). Oxford: Clarendon Press.

Pustejovsky, J. (1995). The generative lexicon. Cambridge, MA: The MIT Press.

Richardson, S. D. (1997). Determining similarity and inferring relations in a lexical knowledge base [Diss], New York, NY: The City University of New York.

Richardson, S. D. (1994). Bootstrapping statistical processing into a rule-based natural language parser. In Workshop of the Association for Computational Linguistics. Las Cruces, New Mexico.

Vossen, P. (1998). Introduction to EuroWordNet. Computers and the Humanities, 32(2-3), 73-89.

(1) DIMAP is available from CL Research. Machine-tractable and DIMAP-enhanced versions of the two dictionaries are available for academic and commercial research through CL Research. See below for further description of the DIMAP enhancements to the MTDs.

(2) An overview of research on and lists of semantic relations is at

(3) Virtually all of these semantic relations were among the nouns and verbs, with only limited results for adjectives and adverbs.

(4) As of this writing, the linkages are about one-third complete, with estimated completion in April 2001.

(5) We are grateful to Robert Amsler for pointing out this distinction.