
2 editions of Measures and applications of lexical distributional similarity found in the catalog.

Measures and applications of lexical distributional similarity

by Julie Elizabeth Weeds


Published .
Written in English


Edition Notes

Statement: Julie Elizabeth Weeds.
Series: Sussex theses ; S 5639

The Physical Object
Pagination: 201 leaves
Number of Pages: 201

ID Numbers
Open Library: OL22080067M

The literature suggests two major approaches for learning lexical semantic relations: distributional similarity and pattern-based. The first approach recognizes that two words (or multi-word terms), e.g. country and state, are semantically similar based on the distributional similarity of the contexts in which the two words occur.

3. Hierarchical and distributional lexical field theory. For lexical field theory to overcome the weaknesses evident in Louw and Nida's implementation of semantic domains, it could be based on the following methodological principles, which I will explore in this section: distributional measurement and lexicogrammatical categorization.
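To make the contrast between the two approaches concrete, here is a minimal Python sketch (not from any of the works quoted here; the toy corpus, the two-word window, and the single "X such as Y" pattern are all invented for illustration). It scores country and state by the overlap of their context words, and separately extracts one pattern-based relation.

    # Toy illustration of the two approaches to learning lexical semantic relations.
    # Corpus, window size, and the Hearst-style pattern are invented for the example.
    import re
    from collections import Counter

    corpus = ("the state raised taxes . the country raised taxes . "
              "regions such as provinces were merged . "
              "the state signed a treaty . the country signed a treaty .")
    tokens = corpus.split()

    def context_counts(target, window=2):
        """Distributional approach: count words within +/- window of the target."""
        counts = Counter()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for t in tokens[lo:hi] if t != target)
        return counts

    def jaccard(a, b):
        """Overlap of the two context vocabularies (a crude similarity score)."""
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    print(jaccard(context_counts("country"), context_counts("state")))

    # Pattern-based approach: one Hearst-style pattern "X such as Y",
    # read as evidence that Y is a kind of X.
    for hyper, hypo in re.findall(r"(\w+) such as (\w+)", corpus):
        print(f"{hypo} is-a {hyper}")

On this toy corpus the two target words share most of their context words, so the distributional score is high, while the pattern yields a single taxonomic pair.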

Some applications benefit from a detailed representation of structure. For distributional lexical and phrasal semantics, one challenge is to obtain appropriate weights for inference rules (Roller, Erk, and Boleda). Characterising measures of lexical distributional similarity. In: Proceedings of the 20th International Conference on Computational Linguistics, COLING '04, Association for Computational Linguistics.

  • A new sentence similarity measure based on lexical, syntactic, and semantic analysis.
  • It combines statistical and semantic methods to measure similarity between words.
  • The measure was evaluated using state-of-the-art datasets: Li et al., SemEval, CNN.
  • It presents an application to eliminate redundancy in multi-document summarization.

Distributional Semantics involves finding vector space representations of words which are constructed by counting or modeling the contexts in which a particular word appears. According to the Distributional Hypothesis, words with similar vectors can be assumed to have similar meanings.
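A minimal sketch of that idea (the corpus and the window size below are invented): build count-based context vectors for each word and compare them with cosine similarity.

    # Count-based distributional vectors with cosine comparison.
    # The corpus and window size are invented for illustration.
    import math
    from collections import Counter, defaultdict

    corpus = "the cat chased the mouse . the dog chased the cat . the dog barked".split()
    window = 2

    vectors = defaultdict(Counter)              # word -> counts of context words
    for i, word in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                vectors[word][corpus[j]] += 1

    def cosine(u, v):
        dot = sum(u[x] * v[x] for x in set(u) & set(v))
        norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
        return dot / norm if norm else 0.0

    # Under the Distributional Hypothesis, "cat" and "dog" should come out
    # more similar to each other than either is to "barked".
    print(cosine(vectors["cat"], vectors["dog"]))
    print(cosine(vectors["cat"], vectors["barked"]))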


You might also like

Studia Hibernica.
Mica fields of India
Energy-Efficient Compact Screw-In fluorescent Lamp
King Edward the Sixth on the supremacy
Mosbys guide to nursing diagnosis
Heat Treating Library Ferrous Version 1.1
The life-giving power of the cross
forest by night
The sand kings of Oman
Fife Coal Company Limited
effects of Public law fifteen on the local insurance agent
Lessons in elocution, or, A selection of pieces in prose and verse, for the improvement of youth in reading and speaking
A muddy kind of magic
ThanksLiving Treasures
chicken-wagon family

Measures and applications of lexical distributional similarity by Julie Elizabeth Weeds

This thesis is concerned with the measurement and application of lexical distributional similarity. Two words are said to be distributionally similar if they appear in similar contexts. We investigate the properties which make a good measure of lexical distributional similarity.

We start by introducing the concept of lexical distributional similarity. We discuss potential applications, which can be roughly divided into distributional or language modelling applications and semantic applications, and methods of evaluation (Chapter 2).

Characterising Measures of Lexical Distributional Similarity. Julie Weeds, David Weir, Diana McCarthy. In: COLING 2004, Proceedings of the 20th International Conference on Computational Linguistics, Aug 23–27, Geneva, Switzerland. Published by COLING.

One potential application of distributional similarity is in language modelling, where the probabilities of unseen events are estimated from those of distributionally similar events. Other potential applications apply the hypothesised relationship (Harris) between distributional similarity and semantic similarity; i.e., similarity in the meaning of words can be predicted from their distributional similarity.

Julie Weeds and David Weir. A general framework for distributional similarity. In Proceedings of EMNLP, Sapporo, Japan. Julie Weeds. Measures and Applications of Lexical Distributional Similarity. Ph.D. thesis, Department of Informatics, University of Sussex.

Abstract. In practice, lexical chains are typically built using term reiteration or resource-based measures of semantic distance. The former approach misses out on a significant portion of the inherent semantic information in a text, while the latter suffers from the limitations of the linguistic resource it depends upon.
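As a rough sketch of the first option, the snippet below builds chains by term reiteration alone; the tokenisation, the stopword list, and the example text are simplifications invented for illustration, and a resource-based chainer would additionally merge synonyms and related terms (e.g. via WordNet).

    # Naive lexical chaining by term reiteration: group repeated content words
    # into chains and record where each chain starts and ends.
    from collections import defaultdict

    STOP = {"the", "a", "an", "of", "and", "to", "in", "was", "it", "again"}

    def reiteration_chains(text):
        chains = defaultdict(list)              # word -> token positions where it occurs
        for pos, tok in enumerate(w.strip(".,").lower() for w in text.split()):
            if tok and tok not in STOP:
                chains[tok].append(pos)
        # Keep only words that actually recur in the text.
        return {w: ps for w, ps in chains.items() if len(ps) > 1}

    text = ("The engine stalled. The mechanic inspected the engine, "
            "replaced a valve, and tested the engine again.")
    for word, positions in reiteration_chains(text).items():
        print(word, "spans tokens", positions[0], "to", positions[-1])

The span of each chain is what later steps exploit, for instance the segmentation and topic cues listed further below.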

Measures of Distributional Similarity. Lillian Lee, Department of Computer Science, Cornell University, Ithaca, NY. Abstract: We study distributional similarity measures for the purpose of improving probability estimation for unseen cooccurrences.

Our contributions are three-fold, beginning with an empirical comparison of these measures. Text similarity measures play an increasingly important role in NLP. Explicit Semantic Analysis (ESA) measures similarity between words; WordNet is a large lexical database; CL-ESA [18] is a multilingual generalization of ESA. Distributional similarity between words assumes that words with similar meaning occur in similar contexts.
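Returning to the probability-estimation use of distributional similarity, the sketch below shows one way an unseen cooccurrence can be backed off to distributionally similar contexts. The counts, the neighbour set, and the weighting scheme are invented, and this is only loosely in the spirit of the work quoted above, not its exact model.

    # Similarity-based estimate of P(word | context): if (context, word) was never
    # seen, average the estimates from distributionally similar contexts,
    # weighted by their similarity. All numbers here are hypothetical.
    cooccurrence = {
        ("drink", "wine"): 8, ("drink", "water"): 12,
        ("sip", "wine"): 3,   ("sip", "tea"): 2,
    }
    similar_contexts = {"quaff": {"drink": 0.7, "sip": 0.3}}   # hypothetical neighbours

    def mle(context, word):
        total = sum(c for (ctx, _), c in cooccurrence.items() if ctx == context)
        return cooccurrence.get((context, word), 0) / total if total else 0.0

    def similarity_smoothed(context, word):
        direct = mle(context, word)
        if direct > 0:
            return direct
        neighbours = similar_contexts.get(context, {})
        weight = sum(neighbours.values())
        if weight == 0:
            return 0.0
        return sum(sim * mle(ctx, word) for ctx, sim in neighbours.items()) / weight

    # ("quaff", "wine") was never observed, but its neighbours make it plausible.
    print(similarity_smoothed("quaff", "wine"))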

Large text collections are statistically analyzed. Characterising Measures of Lexical Distributional Similarity: this work investigates the variation in a word's distributionally nearest neighbours with respect to the similarity measure used. We identify one type of variation as being the relative frequency of the neighbour words with respect to the frequency of the target word.

Finally, we consider the impact that this has on one. In relation to distributional similarity, we thoroughly investigated the semantic properties of grammatical relationships in regulating word meanings, whereby over 80% precision can be reached in extracting synonyms or near-synonyms.

This book provides systematic guidance on computing taxonomic similarity and distributional similarity. This work investigates the variation in a word's distributionally nearest neighbours with respect to the similarity measure used.

We identify one type of variation as being the relative frequency of the neighbour words with respect to the frequency of the target word.

We then demonstrate a three-way connection between relative frequency of similar words, a concept of distributional generality, and the semantic relation of hyponymy. Uses of lexical chains: • discourse segmentation (Morris and Hirst); • measuring text similarity; constructing hypertext and document indexes (Green; Al-Halimi and Kazman; Ellman; Cramer, Finthammer, and Storrer).

• The start and end of a chain tend to correspond to a change of topic. • Words in a chain collectively indicate a topic. The quantification of lexical semantic relatedness has many applications in NLP, and many different measures have been proposed.

We evaluate five of these measures, all of which use WordNet as their central resource, by comparing their performance in detecting and correcting real-word spelling errors.

Distributional semantics is a research area that develops and studies theories and methods for quantifying and categorizing semantic similarities between linguistic items based on their distributional properties in large samples of language data. The basic idea of distributional semantics can be summed up in the so-called Distributional Hypothesis: linguistic items with similar distributions have similar meanings.

In other applications, distributional similarity is taken to be an approximation to semantic similarity. However, due to the wide range of potential applications and the lack of a strict definition of the concept of distributional similarity, many methods of calculating distributional similarity have been proposed.

An information-content-based measure proposed by Jiang and Conrath is found superior to those proposed by Hirst and St-Onge, Leacock and Chodorow, Lin, and Resnik. In addition, we explain why distributional similarity is not an adequate proxy for lexical semantic relatedness.
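For orientation, several of these WordNet-based measures are available in off-the-shelf toolkits. The sketch below uses NLTK and assumes the wordnet and wordnet_ic corpora have been downloaded; the dog/cat synsets are arbitrary examples, not those used in the evaluation described above.

    # Comparing a few WordNet-based relatedness measures with NLTK.
    # Assumes: nltk.download("wordnet"); nltk.download("wordnet_ic")
    from nltk.corpus import wordnet as wn, wordnet_ic

    ic = wordnet_ic.ic("ic-brown.dat")          # information content from the Brown corpus
    dog, cat = wn.synset("dog.n.01"), wn.synset("cat.n.01")

    print("Leacock-Chodorow:", dog.lch_similarity(cat))
    print("Resnik:          ", dog.res_similarity(cat, ic))
    print("Lin:             ", dog.lin_similarity(cat, ic))
    print("Jiang-Conrath:   ", dog.jcn_similarity(cat, ic))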

Abstract. In this study, measures of distributional similarity such as KL-divergence are applied to cluster documents instead of the traditional cosine measure, which is the most prevalent vector similarity measure for document clustering.
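A minimal sketch of that idea follows; the documents are toy strings, and add-one smoothing is used only to keep the KL divergence finite (one of several reasonable choices).

    # KL divergence between smoothed unigram distributions of two documents,
    # as an alternative to cosine for clustering. Toy documents, add-one smoothing.
    import math
    from collections import Counter

    def distribution(text, vocab):
        counts = Counter(text.lower().split())
        total = sum(counts[w] + 1 for w in vocab)       # add-one smoothing
        return {w: (counts[w] + 1) / total for w in vocab}

    def kl(p, q):
        return sum(p[w] * math.log(p[w] / q[w]) for w in p)

    doc_a = "stocks rose as markets rallied on earnings"
    doc_b = "shares climbed as markets rallied on strong earnings"
    doc_c = "the team won the cup after extra time"

    vocab = set((doc_a + " " + doc_b + " " + doc_c).lower().split())
    pa, pb, pc = (distribution(d, vocab) for d in (doc_a, doc_b, doc_c))

    # Lower divergence means more similar: doc_a should be closer to doc_b than to doc_c.
    print(kl(pa, pb), kl(pa, pc))

Note that KL divergence is asymmetric, which is one reason symmetrised variants such as Jensen-Shannon divergence are also commonly used for clustering.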

A variety of applications make use of this, including Question Answering (Ravichandran and Hovy), Information Extraction (Shinyama and Sekine), and Textual Entailment systems, where it serves as a main component (Dinu and Wang; Dagan et al.).

Most approaches to the task used distributional similarity as a major component within their system. Evaluating WordNet-based Measures of Lexical Semantic Relatedness: in addition, we explain why distributional similarity is not an adequate proxy for lexical semantic relatedness.

1. Introduction. Many measures have been proposed for use in applications in natural language processing and information retrieval, for instance to collect similar words from a corpus. A typical measure of similarity between words is based on their distributional similarity [9], [3]. Similarity measures based on the distributional hypothesis compare a pair of weighted feature vectors that characterize two words.
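The sketch below illustrates such a comparison of weighted feature vectors. The co-occurrence counts are invented, and positive PMI is used as one common choice of feature weight rather than the specific weighting of the works cited above.

    # Weighted feature vectors: turn raw co-occurrence counts into positive-PMI
    # weights and compare words with cosine. All counts are invented.
    import math

    counts = {            # word -> {feature word: co-occurrence count}
        "country": {"border": 10, "government": 8, "visit": 4},
        "state":   {"border": 7,  "government": 9, "tax": 5},
        "holiday": {"visit": 6,   "beach": 9},
    }
    feature_totals = {}
    for feats in counts.values():
        for f, c in feats.items():
            feature_totals[f] = feature_totals.get(f, 0) + c
    grand_total = sum(feature_totals.values())

    def ppmi_vector(word):
        word_total = sum(counts[word].values())
        vec = {}
        for f, c in counts[word].items():
            pmi = math.log((c * grand_total) / (word_total * feature_totals[f]))
            if pmi > 0:                         # keep only positive associations
                vec[f] = pmi
        return vec

    def cosine(u, v):
        dot = sum(u[f] * v[f] for f in set(u) & set(v))
        norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    print(cosine(ppmi_vector("country"), ppmi_vector("state")))     # relatively high
    print(cosine(ppmi_vector("country"), ppmi_vector("holiday")))   # lower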

Features typically correspond to other words that co-occur with the characterized word. Applications of computational semantics include question answering. For distributional lexical and phrasal semantics, one challenge is to obtain appropriate weights for inference rules (Roller, Erk, and Boleda). Here sim is a distributional similarity measure, like cosine, and f is a function that maps the …