Test Collection Based Evaluation Of Information Retrieval Systems
Author: Mark Sanderson
Publisher: Now Publishers Inc
Total Pages: 143
Release: 2010-06-03
ISBN-10: 1601983603
ISBN-13: 9781601983602
Rating: 4/5 (02 Downloads)
Synopsis: Test Collection Based Evaluation of Information Retrieval Systems, by Mark Sanderson
The use of test collections and evaluation measures to assess the effectiveness of information retrieval systems has its origins in work dating back to the early 1950s. In the nearly 60 years since that work began, test collections have become the de facto standard of evaluation. This monograph surveys that research and explains the methods and measures devised for evaluating retrieval systems, including a detailed look at the use of statistical significance testing in retrieval experimentation. It also reviews more recent examinations of the validity of the test collection approach and of evaluation measures, and outlines trends in current research exploiting query logs and live labs. At its core, the modern-day test collection differs little from the structures that the pioneering researchers of the 1950s and 1960s conceived. This tutorial and review shows that, despite its age, this long-standing evaluation method remains a highly valued tool for retrieval research.
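To make the evaluation setting concrete, here is a minimal sketch, in Python, of the kind of comparison the monograph discusses: two retrieval systems scored by average precision (AP) on the same topics, compared with a paired t-test. The per-topic scores are invented for illustration, and SciPy is assumed to be available.

```python
# A minimal sketch: compare two hypothetical retrieval systems by their
# per-topic average precision (AP) scores using a paired t-test.
# All numbers are invented for illustration.
from scipy.stats import ttest_rel

# Hypothetical AP scores for the same 10 topics under two systems.
ap_system_a = [0.42, 0.31, 0.55, 0.12, 0.67, 0.48, 0.25, 0.39, 0.58, 0.44]
ap_system_b = [0.38, 0.35, 0.49, 0.10, 0.61, 0.45, 0.22, 0.33, 0.52, 0.40]

# Mean average precision (MAP) for each system.
map_a = sum(ap_system_a) / len(ap_system_a)
map_b = sum(ap_system_b) / len(ap_system_b)

# Paired test: the topics are the paired observations.
t_stat, p_value = ttest_rel(ap_system_a, ap_system_b)
print(f"MAP A={map_a:.3f}  MAP B={map_b:.3f}  t={t_stat:.3f}  p={p_value:.3f}")
```

A paired test is used because both systems are run on the same topics, so the per-topic score differences, rather than the raw scores, carry the evidence.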
Author: Diane Kelly
Publisher: Now Publishers Inc
Total Pages: 246
Release: 2009
ISBN-10: 1601982240
ISBN-13: 9781601982247
Rating: 4/5 (47 Downloads)
Synopsis: Methods for Evaluating Interactive Information Retrieval Systems with Users, by Diane Kelly
Provides an overview of, and instruction in, the evaluation of interactive information retrieval systems with users.
Author: Katja Hofmann
Publisher:
Total Pages: 134
Release: 2016-06-07
ISBN-10: 1680831631
ISBN-13: 9781680831634
Rating: 4/5 (31 Downloads)
Synopsis: Online Evaluation for Information Retrieval, by Katja Hofmann
Provides a comprehensive overview of online evaluation for information retrieval. It shows how online evaluation is used for controlled experiments, organizing them into experiment designs that allow absolute or relative quality assessments. It also includes an extensive discussion of recent work on data re-use and on estimating experiment outcomes from historical data.
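A prominent family of relative-assessment designs in this literature is interleaving, where results from two rankers are merged into one list and user clicks decide the winner. The sketch below is a simplified team-draft-style interleaver over invented rankings; it illustrates the idea rather than reproducing any specific algorithm from the monograph.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, seed=0):
    """Each round, a coin flip decides which ranking contributes first;
    each side then adds its top-ranked document not already shown, and
    the document is credited to the contributing side. A click on a
    document counts as a preference vote for the side credited with it."""
    rng = random.Random(seed)
    pool = set(ranking_a) | set(ranking_b)
    shown, interleaved, credit = set(), [], {}
    while len(shown) < len(pool):
        order = ("A", "B") if rng.random() < 0.5 else ("B", "A")
        for side in order:
            ranking = ranking_a if side == "A" else ranking_b
            doc = next((d for d in ranking if d not in shown), None)
            if doc is not None:
                shown.add(doc)
                interleaved.append(doc)
                credit[doc] = side
    return interleaved, credit

# Hypothetical rankings from two systems for the same query.
mixed, credit = team_draft_interleave(["d1", "d2", "d3", "d4"],
                                      ["d3", "d1", "d5", "d2"])
print(mixed)   # the merged list shown to the user
print(credit)  # which system earns the vote if each document is clicked
```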
Author: Christopher D. Manning
Publisher: Cambridge University Press
Total Pages:
Release: 2008-07-07
ISBN-10: 1139472100
ISBN-13: 9781139472104
Rating: 4/5 (04 Downloads)
Synopsis: Introduction to Information Retrieval, by Christopher D. Manning
Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures.
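As a taste of the material, here is a toy inverted index and the classic sorted-postings intersection used to answer a boolean AND query. The three "documents" are invented for illustration.

```python
from collections import defaultdict

# Toy collection: map each term to a sorted postings list of document IDs.
docs = {
    1: "new home sales top forecasts",
    2: "home sales rise in july",
    3: "increase in home sales in july",
}

index = defaultdict(list)
for doc_id in sorted(docs):
    for term in set(docs[doc_id].split()):
        index[term].append(doc_id)

def boolean_and(term1, term2):
    """Intersect two sorted postings lists with a linear merge walk."""
    p1, p2 = index[term1], index[term2]
    i = j = 0
    hits = []
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            hits.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return hits

print(boolean_and("home", "july"))  # -> [2, 3]
```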
Author: Stefan Buttcher
Publisher: MIT Press
Total Pages: 633
Release: 2016-02-12
ISBN-10: 0262528878
ISBN-13: 9780262528870
Rating: 4/5 (70 Downloads)
Synopsis: Information Retrieval, by Stefan Buttcher
An introduction to information retrieval, the foundation for modern search engines, that emphasizes implementation and experimentation. Information retrieval is the foundation for modern search engines. This textbook offers an introduction to the core topics underlying modern search technologies, including algorithms, data structures, indexing, retrieval, and evaluation. The emphasis is on implementation and experimentation; each chapter includes exercises and suggestions for student projects. Wumpus—a multiuser open-source information retrieval system developed by one of the authors and available online—provides model implementations and a basis for student work. The modular structure of the book allows instructors to use it in a variety of graduate-level courses, including courses taught from a database systems perspective, traditional information retrieval courses with a focus on IR theory, and courses covering the basics of Web retrieval. In addition to its classroom use, Information Retrieval will be a valuable reference for professionals in computer science, computer engineering, and software engineering.
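Ranking functions such as BM25 are a staple of implementation-oriented texts like this one. Below is a compact sketch of BM25 term weighting (using the common +1-smoothed idf variant); the corpus statistics are invented, and this is an illustration rather than the book's reference implementation.

```python
import math

def bm25_score(tf, df, doc_len, N, avg_doc_len, k1=1.2, b=0.75):
    """One term's BM25 contribution to one document's score.
    tf: term frequency in the document; df: document frequency of the term;
    N: collection size; doc_len/avg_doc_len drive length normalization."""
    idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)  # smoothed idf
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm

# Hypothetical statistics: the term occurs 3 times in a 120-word document
# and in 40 of 10,000 documents; the average document is 100 words long.
print(bm25_score(tf=3, df=40, doc_len=120, N=10_000, avg_doc_len=100.0))
```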
Author: Tetsuya Sakai
Publisher: Springer Nature
Total Pages: 225
Release: 2021
ISBN-10: 9811555540
ISBN-13: 9789811555541
Rating: 4/5 (41 Downloads)
Synopsis: Evaluating Information Retrieval and Access Tasks, by Tetsuya Sakai
This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew into the search engines that provide access to content on the World Wide Web, today's smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was the early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. The chapters show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.
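The observation that some documents matter more than others leads naturally to graded-relevance measures such as nDCG. A minimal sketch, with an invented ranked list of relevance grades:

```python
import math

def dcg(gains):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(gains):
    """Normalize by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# Hypothetical run: relevance grades (0 = not relevant) of the top five
# documents a system retrieved, in rank order.
print(ndcg([3, 0, 2, 1, 0]))
```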
Author: Donna Harman
Publisher: Springer Nature
Total Pages: 107
Release: 2022-05-31
ISBN-10: 3031022769
ISBN-13: 9783031022760
Rating: 4/5 (60 Downloads)
Synopsis: Information Retrieval Evaluation, by Donna Harman
Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today. The lecture starts with a discussion of the early evaluation of information retrieval systems, beginning with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on how the older evaluation methodologies evolved to handle new information access techniques, including how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies, the interactive part of information retrieval, including the search log studies done mainly by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations. Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion
Author: Karen Sparck Jones
Publisher: Morgan Kaufmann
Total Pages: 614
Release: 1997
ISBN-10: 1558604545
ISBN-13: 9781558604544
Rating: 4/5 (45 Downloads)
Synopsis: Readings in Information Retrieval, by Karen Sparck Jones
This compilation of original papers on information retrieval presents an overview, covering both general theory and specific methods, of the development and current status of information retrieval systems. Each chapter contains several papers carefully chosen to represent substantive research carried out in that area; each chapter is preceded by an introductory overview and followed by suggested references for further reading.
Author: David Hawking
Publisher: Springer Nature
Total Pages: 162
Release: 2022-06-01
ISBN-10: 3031023234
ISBN-13: 9783031023231
Rating: 4/5 (31 Downloads)
Synopsis: Simulating Information Retrieval Test Collections, by David Hawking
Simulated test collections may find application in situations where real datasets cannot easily be accessed due to confidentiality concerns or practical inconvenience. They can potentially support Information Retrieval (IR) experimentation, tuning, validation, performance prediction, and hardware sizing. Naturally, the accuracy and usefulness of results obtained from a simulation depend upon the fidelity and generality of the models which underpin it. The fidelity of emulation of a real corpus is likely to be limited by the requirement that confidential information in the real corpus should not be able to be extracted from the emulated version. We present a range of methods exploring trade-offs between emulation fidelity and degree of preservation of privacy. We present three different simple types of text generator which work at a micro level: Markov models, neural net models, and substitution ciphers. We also describe macro level methods where we can engineer macro properties of a corpus, giving a range of models for each of the salient properties: document length distribution, word frequency distribution (for independent and non-independent cases), word length and textual representation, and corpus growth. We present results of emulating existing corpora and for scaling up corpora by two orders of magnitude. We show that simulated collections generated with relatively simple methods are suitable for some purposes and can be generated very quickly. Indeed it may sometimes be feasible to embed a simple lightweight corpus generator into an indexer for the purpose of efficiency studies. Naturally, a corpus of artificial text cannot support IR experimentation in the absence of a set of compatible queries. We discuss and experiment with published methods for query generation and query log emulation. We present a proof-of-the-pudding study in which we observe the predictive accuracy of efficiency and effectiveness results obtained on emulated versions of TREC corpora. The study includes three open-source retrieval systems and several TREC datasets. There is a trade-off between confidentiality and prediction accuracy and there are interesting interactions between retrieval systems and datasets. Our tentative conclusion is that there are emulation methods which achieve useful prediction accuracy while providing a level of confidentiality adequate for many applications. Many of the methods described here have been implemented in the open source project SynthaCorpus, accessible at: https://bitbucket.org/davidhawking/synthacorpus/
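Of the micro-level generators the synopsis names, the Markov model is the simplest to sketch. The bigram generator below is trained on a toy seed string; an emulator in the spirit of SynthaCorpus would fit such a model to the confidential corpus and sample synthetic text from it. This is a hedged illustration, not code from the project itself.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """First-order Markov model: map each word to the words that follow it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=12, seed=0):
    """Random-walk the bigram model to emit synthetic text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

seed_text = "the cat sat on the mat and the dog sat on the rug"
print(generate(train_bigram_model(seed_text), start="the"))
```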
Author: Karen Sparck Jones
Publisher: Butterworth-Heinemann
Total Pages: 372
Release: 1981
ISBN-10: UOM:39015002905431
ISBN-13:
Rating: 4/5 (31 Downloads)
Synopsis: Information Retrieval Experiment, by Karen Sparck Jones