Information Retrieval Book Review Homepage
Editors-in-Chief: William Hersh, M.D., Josiane Mothe, and Justin Zobel
Book Reviews Published
Research and Trends in Data Mining Technologies and Applications
Information Representation and Retrieval in the Digital Age
Spotting and Discovering Terms through Natural Language Processing
Finding Out About: A Cognitive Perspective on Search Engine Technology and the WWW
Information Retrieval: Algorithms and Heuristics
The Text in the Machine: Electronic Texts in the Humanities
Research and Trends in Data Mining Technologies and Applications
IGI Global, 2007. Activities in data warehousing and mining are constantly emerging. Data mining methods, algorithms, online analytical processing, data marts, and practical issues evolve continuously, providing a challenge for professionals in the field. Research and Trends in Data Mining Technologies and Applications focuses on the integration of the fields of data warehousing and data mining, with emphasis on applicability to real-world problems. This book provides an international perspective, highlighting solutions to some of researchers' toughest challenges. Developments in the knowledge discovery process, data models, structures, and design serve as answers and solutions to these emerging challenges.
Data Mining the Web: Uncovering Patterns in Web Content, Structure, and Usage
Wiley, 2007. This book introduces the reader to methods of data mining on the web, including uncovering patterns in web content (classification, clustering, language processing), structure (graphs, hubs, metrics), and usage (modeling, sequence analysis, performance).
The Geometry of Information Retrieval
Cambridge University Press, 2005. Reviewed by: Kantor, Paul; School of Communication, Information and Library Studies, Rutgers University Information retrieval (IR), the science of extracting information from any potential source, can be viewed in a number of ways: logical, probabilistic, and vector space models are some of the most important. In this book, the author, one of the leading researchers in the area, shows how these views can be reforged in the same framework used to formulate the general principles of quantum mechanics. All the usual quantum-mechanical notions have their IR-theoretic analogues, and the standard results can be applied to address problems in IR, such as pseudo-relevance feedback, relevance feedback, and ostensive retrieval. The relation with quantum computing is also examined. To keep the book self-contained, appendices with background material on physics and mathematics are included. Each chapter ends with bibliographic remarks that point to further reading. This is an important, ground-breaking book, with much new material, for all those working in IR, AI, and natural language processing.
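The vector space view mentioned in this blurb can be illustrated with a minimal cosine-similarity sketch. This is a generic textbook illustration, not the quantum-mechanical framework the book develops; the whitespace tokenization and raw term-frequency weighting are simplifications chosen for brevity:

```python
import math
from collections import Counter

def cosine_similarity(doc: str, query: str) -> float:
    """Cosine of the angle between raw term-frequency vectors."""
    d, q = Counter(doc.lower().split()), Counter(query.lower().split())
    dot = sum(d[t] * q[t] for t in set(d) & set(q))
    norm = (math.sqrt(sum(v * v for v in d.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# A document sharing terms with the query scores between 0 and 1.
score = cosine_similarity("quantum mechanics and information retrieval",
                          "information retrieval")
```

Real systems replace raw counts with weighted terms (e.g. TF-IDF), but the geometric picture of documents and queries as vectors is the same.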
New Directions in Question Answering
MIT Press, 2005. Reviewed by: Azzam, Saliha; Research, Microsoft Corporation Question answering systems, which provide natural language responses to natural language queries, are the subject of rapidly advancing research encompassing both academic study and commercial applications, the most well-known of which is the search engine Ask Jeeves. Question answering draws on different fields and technologies, including natural language processing, information retrieval, explanation generation, and human-computer interaction. Question answering creates an important new method of information access and can be seen as the natural step beyond such standard Web search methods as keyword query and document retrieval. This collection charts significant new directions in the field, including temporal, spatial, definitional, biographical, multimedia, and multilingual question answering.
Information Representation and Retrieval in the Digital Age
Medford: Information Today Inc, 2003. Review published in Information Retrieval, Vol. 9, No. 1, January 2006 Reviewed by: Soergel, Dagobert; College of Information Studies, University of Maryland This is the first book to offer a clear, comprehensive view of Information Representation and Retrieval (IRR). With an emphasis on principles and fundamentals, author Heting Chu, Ph.D. (College of Information and Computer Science at Long Island University) first reviews key concepts and major developmental stages of the field, then systematically examines information representation methods, IRR languages, retrieval techniques and models, and Internet retrieval systems. Chu discusses the retrieval of multilingual, multimedia, and hyper-structured information, explores the user dimension and evaluation issues, and analyzes the role and potential of artificial intelligence (AI) in IRR. Chu's thoroughly researched monograph is an indispensable guide for the individual who needs broad and current knowledge of this rapidly growing field.
Profiling Machines: Mapping the Personal Information Economy
MIT Press, 2004. Reviewed by: Desouza, Kevin; Information School, University of Washington In this book Greg Elmer brings the perspectives of cultural and media studies to the subject of consumer profiling and feedback technology in the digital economy. He examines the multiplicity of processes that monitor consumers and automatically collect, store, and cross-reference personal information. When we buy a book at Amazon.com or a kayak from L.L. Bean, our transactions are recorded, stored, and deployed to forecast our future behavior--thus we may receive solicitations to buy another book by the same author or the latest in kayaking gear. Elmer charts this process, explaining the technologies that make it possible and examining the social and political implications. Elmer begins by establishing a theoretical framework for his discussion, proposing a "diagrammatic approach" that draws on but questions Foucault's theory of surveillance. In the second part of the book, he presents the historical background of the technology of consumer profiling, including such pre-electronic tools as the census and the warranty card, and describes the software and technology in use today for demographic mapping. In the third part, he looks at two case studies: a marketing event sponsored by Molson that was held in the Canadian Arctic (contrasting the attendees and the indigenous inhabitants) and the use of "cookies" to collect personal information on the World Wide Web, which (along with other similar technologies) automate the process of information collection and cross-referencing. Elmer concludes by considering the politics of profiling, arguing that we must begin to question our everyday electronic routines.
Mapping Scientific Frontiers: The Quest for Knowledge Visualization
Springer, 2003. Reviewed by: Ivory-Ndiaye, Melody; Information School, University of Washington "Mapping Scientific Frontiers" examines the history and the latest developments in the quest for knowledge visualization from an interdisciplinary perspective, ranging from theories of invisible colleges and competing paradigms to practical applications of visualization techniques for capturing intellectual structures and the rise and fall of scientific paradigms. It presents 163 illustrations, 111 in color, including maps, paintings, images, computer visualizations, and animations. Topics and features:
* Simple and easy-to-follow diagrams for modeling and visualization procedures.
* Interdisciplinary perspectives, involving bibliometrics, cartography, information visualization, and philosophy of science.
* Real-world examples of co-word analysis, co-citation analysis, and patent citation analysis.
* Detailed case studies of visualizing scientific paradigms, including the mass extinction debates, the active galactic nuclei paradigm, and mad cow disease.
The book is a valuable reference source for researchers and practitioners, such as science policy analysts, funding agencies, consultancy firms, and higher education institutions, and is suitable for graduate courses on knowledge domain visualization, scientometrics, information visualization, and bibliometrics.
Knowledge Management in the SocioTechnical World - The Graffiti Continues
New York, NY: Springer-Verlag New York Inc, 2002. Reviewed by: Patricia Katopol, Information School, University of Washington This book follows on from Elayne Coakes' previous book in the CSCW series, The New SocioTech (published April 2000). Whereas that book gave a broad introduction to the re-emerging area of sociotechnical design, this one applies these principles specifically to the area of Knowledge Management (KM). KM has been a key tool in ensuring that people and technology work together to optimum effect within organisations for many years, but recent studies have called for a more systemic approach to the topic. This book examines that problem via sociotechnical principles, which have recently re-emerged as one of the most widely used approaches to information systems and organisational design. Including contributions from academics and practitioners, this book looks at key aspects of the field such as: - Knowledge management strategy formulation - Knowledge requirements - Case studies from corporate learning environments and industry It will be of interest to practitioners, researchers, and managers who are involved in any aspect of information systems/sociotechnical design or knowledge management. It will also be useful for advanced students on information systems or related courses.
Looking for Information
Lexington, KY: Academic Press, 2002. Reviewed by: Dr. Paul Solomon, School of Information and Library Science, University of North Carolina at Chapel Hill Looking for Information presents examples of information seeking and reviews studies of the information-seeking behavior of both general and specific social and occupational groups: scientists, engineers, social scientists, humanists, policy experts, the aged, the poor, and "the public" in general. It also discusses general research on information seeking, including basic research on human communication behavior as found in the literature of psychology, anthropology, sociology, and other disciplines.
Principles of Data Mining
Cambridge: The MIT Press, 2001. Reviewed by: Dr. Scott Sisson, Dept of Mathematics and Computer Science, Faculty of Natural Sciences, University of Puerto Rico, Rio Piedras Campus The growing interest in data mining is motivated by a common problem across disciplines: how does one store, access, model, and ultimately describe and understand very large data sets? Historically, different aspects of data mining have been addressed independently by different disciplines. This is the first truly interdisciplinary text on data mining, blending the contributions of information science, computer science, and statistics.
Spotting and Discovering Terms through Natural Language Processing
Cambridge: The MIT Press, 2001. Reviewed by: Dr. Nina Wacholder, School of Communication, Information and Library Studies, Rutgers University In this book Christian Jacquemin shows how the power of natural language processing (NLP) can be used to advance text indexing and information retrieval (IR). Jacquemin's novel tool is FASTR, a parser that normalizes terms and recognizes term variants. Since there are more meanings in a language than there are words, FASTR uses a metagrammar composed of shallow linguistic transformations that describe the morphological, syntactic, semantic, and pragmatic variations of words and terms. The acquired parsed terms can then be applied for precise retrieval and assembly of information. The use of a corpus-based unification grammar to define, recognize, and combine term variants from their base forms allows for intelligent information access to, or "linguistic data tuning" of, heterogeneous texts. FASTR can be used to do automatic controlled indexing, to carry out content-based Web searches through conceptually related alternative query formulations, to abstract scientific and technical extracts, and even to translate and collect terms from multilingual material. Jacquemin provides a comprehensive account of the method and implementation of this innovative retrieval technique for text processing.
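For illustration only, here is a toy conflator in the spirit of the term-variant recognition the blurb describes. This is emphatically not FASTR (which uses a unification grammar over morphological, syntactic, semantic, and pragmatic variation); the suffix list and stop-word set below are invented for this sketch:

```python
import re

# Crude suffix stripping stands in for real morphological analysis.
SUFFIXES = ["ations", "ation", "ings", "ing", "ies", "es", "s"]
STOP_WORDS = {"of", "the", "a", "an", "for"}

def stem(word: str) -> str:
    """Strip the longest matching suffix, keeping a minimal stem."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def normalize_term(term: str) -> tuple:
    """Map morphological and word-order variants of a term to one key,
    so "retrieval of information" conflates with "information retrieval"."""
    words = [w for w in re.findall(r"[a-z]+", term.lower())
             if w not in STOP_WORDS]
    return tuple(sorted(stem(w) for w in words))

assert normalize_term("retrieval of information") == normalize_term("information retrieval")
```

Indexing under such normalized keys is what lets a query phrased one way match documents that use a variant phrasing, which is the core payoff the blurb attributes to FASTR.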
Finding Out About: A Cognitive Perspective on Search Engine Technology and the WWW
New York: Cambridge University Press, 2000. Reviewed by: Mr. Paul Thompson, West Group The World Wide Web is rapidly filling with more text than anyone could have imagined a short time ago. However, the task of determining which data is relevant has become appreciably harder. In this original new work Richard Belew brings a cognitive science perspective to the study of information as a computer science discipline. He introduces the idea of Finding Out About (FOA), the process of actively seeking out information relevant to a topic of interest. Belew describes all facets of FOA, ranging from creating a good characterization of what the user seeks to evaluating the successful performance of search engines. His volume clearly shows how to build many of the tools that are useful for searching collections of text and other media. While computer scientists make up the book's primary audience, Belew skillfully presents technical details in a manner that makes important themes accessible to readers more comfortable with words than equations.
Information Retrieval: Algorithms and Heuristics
Boston: Kluwer Academic Publishers, 1998. Reviewed by: Dr. Hugo Zaragoza, The Neural Networks Group Information Retrieval: Algorithms and Heuristics is a comprehensive introduction to the study of information retrieval covering both effectiveness and run-time performance. The focus of the presentation is on algorithms and heuristics used to find documents relevant to the user request and to find them fast. Through multiple examples, the most commonly used algorithms and heuristics are presented. To facilitate understanding and applications, introductions to and discussions of computational linguistics, natural language processing, probability theory, and library and computer science are provided. While this text focuses on algorithms and not on commercial products per se, the basic strategies used by many commercial products are described. Techniques that can be used to find information on the Web, as well as in other large information collections, are included.
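The "find them fast" half of the blurb typically comes down to an inverted index, the core data structure of most retrieval systems. A minimal sketch follows; the function names and toy corpus are invented for this example, and real systems add compression, ranking, and positional information:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Conjunctive (AND) query: ids of docs containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

docs = ["information retrieval algorithms",
        "heuristics for fast retrieval",
        "web search engines"]
idx = build_index(docs)
hits = search(idx, "retrieval")  # docs 0 and 1 contain "retrieval"
```

The key speedup is that a query touches only the postings for its own terms rather than scanning every document.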
The Text in the Machine: Electronic Texts in the Humanities
New York: Haworth Press, Inc., 1999. Reviewed by: M. Zoe Holbrooks, Information School, University of Washington The first comprehensive guide to the growing field of electronic information, The Text in the Machine: Electronic Texts in the Humanities will help you create and use electronic texts. This book explains the processes involved in developing computerized books on library Web sites, CD-ROMs, or your own Web site. With the information provided by The Text in the Machine, you'll be able to successfully transfer written words to digitized form and increase access to any kind of information.