Classification as a Tool for Research

Author :
Publisher : Springer Science & Business Media
Total Pages : 825
Release :
ISBN-10 : 3642107451
ISBN-13 : 9783642107450
Rating : 4/5 (50 Downloads)

Synopsis Classification as a Tool for Research by : Hermann Locarek-Junge

Clustering and Classification, Data Analysis, Data Handling and Business Intelligence are research areas at the intersection of statistics, mathematics, computer science, and artificial intelligence. They cover general methods and techniques that can be applied in a vast range of fields, including business and economics, marketing and finance, engineering, linguistics, archaeology, musicology, biology, and medical science. This volume contains the revised versions of selected papers presented during the 11th Biennial IFCS Conference and the 33rd Annual Conference of the German Classification Society (Gesellschaft für Klassifikation, GfKl). The conference was organized in cooperation with the International Federation of Classification Societies (IFCS) and was hosted by Dresden University of Technology, Germany, in March 2009.

Classification as a Tool of Research

Author :
Publisher : North Holland
Total Pages : 524
Release :
ISBN-10 : WISC:89013058078
ISBN-13 :
Rating : 4/5 (78 Downloads)

Synopsis Classification as a Tool of Research by : Classification Society. Meeting

This work contains a selection of papers presented at the meeting. The subjects covered include:
- Data analysis: methods of scaling, loglinear models, correspondence analysis, pattern recognition and discrimination, analysis and aggregation of discrete structures, measures of similarity and association.
- Numerical classification: clustering methods, robustness of methods, fuzzy clustering, statistical models.
- Concept analysis: construction and reconstruction of concepts, theories of characteristics and of definitions, impact on artificial intelligence.
- Indexing languages and terminologies as information resources: classification systems, thesauri, conceptual structure utilization, identification of analogies.
- Software tools (especially on microcomputers): availability of programs, interfaces to database systems, information retrieval systems, method base systems, graphical representation, comparisons of algorithms.
Applications of classification examined here include economics, business administration, natural sciences, social science and humanities, chemistry research, library science, and linguistics. Contributors: P. Arabie, I. Balderjahn, P.M. Bentler, H.-H. Bock, I.

The Classification of Research

Author :
Publisher :
Total Pages : 200
Release :
ISBN-10 : UCAL:B5005095
ISBN-13 :
Rating : 4/5 (95 Downloads)

Synopsis The Classification of Research by : Oliver D. Hensley

Based on a selection of ten papers from the 1984 SRA Research Symposium in San Diego.

Classification as a Tool for Research

Author :
Publisher : Springer
Total Pages : 823
Release :
ISBN-10 : 3642107443
ISBN-13 : 9783642107443
Rating : 4/5 (43 Downloads)

Synopsis Classification as a Tool for Research by : Hermann Locarek-Junge

Clustering and Classification, Data Analysis, Data Handling and Business Intelligence are research areas at the intersection of statistics, mathematics, computer science, and artificial intelligence. They cover general methods and techniques that can be applied in a vast range of fields, including business and economics, marketing and finance, engineering, linguistics, archaeology, musicology, biology, and medical science. This volume contains the revised versions of selected papers presented during the 11th Biennial IFCS Conference and the 33rd Annual Conference of the German Classification Society (Gesellschaft für Klassifikation, GfKl). The conference was organized in cooperation with the International Federation of Classification Societies (IFCS) and was hosted by Dresden University of Technology, Germany, in March 2009.

Using Classification and Regression Trees

Author :
Publisher : IAP
Total Pages : 166
Release :
ISBN-10 : 1641132396
ISBN-13 : 9781641132398
Rating : 4/5 (98 Downloads)

Synopsis Using Classification and Regression Trees by : Xin Ma

Classification and regression trees (CART) is one of several contemporary statistical techniques with good promise for research in many academic fields. There are very few books on CART, especially on applied CART. This book, a practical primer with a focus on applications, introduces the relatively new statistical technique of CART as a powerful analytical tool. Its easy-to-understand, non-technical language, illustrative graphs and tables, and use of the popular statistical software program SPSS appeal to readers without a strong statistical background. The book helps readers understand the foundation, operation, and interpretation of CART analysis, so that they become knowledgeable consumers and skillful users of CART. A chapter on advanced CART procedures not yet well discussed in the literature allows readers to extend the analytical power of CART and thereby strengthen their research designs. This highly practical book is written for academic researchers, data analysts, and graduate students in disciplines such as economics, the social sciences, medical sciences, and sport sciences who do not have a strong statistical background but still strive to take full advantage of CART as a powerful analytical tool for research in their fields.
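The synopsis above is narrative, but the core mechanic of CART is concrete: at each node, the tree chooses the split that most reduces class impurity (commonly Gini impurity). A minimal sketch of that single step in Python follows; the book itself works in SPSS, and the data and function names here are invented for illustration only.

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Find the threshold on one numeric feature that minimizes the
    size-weighted Gini impurity of the two resulting child nodes.
    Returns (threshold, weighted_impurity)."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # skip degenerate splits that leave a child empty
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: one feature, two well-separated classes
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = ["a", "a", "a", "b", "b", "b"]
print(best_split(xs, ys))  # (3.0, 0.0): a clean split with zero impurity
```

A full CART implementation applies this search recursively over all features in each resulting partition and then prunes the tree; libraries such as scikit-learn automate the whole procedure.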

Qualitative Research: Analysis Types & Tools

Author :
Publisher : Routledge
Total Pages : 338
Release :
ISBN-10 : 1135388024
ISBN-13 : 9781135388027
Rating : 4/5 (27 Downloads)

Synopsis Qualitative Research: Analysis Types & Tools by : Renata Tesch

First published in 1990. There was a time when most researchers believed that the only phenomena that counted in the social sciences were those that could be measured. To make that perfectly clear, they called any phenomenon they intended to study a 'variable', indicating that the phenomenon could vary in size, length, amount, or any other quantity. Unfortunately, not many phenomena in the human world come naturally in quantities. If we cannot even give a useful answer to what qualitative analysis is and how it works, then it seems rather incongruous to try to involve a computer, the very essence of precision and orderliness. Isn't qualitative analysis much too individualistic and flexible an activity to be supported by a computer? Won't a computer do exactly what qualitative researchers want to avoid, namely standardize the process? Won't it mechanize and rigidify qualitative analysis? The answer to these questions is no, and this book explains why.

Developing and Testing a Tool for the Classification of Study Designs in Systematic Reviews of Interventions and Exposures

Author :
Publisher :
Total Pages :
Release :
ISBN-10 : OCLC:723033355
ISBN-13 :
Rating : 4/5 (55 Downloads)

Synopsis Developing and Testing a Tool for the Classification of Study Designs in Systematic Reviews of Interventions and Exposures by :

BACKGROUND: Classification of study design can help provide a common language for researchers. Within a systematic review, definition of specific study designs can help guide inclusion, assess the risk of bias, pool studies, interpret results, and grade the body of evidence. However, recent research demonstrated poor reliability for an existing classification scheme. OBJECTIVES: To review tools used to classify study designs; to select a tool for evaluation; to develop instructions for application of the tool to intervention/exposure studies; and to test the tool for accuracy and inter-rater reliability. METHODS: We contacted representatives from all AHRQ Evidence-based Practice Centers (EPCs), other relevant organizations, and experts in the field to identify tools used to classify study designs. Twenty-three tools were identified; 10 were relevant to our objectives. The Steering Committee ranked the 10 tools using predefined criteria. The highest-ranked tool was a design algorithm for studies of health care interventions developed, but no longer advocated, by the Cochrane Non-Randomised Studies Methods Group. This tool was used as the basis for our classification tool and was revised to encompass more study designs and to incorporate elements of other tools. A sample of 30 studies was used to test the tool. Three members of the Steering Committee developed a reference standard (i.e., the "true" classification for each study); 6 testers applied the revised tool to the studies. Inter-rater reliability was measured using Fleiss' kappa (κ), and the accuracy of the testers' classifications was assessed against the reference standard. Based on feedback from the testers and the reference standard committee, the tool was further revised and tested by another 6 testers using 15 studies randomly selected from the original sample. RESULTS: In the first round of testing, inter-rater reliability was fair among the testers (κ = 0.26) and the reference standard committee (κ = 0.33).
Disagreements occurred at all decision points in the algorithm; revisions were made based on the feedback. The second round of testing showed improved inter-rater reliability (κ = 0.45, moderate agreement) with improved, but still low, accuracy. The most common disagreements were whether the study was "experimental" (5/15 studies) and whether there was a comparison (4/15 studies). In both rounds of testing, the level of agreement for testers who had completed graduate-level training was higher than for testers who had not completed training. CONCLUSION: Potential reasons for the observed low reliability and accuracy include the lack of clarity and comprehensiveness of the tool, inadequate reporting of the studies, and variability in user characteristics. Application of a tool to classify study designs in the context of a systematic review should be accompanied by adequate training, pilot testing, and documented decision rules.
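The agreement statistic reported above, Fleiss' kappa, can be computed directly from a subjects-by-categories count matrix. A minimal sketch in Python; the rating matrix below is invented for illustration and does not reproduce the report's data.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for inter-rater agreement.
    counts[i][j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n."""
    N = len(counts)        # number of subjects
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories
    # Mean observed per-subject agreement P_bar
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts) / N
    # Chance agreement P_e from the marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Two raters, two categories, perfect agreement on every subject
print(fleiss_kappa([[2, 0], [0, 2]]))  # 1.0
```

Values near 0.26 and 0.33, as in the first round of testing, fall in the "fair agreement" band of the conventional interpretation scale, while 0.45 is "moderate."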

Data Classification

Author :
Publisher : CRC Press
Total Pages : 710
Release :
ISBN-10 : 1498760589
ISBN-13 : 9781498760584
Rating : 4/5 (84 Downloads)

Synopsis Data Classification by : Charu C. Aggarwal

Comprehensive coverage of the entire area of classification. Research on the problem of classification tends to be fragmented across areas such as pattern recognition, databases, data mining, and machine learning. Addressing the work of these different communities in a unified way, Data Classification: Algorithms and Applications explores the underlying ...

Validity and Inter-Rater Reliability Testing of Quality Assessment Instruments

Author :
Publisher : CreateSpace
Total Pages : 108
Release :
ISBN-10 : 1484077148
ISBN-13 : 9781484077146
Rating : 4/5 (48 Downloads)

Synopsis Validity and Inter-Rater Reliability Testing of Quality Assessment Instruments by : U. S. Department of Health and Human Services

The internal validity of a study reflects the extent to which the design and conduct of the study have prevented bias. Assessing a study's internal validity, or potential for bias, is one of the key steps in a systematic review. This assessment serves to: (1) identify the strengths and limitations of the included studies; (2) investigate, and potentially explain, heterogeneity in findings across the different studies included in a systematic review; and (3) grade the strength of evidence for a given question. The risk of bias assessment directly informs one of four key domains considered when assessing the strength of evidence. With the increase in the number of published systematic reviews and the development of systematic review methodology over the past 15 years, close attention has been paid to methods for assessing internal validity. Until recently this was referred to as “quality assessment” or “assessment of methodological quality,” where “quality” refers to “the confidence that the trial design, conduct, and analysis has minimized or avoided biases in its treatment comparisons.” To facilitate the assessment of methodological quality, a plethora of tools has emerged. Some of these tools were developed for specific study designs (e.g., randomized controlled trials (RCTs), cohort studies, case-control studies), while others were intended to apply to a range of designs. The tools often incorporate characteristics that may be associated with bias; however, many also contain elements related to reporting (e.g., was the study population described?) and design (e.g., was a sample size calculation performed?) that are not related to bias. The Cochrane Collaboration recently developed the Risk of Bias (ROB) tool to assess the potential risk of bias in RCTs and to address some of the shortcomings of existing quality assessment instruments, including over-reliance on reporting rather than methods.
Several systematic reviews have catalogued and critiqued the numerous tools available to assess the methodological quality, or risk of bias, of primary studies. In summary, few existing tools have undergone extensive inter-rater reliability or validity testing, and much of the tool development and testing that has been done has focused on criterion or face validity. It is therefore unknown whether, or to what extent, summary assessments based on these tools differentiate between studies with biased and unbiased results (i.e., studies that may over- or underestimate treatment effects). There is a clear need for inter-rater reliability testing of different tools in order to enhance consistency in their application and interpretation across systematic reviews. Further, validity testing is essential to ensure that the tools in use can identify studies with biased results. Finally, inter-rater reliability and validity must be established to support the uptake and use of the tools recommended by the systematic review community, and specifically the ROB tool within the Evidence-based Practice Center (EPC) Program. In this project we focused on two tools commonly used in systematic reviews: the Cochrane ROB tool, which was designed for RCTs and is the instrument recommended by The Cochrane Collaboration for systematic reviews of RCTs, and the Newcastle-Ottawa Scale, which is commonly used for nonrandomized studies, specifically cohort and case-control studies.