
BCH 498 

Course Title:       Research and Seminar 1

Credit Hours:                       1(0+1) 

Course Description:   A one-credit-hour course that gives the student the opportunity to develop the skills necessary for writing a theoretical review article on a selected topic. In addition, the student will give an oral presentation and take an oral examination conducted by three teaching staff members.

 

  Topics

1-     Novel Breast Cancer Biomarkers 

2-     Chromosome Mutations Related to Down Syndrome

3-     Glucose-6-Phosphate Dehydrogenase Deficiency

4-     Novel Genomics Technologies

5-     Genetic Disorders in Saudi Arabia

6-     Molecular Characterization in Childhood Leukemia

 

  Read Manuscript

        How to Critically Read a Journal Research Article

ABSTRACT The purpose of this article is to present the concepts involved in reader evaluation of research literature. The essential elements of a research article are the title, abstract, introduction, method, results, discussion and conclusion. The introduction should contain the following elements: statement of the problem, literature review, purpose and expected results (hypothesis). The method should define the subjects, instrumentation and apparatus, procedure, and data analysis. The data analysis is divided into statistical tests for continuous data and discrete data. The results section should succinctly present the results with no interpretation of their meaning. The discussion is where the knowledge and insight of the author(s) are allowed to bloom.

1) Title The title of an article is very important since initially it is the only exposure readers have to the article. As readers peruse the table of contents of a journal, they will appreciate titles that are both short and informative. A good title should give insight into what (was done), whom (it was done to) and how (it was done). Practitioners have precious little time to review the literature and cannot afford to expend time fishing through minutiae to identify important articles. If a title is too long or loaded with complex technical jargon, chances are the article will be skipped. This is unfortunate since the article may contain significant findings. However, this is a problem for the authors and the journal's editorial board, not the readers. It is not reasonable to expect readers to take the time to read an article that is improperly or inappropriately titled. If the title hooks the readers, they will be motivated to read the abstract.

2) Abstract The abstract should contain a brief statement about the study's purpose, method, results, conclusion and clinical relevance. Reading the abstract is a time-efficient way for readers to determine if the article suits their needs. If an abstract is well written, some hurried readers may choose to read only the abstract, then return to the article later as more in-depth information or convincing evidence is needed. A well-written abstract gives readers a good idea of what the study is about, how it was conducted, and the findings and recommendations of the author. Readers should remember not to accept the conclusions before critically reading the entire article. The abstract should pique readers' interest, but they should reserve judgment until they read the more substantial evidence presented in the body of the paper. Given the increasing number of journals available for review, determining which ones deserve readers' full attention requires a screening system. Using the abstract as a screening device is a recommended starting point.

 3) Introduction The introduction to a research article should contain the following major elements: statement of the problem, literature review, purpose of the study and expected results (hypothesis).

3-1) Statement of the Problem The statement of the problem should describe the questions and concerns that led the author to undertake the investigation. Readers should ask themselves, "Why did the author conduct this study? What question did the author try to answer?" Readers should get a sense of the answers to these questions early in the introduction.

3-2) Literature Review The literature review should establish a theoretical and historical basis for the research paper and should provide support for construct validity. Construct validity is the theoretical conceptualization of intervention and response (3). It is a type of measurement validity that informs readers of the degree to which a theoretical construct is measured by an instrument (3). The author should attempt to identify a "gap of knowledge" between what is known (or previously documented) and what is desired to be known. Readers perusing the introduction should try to identify the "gap" as well as find information in the literature that supports the concept and approach of the study. In this section, the author should explain how his/her work is an attempt to close the gap by explaining why the study was conducted. Another way the author can close the gap is to critically review the published work of others and point out flaws, inconsistencies or areas where no conclusions can be drawn.

A literature review should be current; i.e., cited references should not be more than five to seven years old unless they are "classics." Readers should determine whether the author has failed to cite references on any crucial points. The literature review should be sufficient to meet the objectives stated above, but the author should avoid "overkill" in reviewing the literature. Very little is to be gained from citing 10 references to make one point.

A general statement should be made identifying the type of study (e.g., experimental, correlational or descriptive). This statement alerts readers to expect certain information in a particular format. For example, if the study is experimental or correlational, the author should delineate the expected results or the null hypothesis (this subject will be given more attention later). If the study is descriptive, the author should identify the need to collect the descriptive data or report the findings. The literature review should provide readers with a clear idea of what has been done in the past and provide conceptual support to the method. Readers can easily tell the author has spent a reasonable amount of time reviewing the literature if the review is a synthesis of reports logically arranged in sequential and chronological order.

3-3) Purpose of the Study The purpose of the study should be described in a direct, clear statement. The author who cannot clearly state the purpose of his/her research will most likely produce results that are not applicable in clinical situations.

3-4) Expected Results Ideally, the author of a research article should frame the research question in the form of a hypothesis. For these purposes, a hypothesis is defined as a tentative theory or supposition provisionally adopted to explain certain facts and to guide readers into further investigation (6). A report of a study should include an explicit statement of the study's hypothesis or expected research results. A research hypothesis states the researcher's true expectation of results; it is a statement that guides the interpretation of outcomes and conclusions. However, the statistical analysis of data is based on testing a statistical or null hypothesis, which differs from the research hypothesis in that it always states that no difference or no relationship between the independent and dependent variables is expected. After the statement of the problem, literature review, purpose of the study and expected results have been examined in the introduction, the method used in the study is described.

 4) Method The method section of the research report should clearly explain how the study was conducted. Critical readers should pretend they are going to replicate the study: Is there sufficient detail in the method to conduct the study and obtain similar results? For clarity and convenience, the method can be divided into the following subsections: subjects, instrumentation and apparatus, procedure, and data analysis.

4-1) Subjects The author should summarize and describe the subjects who participated in the study in terms of age, sex, diagnosis and other pertinent demographic characteristics. If a particular diagnosis or characteristic is required for inclusion in the study, the criteria should be explained.

The extent to which readers are able to use the results of the study depends on how the sample of subjects was selected and how many subjects were included in the sample. Ideally, the subjects should be selected randomly so each individual in a larger population has the same chance of being included in the sample as anyone else.
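
As an illustration of the random selection described above, here is a minimal Python sketch of simple random sampling from a hypothetical subject pool; the pool size (500) and sample size (40) are made-up values, not drawn from the article:

```python
import random

# Hypothetical sampling frame of 500 eligible subject IDs (made-up numbers).
population = list(range(1, 501))

random.seed(42)  # fixed seed only so this illustration is reproducible
# Simple random sampling: every individual has the same chance of selection.
sample = random.sample(population, k=40)

print(sorted(sample))
```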

4-2) Instrumentation and Apparatus The instruments used to measure variables should be described in such a way that readers can replicate the study. Footnotes specifying model numbers, corporate names and addresses, and other pertinent details about the instruments should be included here. If standardized questionnaires are used, they must be referenced. Any apparatus designed and developed by the researcher should be fully described with a drawing, photograph and description. If a questionnaire is developed by the researcher, it also should be presented.

Readers should rely on their natural curiosity when evaluating the instrumentation's appropriateness for measuring the study variables. Were the instruments calibrated? How were they calibrated? Are they reliable? Are they repeatable day-to-day? Is the instrument measuring what it is purported to measure? One common measurement error occurs when the author intends to measure pressure but measures force or torque instead. These are entirely different physical entities and cannot be interchanged with impunity. Some researchers refer to reliability when describing the instrumentation or apparatus. Reliability refers to the reproducibility of results at a different time or by a different investigator. Readers should be wary of fickle instruments that only a well-trained technician familiar with all their idiosyncrasies can operate; in someone else's hands, different results may occur. Some research projects are undertaken solely for the purpose of establishing the reliability of an instrument. If this is the case, the author is obligated to reference that in his or her article.

4-3) Procedure The procedure section of the method should explain exactly how and when the steps of the study were applied and how the data were collected. Readers who have a clear idea of how the research was conducted also will have a clear idea of how to apply the results or determine if they can accept the author's conclusions.

4-4) Data Analysis The data analysis section should describe all testing applied to the data. Readers must assess if the author chose the appropriate statistical tests for the type of study and design. This part of the method should not contain any results. When analyzing data, arithmetic operations too frequently are misapplied to data based on nominal and ordinal levels of measurement (10). The most common error is analyzing ordinal data as though they were quantitative (interval or ratio).

There is nothing wrong with assigning arbitrary numbers to categorical answers for the purpose of sorting them and performing tallies. However, a problem occurs when those arbitrary numerical assignments are analyzed with conventional statistical tests as though the answers were measured with a calibrated instrument. Even experienced investigators sometimes fail to realize, or to remember, that arithmetic operations (addition, subtraction, multiplication, division, squaring) cannot legitimately be performed on numbers associated with nominal or ordinal measurements. Ordinal scores merely reflect "greater than" or "less than" values, and the differences between the scores are not equal.
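
To make the point concrete, the short sketch below uses fabricated 5-point (Likert-style) ratings to contrast legitimate ordinal summaries (tallies and the median category) with the misleading arithmetic mean of the arbitrary codes; all values are hypothetical:

```python
from collections import Counter
from statistics import median

# Hypothetical 5-point satisfaction ratings coded 1-5 (ordinal, not interval).
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

# Defensible summaries for ordinal data: category tallies and the median category.
print(Counter(responses))
print(median(responses))

# The arithmetic mean treats the codes as equally spaced interval values,
# which an ordinal scale does not guarantee.
print(sum(responses) / len(responses))
```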

Different from ordinal and nominal data are continuous data, for which mathematical manipulation is valid. There are two types of continuous data: interval and ratio. The difference between them is that the zero value for interval data is arbitrary (e.g., temperature), whereas the zero value for ratio data is absolute (e.g., height, velocity). An important and often missed step in the treatment of the data is screening (11). Readers can have more confidence in the statistical analysis when the author mentions that the data were screened for errors in data entry, outliers and distribution; in computerized data management, there are numerous opportunities to err.

It is helpful to categorize four types of analyses: descriptive, comparative, associative and predictive. Continuous data are commonly summarized with means and standard deviations, whereas ordinal or nominal data are summarized with frequencies, counts or percentages. The comparative tests are a little more complicated: for continuous data, authors should use the t-test when comparing one or two devices and ANOVA (analysis of variance) when comparing more than two devices. Associative tests are used to establish relationships between variables, and predictive tests are used to fit curves through data and extrapolate beyond the range of measured data.
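
A minimal sketch of this mapping, assuming SciPy and entirely fabricated device measurements, is given below: a t-test for two groups of continuous data, a one-way ANOVA for three, a correlation for an associative question, and a simple linear regression for a predictive one. The device names and numbers are illustrative only, not a prescription for any particular study:

```python
from scipy import stats

# Fabricated continuous measurements from three hypothetical devices.
device_a = [12.1, 11.8, 12.5, 12.0, 11.9]
device_b = [13.0, 12.7, 13.2, 12.9, 13.1]
device_c = [11.5, 11.7, 11.4, 11.8, 11.6]

# Comparative: t-test for two groups, one-way ANOVA for more than two.
t_stat, t_p = stats.ttest_ind(device_a, device_b)
f_stat, f_p = stats.f_oneway(device_a, device_b, device_c)

# Associative: correlation between two continuous variables.
r, r_p = stats.pearsonr(device_a, device_b)

# Predictive: fit a line so values can be estimated (cautiously) beyond the data.
fit = stats.linregress(device_a, device_b)

print(t_p, f_p, r, fit.slope, fit.intercept)
```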

5) Results The results section of a journal article should include the findings of the data analysis without commentary. Two groups of statistics, descriptive and inferential, may be included. Descriptive statistics, such as means and standard deviations, summarize the raw data. Inferential statistics are more complex and allow readers to infer conclusions from the data. It is not necessary to publish raw data in their entirety. Charts, graphs, tables and histograms are welcome additions when attempting to develop an overall summary of the results. Two basic inferential statistical concepts can help readers interpret these findings. The first is the level of significance (12): statistical tests are evaluated against what is known as a level of significance.
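
As a brief, hedged illustration of how a level of significance is applied (alpha = 0.05 is a conventional but arbitrary choice, and the p-value shown is hypothetical):

```python
# The level of significance (alpha) is chosen before the analysis; a result is
# called statistically significant when the test's p-value falls below alpha.
alpha = 0.05      # conventional but arbitrary threshold, set by the investigator
p_value = 0.031   # hypothetical p-value returned by an inferential test

print("significant" if p_value < alpha else "not significant")
```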

6) Discussion Readers will be able to judge the knowledge and insight of the investigator in the discussion section. Has the author tied the results to the material presented in the introduction? Is the research question answered? Has the author given meaning to the results? While reviewing this section, readers should think back to the logic of the arguments presented and consider the issues related to the original problem. Is there a succinct reference to the original hypothesis? Has the author considered broader implications of his/her findings? One common pitfall readers should watch for is a discussion of insignificant results described as though they were significant. Imputing meaning to data that may reflect chance differences is misleading because it suggests significance where none exists. Readers also should be wary of unsupported conclusions.

Drawing conclusions that depend on experiments not yet performed is fraught with bias. Most research is not of a dramatic, profound, profession-changing nature and usually creates more questions than it answers. Readers should ask themselves, has the author made suggestions for future studies to expand upon his/her lead? Finally, critical readers must judge if the researcher has conducted fair and objective research.

7) Conclusion The conclusion section of a research article contains a brief restatement of the experimental results and describes the implications of the study. Because the abstract summarizes the entire article, only key points are given in the conclusion.

8) References:

1. Fishbein M. Medical writing: the technician and the art, 4th ed. Springfield, Ill.: Charles C. Thomas, 1978.

2. Currier DP. Elements of research in physical therapy, 2nd ed. Baltimore: Williams and Wilkins, 1984:298.

3. Portney LG, Watkins MP. Foundations of clinical research: applications and practice. Norwalk, Conn.: Appleton & Lange, 1993:680.

4. Sutherland DH. An electromyographic study of the plantarflexors of the ankle in normal walking on the level. JBJS January 1966;48A:1:66-71.

5. Cappa AJ, Burke SB, Axlerod FB, Levine DB. Orthotic management of scoliosis in familial dysautonomia. JPO Summer 1994;6:3:74-8.

6. Webster's new collegiate dictionary, 2nd ed. Springfield, Mass.: G & C Merriam Co.

7. Lunsford TR, Lunsford BR. The research sample, part I: sampling. JPO Summer 1995;7:3:105-12.

8. Lunsford TR, Lunsford BR. The research sample, part II: sample size. JPO Fall 1995;7:4:137-41.

9. Lunsford TR. Types of clinical studies. JPO October 1993;5:4:105-11.

10. Lunsford BR. Methodology: variables and levels of measurement. JPO October 1993;5:4:121-4.

11. Lunsford BR. Statistics: screening and data summary. JPO October 1993;5:4:125-30.

12. Domholdt E, Malone T. Evaluating research literature: the educated clinician. Phys Ther April 1985;65:4:487-9.

 

 
