Statistics Review
(All You Need to Know about Your Research Statistics)
Statistics is the scientific application of mathematical principles to the collection, analysis, and presentation of numerical data.
Today, statistics has become an important tool in the work of many academic disciplines such as medicine, psychology, education, sociology, engineering and physics, just to name a few. Statistics is also important in many aspects of society such as business, industry and government.
Because of the increasing use of statistics in so many areas of our lives, it has become very desirable to understand and practise statistical thinking. This is important even if you do not use statistical methods directly.
The role of a statistical test is, basically, quite simple: It asks whether or not the result you obtained from your analysis might have occurred by chance.
The following is a short summary with links to related websites.
Qualitative vs. quantitative data
Qualitative research involves analysis of data such as words (e.g., from interviews), pictures (e.g., video), or objects (e.g., an artifact). Quantitative research involves analysis of numerical data.
The strengths and weaknesses of qualitative and quantitative research are a perennial, hot debate, especially in the social sciences.
The sites below are helpful tools to learn about qualitative and quantitative data:
http://www.regentsprep.org/Regents/math/ALGEBRA/AD1/qualquant.htm
http://www.csse.monash.edu.au/~smarkham/resources/qual.htm
http://www.gifted.uconn.edu/siegle/research/Qualitative/qualquan.htm
http://www.wilderdom.com/OEcourses/PROFLIT/Class6Qualitative1.htm
http://www.wilderdom.com/OEcourses/PROFLIT/Class4QuantitativeResearchDesigns.htm
http://www.regentsprep.org/Regents/math/ALGEBRA/AD1/DataPrac.htm
Research Sampling:
Researchers should choose probability sampling methods over the convenience of nonprobability sampling so they can generalize their study results and reduce the risk of bias.
Why Use Probability Sampling?
For example: If school administrators wished to conduct a survey assessing the popularity of pizza on the cafeteria menu, they could stop students on the way to the library and ask them the survey questions. Although this nonprobability sampling type is a convenient way to conduct a survey, it’s not as accurate or rigorous as some probability sampling modalities.
In any field of research, researchers must set up a process that ensures that the different members of a population have an equal chance of selection. This allows researchers to draw some general conclusions beyond those people included in the study. Another reason for probability sampling is the need to eliminate any possible researcher bias. Returning to the pizza survey example, the survey administrator might not be inclined to stop the troublemaker who threw water balloons in the cafeteria last week.
http://www.regentsprep.org/Regents/math/ALGEBRA/AD1/biased.htm
http://www.regentsprep.org/Regents/math/ALGEBRA/AD1/Tdata.htm
Researchers can choose from several types of probability sampling such as:

Stratified Random Sampling

Simple Random Sampling

Cluster Sampling

Multistage Sampling
The site below is a helpful tool to learn about different types of sampling:
http://www.coventry.ac.uk/ec/~nhunt/meths/index.html
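To make the sampling ideas above concrete, here is a minimal Python sketch contrasting simple random sampling with stratified random sampling. The student roster and the two strata are hypothetical, invented purely for illustration.

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical roster of 100 student IDs (assumed data, not from the text)
population = list(range(1, 101))

# Simple random sampling: every student has an equal chance of selection
simple_sample = random.sample(population, k=10)

# Stratified random sampling: divide the population into strata
# (say, underclassmen 1-50 and upperclassmen 51-100), then draw a
# random sample within each stratum
strata = {"underclass": population[:50], "upperclass": population[50:]}
stratified_sample = []
for name, stratum in strata.items():
    stratified_sample.extend(random.sample(stratum, k=5))

print(len(simple_sample), len(stratified_sample))
```

Stratified sampling guarantees that each subgroup is represented in the sample, which simple random sampling cannot promise for small samples.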
Simple Linear Regression
What it does: Simple linear regression tells you the amount of variance accounted for by one variable in predicting another variable.
Regression is a method by which a functional relationship in the real world may be described by a mathematical model which may then, like all models, be used to explore, describe or predict the relationship.
Regression vs Correlation:
Firstly, the difference between regression and correlation needs to be emphasised. Both methods attempt to describe the association between two (or more) variables, and are often confused by students and professional scientists alike!
Correlation makes no a priori assumption as to whether one variable is dependent on the other(s) and is not concerned with the functional form of the relationship between the variables; instead it estimates the degree of association between them. In fact, correlation analysis tests for interdependence of the variables.
Regression, by contrast, attempts to describe the dependence of a response variable on one (or more) explanatory variables; it implicitly assumes that there is a one-way causal effect from the explanatory variable(s) to the response variable, regardless of whether the path of effect is direct or indirect. There are advanced regression methods that allow a non-dependence-based relationship to be described (e.g., Principal Components Analysis, or PCA), and these will be touched on later.
The sites below are helpful tools to learn about simple linear regression:
http://www.graphpad.com/curvefit/linear_regression.htm
http://www.le.ac.uk/bl/gat/virtualfc/Stats/regression/regr1.html
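As an illustration of the idea above, the sketch below fits a simple linear regression with SciPy's `linregress`. The hours-studied and exam-score data are hypothetical values chosen for the example; R-squared here is the share of variance in the scores accounted for by hours studied.

```python
from scipy import stats

# Hypothetical data: hours studied (x) vs. exam score (y) -- assumed
# values for illustration, not taken from the text
hours = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 64, 70, 73, 77, 82]

result = stats.linregress(hours, scores)

# The fitted model is: score = intercept + slope * hours
# rvalue**2 is the proportion of variance in y accounted for by x
print(f"slope={result.slope:.2f}, intercept={result.intercept:.2f}")
print(f"R-squared={result.rvalue**2:.3f}")
```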
The Correlation
The correlation is one of the most common and most useful statistics. A correlation is a single number that describes the degree of relationship between two variables.
Pearson's Correlation
The most widely used type of correlation coefficient is Pearson r, also called the linear or product-moment correlation.
It is a measure of the degree of linear relationship between two variables, usually labeled X and Y. While in regression the emphasis is on predicting one variable from the other, in correlation the emphasis is on the degree to which a linear model may describe the relationship between two variables.
In regression the interest is directional, one variable is predicted and the other is the predictor; in correlation the interest is nondirectional, the relationship is the critical aspect.
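A minimal sketch of computing Pearson r with SciPy follows; the paired measurements for X and Y are hypothetical values invented for illustration.

```python
from scipy import stats

# Hypothetical paired measurements on the same eight subjects
# (assumed data for illustration)
x = [2.1, 3.4, 3.9, 4.7, 5.5, 6.0, 7.2, 8.1]
y = [1.8, 2.9, 4.1, 4.5, 5.2, 6.3, 6.8, 8.0]

# r ranges from -1 (perfect negative) through 0 (no linear
# relationship) to +1 (perfect positive)
r, p_value = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```

Note that `pearsonr` treats the two variables symmetrically: swapping x and y gives the same r, which is exactly the nondirectional emphasis described above.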
Analysis Of Variance (ANOVA)
Analysis of variance, which is usually shortened to ANOVA, is the most commonly used statistical method for testing hypotheses about three or more means. The ANOVA statistic is called the F-test, after its developer, Fisher.
The reason for doing an ANOVA is to see if there is any difference between groups on some variable. We use ANOVA when we want to test the null hypothesis (H0) that three or more means are drawn from the same population. If we have two means, we use the t-test, which turns out to be just a special case of ANOVA.
Like t, F depends on degrees of freedom to determine probabilities and critical values. But there is a difference between t and F in terms of the degrees-of-freedom concept: F has two different degrees of freedom to calculate (one for the numerator and one for the denominator), whereas t has only one formula for calculating degrees of freedom.
One-Way ANOVA
One-way ANOVA compares the averages among several groups. It is called "one-way" because there is only one grouping of the observations into categories; we consider the effect of a single factor on the values taken by a variable. The two-way ANOVA deals with the case where there are two factors.
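The one-way case can be sketched with SciPy's `f_oneway`; the three groups of scores below are hypothetical, and the comments spell out the two degrees-of-freedom values that F requires.

```python
from scipy import stats

# Hypothetical scores for three groups (assumed data for illustration)
group_a = [85, 86, 88, 75, 78, 94]
group_b = [91, 92, 93, 85, 87, 84]
group_c = [79, 78, 88, 94, 92, 85]

# Null hypothesis: all three group means come from the same population
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

# F has two degrees of freedom (k = 3 groups, N = 18 observations):
#   df_between (numerator)   = k - 1 = 2
#   df_within  (denominator) = N - k = 15
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```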
Two-Way ANOVA
Two-way analysis of variance experiments have two independent treatment factors, each of which has two or more levels. Two-way ANOVA tests for significant differences between the factor-level means within each factor and for interactions between the factors.
In addition, you can calculate actual power for a specified alpha level and hypothetical power for varying sample sizes.
The t-test
What does it mean to say that the averages for two groups are statistically different?
The t-test answers this question: are two sets of data really different? The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups. For example, compare whether systolic blood pressure differs between a control and a treated group, between men and women, or between any other two groups.
Don't confuse t-tests with correlation and regression. The t-test compares one variable (perhaps blood pressure) between two groups. Use correlation and regression to see how two variables (perhaps blood pressure and heart rate) vary together. Also don't confuse t-tests with ANOVA. The t-test (and related nonparametric tests) compares exactly two groups. ANOVA (and related nonparametric tests) compares three or more groups.
Finally, don't confuse a t-test with analyses of a contingency table (Fisher's exact or chi-square test). Use a t-test to compare a continuous variable (e.g., blood pressure, weight, or enzyme activity).
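The blood-pressure example above can be sketched as an independent-samples t-test with SciPy; the control and treated readings are hypothetical values for illustration.

```python
from scipy import stats

# Hypothetical systolic blood pressure readings (mmHg) for two
# independent groups -- assumed data, not from the text
control = [128, 134, 130, 141, 126, 138, 132, 129]
treated = [119, 124, 121, 130, 117, 126, 122, 120]

# Independent-samples t-test: are the two group means different?
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) indicates that a mean difference this large would rarely arise by chance if the two groups came from the same population, echoing the role of a statistical test described at the start of this review.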
Using SPSS software for your statistics:
Press the blue X mark to choose what you want to do:
http://www.wellesley.edu/Psychology/Psych205/tree.html
Dr. Hisham S. AbouAuda's SPSS Video Tutorials:
http://faculty.ksu.edu.sa/hisham/Pages/SPSS_Tut.aspx#