KSU Faculty Member websites > Specialist Safia Al-Mulla

Statistics Review

Statistics is the scientific application of mathematical principles to the collection, analysis, and presentation of numerical data.

Today, statistics has become an important tool in the work of many academic disciplines such as medicine, psychology, education, sociology, engineering and physics, just to name a few. Statistics is also important in many aspects of society such as business, industry and government.

Because of the increasing use of statistics in so many areas of our lives, it has become very desirable to understand and practise statistical thinking. This is important even if you do not use statistical methods directly.

The role of a statistical test is, basically, quite simple: It asks whether or not the result you obtained from your analysis might have occurred by chance.
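As a hypothetical illustration of "could this have occurred by chance" (not part of the original page; the coin-flip numbers are invented), the sketch below computes a two-sided binomial p-value from scratch using only the Python standard library:

```python
from math import comb

def two_sided_binomial_p(heads, flips, p=0.5):
    """Probability, under the fair-coin null hypothesis, of a result
    at least as extreme as the one observed (two-sided)."""
    observed = abs(heads - flips * p)
    total = 0.0
    for k in range(flips + 1):
        if abs(k - flips * p) >= observed:
            total += comb(flips, k) * p**k * (1 - p)**(flips - k)
    return total

# 60 heads in 100 flips: how often would a fair coin do this by chance?
p_value = two_sided_binomial_p(60, 100)
print(round(p_value, 4))
```

A small p-value means the observed result would rarely occur by chance alone under the null hypothesis, which is exactly the question a statistical test asks.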

The following is a short summary with links to related websites.

Qualitative vs. quantitative data

Qualitative research involves analysis of data such as words (e.g., from interviews), pictures (e.g., video), or objects (e.g., an artifact). Quantitative research involves analysis of numerical data.

The relative strengths and weaknesses of qualitative and quantitative research are a perennial, hotly debated topic, especially in the social sciences.

The sites below are helpful tools for learning about qualitative and quantitative data:

http://www.csse.monash.edu.au/~smarkham/resources/qual.htm

http://www.wilderdom.com/OEcourses/PROFLIT/Class6Qualitative1.htm

http://www.wilderdom.com/OEcourses/PROFLIT/Class4QuantitativeResearchDesigns.htm

Research Sampling:

Researchers should choose probability sampling methods over the convenience of non-probability sampling so that they can generalize their study results and reduce the risk of bias.

### Why Use Probability Sampling?

For example: If school administrators wished to conduct a survey assessing the popularity of pizza on the cafeteria menu, they could stop students on the way to the library and ask them the survey questions. Although this non-probability sampling type is a convenient way to conduct a survey, it’s not as accurate or rigorous as some probability sampling modalities.

In any field of research, researchers must set up a process that assures that the different members of a population have an equal chance of selection. This allows researchers to draw some general conclusions beyond those people included in the study. Another reason for probability sampling is the need to eliminate any possible researcher bias. Returning to the pizza survey example, the survey administrator might not be inclined to stop the troublemaker who threw water balloons in the cafeteria last week.
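Returning to the pizza-survey example, the sketch below (the student IDs and sample size are invented for illustration) shows how a simple random sample gives every member of the population the same chance of selection, using only the Python standard library:

```python
import random

# Hypothetical population: 500 student IDs (invented for illustration).
population = list(range(1, 501))

random.seed(42)  # fixed seed so the sketch is reproducible
sample = random.sample(population, k=50)  # each ID is equally likely

print(len(sample), len(set(sample)))  # 50 distinct students, no repeats
```

Because selection is left to the random number generator rather than to whoever is handing out surveys, the water-balloon troublemaker has the same chance of being chosen as anyone else.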

Researchers can choose from several types of probability sampling such as:

• Multistage Sampling

The site below is a helpful tool for learning about different types of sampling:

Simple Linear Regression

What it does: Simple linear regression tells you the amount of variance accounted for by one variable in predicting another.

Regression is a method by which a functional relationship in the real world may be described by a mathematical model which may then, like all models, be used to explore, describe or predict the relationship.

Regression vs Correlation:

Firstly, the difference between regression and correlation needs to be emphasised. Both methods attempt to describe the association between two (or more) variables, and are often confused by students and professional scientists alike!

Correlation makes no a priori assumption as to whether one variable is dependent on the other(s) and is not concerned with the relationship between variables; instead it gives an estimate as to the degree of association between the variables. In fact, correlation analysis tests for interdependence of the variables.

Regression, on the other hand, attempts to describe the dependence of a variable on one (or more) explanatory variables; it implicitly assumes that there is a one-way causal effect from the explanatory variable(s) to the response variable, regardless of whether the path of effect is direct or indirect. There are advanced regression methods that allow a non-dependence-based relationship to be described (e.g., Principal Components Analysis, or PCA); these will be touched on later.

The site below is a helpful tool for learning about simple linear regression:

http://www.le.ac.uk/bl/gat/virtualfc/Stats/regression/regr1.html
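The contrast between correlation and regression can be sketched with SciPy (the hours-studied and exam-score data are invented for illustration):

```python
import numpy as np
from scipy import stats

# Invented example data: hours studied (x) vs. exam score (y).
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([52, 55, 61, 64, 70, 72, 77, 80], dtype=float)

# Correlation: degree of association only; no dependent/independent roles.
r, p_corr = stats.pearsonr(x, y)

# Regression: models y as dependent on x.
result = stats.linregress(x, y)

print(f"r = {r:.3f}, slope = {result.slope:.2f}, "
      f"intercept = {result.intercept:.2f}")
# r**2 is the share of variance in y accounted for by x.
```

Note that the regression output includes the same correlation coefficient (`result.rvalue`), but adds a slope and intercept, i.e., a model of how y depends on x.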

# Analysis Of Variance (ANOVA)

Analysis of variance, which is usually shortened to ANOVA, is the most commonly used statistical method for testing hypotheses about 3 or more means. The ANOVA statistic is called the F-test, after its developer, Fisher.

The reason for doing an ANOVA is to see if there is any difference between groups on some variable. We use ANOVA when we want to test the null hypothesis (H0) that 3 or more means are drawn from the same population. If we have 2 means, we use the t-test, which turns out to be just a special case of ANOVA.

Like the t, F depends on degrees of freedom to determine probabilities and critical values. But there is a difference between t and F in terms of the degrees of freedom concept. F has two different degrees of freedom to calculate. In contrast, t has only one formula for calculating degrees of freedom.

One-Way ANOVA

One-way ANOVA compares the averages among several groups. It's called "one-way" because there is only one grouping of the observations into categories: we consider the effect of a single factor on the values taken by a variable. Two-way ANOVA deals with the case where there are two factors.

The sites below are helpful tools for learning about one-way ANOVA:
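A one-way ANOVA can be run with SciPy's `f_oneway` (the scores for three hypothetical teaching methods are invented for illustration):

```python
from scipy import stats

# Invented scores for three teaching methods (illustration only).
group_a = [84, 90, 88, 79, 92]
group_b = [75, 70, 78, 74, 72]
group_c = [88, 93, 91, 89, 94]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

# F has two degrees of freedom: between groups = k - 1, within = N - k.
k, N = 3, 15
print(f"F({k - 1}, {N - k}) = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value leads us to reject the null hypothesis that the three group means are drawn from the same population.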

Two-Way ANOVA

Two-way analysis of variance experiments have two independent treatment factors each of which has two or more levels. Two-way ANOVA tests for significant differences between the factor level means within a factor and for interactions between the factors.

In addition, you can calculate actual power for a specified alpha level and hypothetical power for varying sample sizes.
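To make the sums-of-squares bookkeeping concrete, here is a from-scratch sketch of a balanced two-way ANOVA in NumPy (the design and data are invented; in practice you would normally use a statistics package rather than hand-rolling this):

```python
import numpy as np

# Invented balanced design: factor A (2 levels) x factor B (3 levels),
# n = 4 replicates per cell; data array has shape (a, b, n).
rng = np.random.default_rng(0)
a, b, n = 2, 3, 4
y = rng.normal(loc=10, scale=2, size=(a, b, n))
y[1] += 3  # build in an effect of factor A

grand = y.mean()
mean_a = y.mean(axis=(1, 2))   # level means of factor A
mean_b = y.mean(axis=(0, 2))   # level means of factor B
cell = y.mean(axis=2)          # cell means

# Sums of squares for each factor, the interaction, and error.
ss_a = b * n * ((mean_a - grand) ** 2).sum()
ss_b = a * n * ((mean_b - grand) ** 2).sum()
ss_ab = n * ((cell - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
ss_err = ((y - cell[:, :, None]) ** 2).sum()
ss_tot = ((y - grand) ** 2).sum()

# F statistics from mean squares (SS divided by degrees of freedom).
ms_err = ss_err / (a * b * (n - 1))
f_a = (ss_a / (a - 1)) / ms_err
f_b = (ss_b / (b - 1)) / ms_err
f_ab = (ss_ab / ((a - 1) * (b - 1))) / ms_err
print(f"F_A = {f_a:.2f}, F_B = {f_b:.2f}, F_AB = {f_ab:.2f}")
```

The key identity the sketch relies on is that, in a balanced design, the total sum of squares decomposes exactly into factor A, factor B, interaction, and error components.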

The t-test

What does it mean to say that the averages for two groups are statistically different?

The t-test answers this question: are two sets of data really different? The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups. For example, compare whether systolic blood pressure differs between a control and a treated group, between men and women, or any other two groups.

Don't confuse t-tests with correlation and regression. The t-test compares one variable (perhaps blood pressure) between two groups. Use correlation and regression to see how two variables (perhaps blood pressure and heart rate) vary together. Also don't confuse t-tests with ANOVA. The t-test (and related nonparametric tests) compares exactly two groups, while ANOVA (and related nonparametric tests) compares three or more groups.

Finally, don't confuse a t-test with analyses of a contingency table (Fisher's or chi-square test). Use a t-test to compare a continuous variable (e.g., blood pressure, weight or enzyme activity); use a contingency-table analysis to compare a categorical variable.
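Picking up the blood-pressure example above (the readings are invented for illustration), the sketch below runs an independent-samples t-test in SciPy and confirms that, with exactly two groups, ANOVA gives the same answer:

```python
from scipy import stats

# Invented systolic blood pressure readings (mmHg) for two groups.
control = [128, 131, 125, 136, 130, 127, 133]
treated = [118, 122, 120, 115, 121, 119, 124]

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# With exactly two groups, the one-way ANOVA F equals t squared,
# showing that the t-test is a special case of ANOVA.
f_stat, p_anova = stats.f_oneway(control, treated)
print(f"F = {f_stat:.2f}")
```

The two p-values agree exactly, which is the "t-test is a special case of ANOVA" point made in the ANOVA section above.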

Using SPSS software for your statistics:

Press the blue X mark to choose what you want to do:

http://www.wellesley.edu/Psychology/Psych205/tree.html

Dr. Hisham S. Abou-Auda's SPSS video tutorials: