STAT 250 final exam: multiple-choice questions taken through Respondus LockDown Browser on Blackboard (login information will be provided). Attached: review packet and formula packet.
formula_packet.pdf
review_for_the_final.docx
STAT 250 Formulas

Descriptive Statistics

x̄ = Σx / n
s² = Σ(x − x̄)² / (n − 1)
p̂ = number of successes / number of trials
IQR = Q3 − Q1
Lower Fence = Q1 − (1.5 × IQR)
Upper Fence = Q3 + (1.5 × IQR)
z-score = (value − mean) / standard deviation = (x − x̄) / s

Probability Rules

0 ≤ P(A) ≤ 1
P(Aᶜ) = 1 − P(A)
P(A OR B) = P(A) + P(B) − P(A AND B)
P(A AND B) = P(A)P(B) when A and B are independent
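The descriptive formulas above can be checked with Python's standard library. This is a minimal sketch with a hypothetical data set; the variable names are illustrative, not part of the course materials.

```python
import statistics

# Hypothetical sample data, just for illustration.
data = [2, 4, 4, 4, 5, 5, 7, 9]

n = len(data)
xbar = statistics.mean(data)        # x̄ = Σx / n
s2 = statistics.variance(data)      # s² = Σ(x − x̄)² / (n − 1), the sample variance
s = statistics.stdev(data)

# Quartiles: statistics.quantiles with n=4 returns Q1, median, Q3.
q1, median, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

# z-score of a single observation (here the value 9)
z = (9 - xbar) / s
```

Note that `statistics.variance` and `statistics.stdev` already divide by n − 1, matching the sample formulas on the sheet.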
Normal Distribution

z = (x − µ) / σ
x = σz + µ

Binomial Distribution

µ = np
σ = √(np(1 − p))

Central Limit Theorem for p̂

Conditions:
1. Random sample and independence
2. np ≥ 10 and n(1 − p) ≥ 10
3. N ≥ 10n

Mean = p
SE = √(p(1 − p) / n)

Central Limit Theorem for x̄

Conditions:
1. Random sample and independence
2. Normal population or n ≥ 25
3. N ≥ 10n

Mean = µ
SE = σ / √n
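As a worked illustration of the binomial and CLT formulas above, here is a short Python sketch. The numbers (n = 50, p = 0.3, the cutoff 20) are hypothetical.

```python
import math
from statistics import NormalDist

# Binomial B(n, p) with hypothetical n and p
n, p = 50, 0.3
mu = n * p                           # µ = np
sigma = math.sqrt(n * p * (1 - p))   # σ = √(np(1 − p))

# Normal approximation: standardize with z = (x − µ)/σ,
# then use the standard normal CDF for the area to the left of z.
z = (20 - mu) / sigma
prob = NormalDist().cdf(z)           # approximate P(X ≤ 20)

# CLT conditions for p̂, and its standard error
ok = n * p >= 10 and n * (1 - p) >= 10
se_phat = math.sqrt(p * (1 - p) / n)
```

`statistics.NormalDist` (Python 3.8+) replaces manual table lookup for cumulative probabilities.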
Inferential Statistics

confidence interval = point estimate ± margin of error
standardized test statistic = (statistic − parameter) / standard error

parameter      confidence interval                       test statistic                                degrees of freedom
p              p̂ ± z*·√(p̂(1 − p̂)/n)                     z = (p̂ − p0) / √(p0(1 − p0)/n)               —
µ              x̄ ± t*·s/√n                               t = (x̄ − µ0) / (s/√n)                        n − 1
µ1 − µ2        (x̄1 − x̄2) ± t*·√(s1²/n1 + s2²/n2)        t = (x̄1 − x̄2 − 0) / √(s1²/n1 + s2²/n2)      min(n1 − 1, n2 − 1)
µdifference    x̄difference ± t*·sdifference/√n           t = (x̄difference − 0) / (sdifference/√n)     n − 1, where n is the number of pairs

Common z* Values

Confidence Level    z*
80%                 1.282
90%                 1.645
95%                 1.960
99%                 2.576
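The one-proportion row of the table above can be exercised directly. This sketch uses a hypothetical survey result (210 successes in 500 trials) and the 95% z* value from the table:

```python
import math

# Hypothetical data: 210 successes in 500 trials
x, n = 210, 500
phat = x / n
z_star = 1.960                       # z* for 95% confidence

# Confidence interval: p̂ ± z*·√(p̂(1 − p̂)/n)
se = math.sqrt(phat * (1 - phat) / n)
margin = z_star * se
ci = (phat - margin, phat + margin)

# Test statistic for H0: p = 0.5, using the null value in the SE
p0 = 0.5
z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
```

Note the standard error uses p̂ in the interval but p0 in the test statistic, exactly as the formulas distinguish.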
Table 2: Standard Normal Cumulative Probabilities

The cumulative probability for z is the area under the standard normal curve to the left of z. The original packet tabulates this area for z from −3.4 to 3.4 in steps of 0.01 (rows give z to one decimal place; columns .00 through .09 give the second decimal). The full grid did not survive extraction; consult any standard normal table for the complete values. Selected entries from the packet:

z        area to the left of z
−5.0     .000000287
−4.5     .00000340
−4.0     .0000317
−3.5     .000233
−3.0     .0013
−2.0     .0228
−1.0     .1587
 0.0     .5000
 1.0     .8413
 2.0     .9772
 3.0     .9987
 3.5     .999767
 4.0     .9999683
 4.5     .9999966
 5.0     .999999713
Right-Tailed Probability

The right-tailed probability for t is the area under the t-distribution curve to the right of t.

Table 4: t-Distribution Critical Values

Confidence Level:           80%      90%      95%      98%      99%      99.8%
Right-Tailed Probability:   t.100    t.050    t.025    t.010    t.005    t.001
df
1      3.078    6.314    12.706   31.821   63.656   318.289
2      1.886    2.920     4.303    6.965    9.925    22.328
3      1.638    2.353     3.182    4.541    5.841    10.214
4      1.533    2.132     2.776    3.747    4.604     7.173
5      1.476    2.015     2.571    3.365    4.032     5.894
6      1.440    1.943     2.447    3.143    3.707     5.208
7      1.415    1.895     2.365    2.998    3.499     4.785
8      1.397    1.860     2.306    2.896    3.355     4.501
9      1.383    1.833     2.262    2.821    3.250     4.297
10     1.372    1.812     2.228    2.764    3.169     4.144
11     1.363    1.796     2.201    2.718    3.106     4.025
12     1.356    1.782     2.179    2.681    3.055     3.930
13     1.350    1.771     2.160    2.650    3.012     3.852
14     1.345    1.761     2.145    2.624    2.977     3.787
15     1.341    1.753     2.131    2.602    2.947     3.733
16     1.337    1.746     2.120    2.583    2.921     3.686
17     1.333    1.740     2.110    2.567    2.898     3.646
18     1.330    1.734     2.101    2.552    2.878     3.611
19     1.328    1.729     2.093    2.539    2.861     3.579
20     1.325    1.725     2.086    2.528    2.845     3.552
21     1.323    1.721     2.080    2.518    2.831     3.527
22     1.321    1.717     2.074    2.508    2.819     3.505
23     1.319    1.714     2.069    2.500    2.807     3.485
24     1.318    1.711     2.064    2.492    2.797     3.467
25     1.316    1.708     2.060    2.485    2.787     3.450
26     1.315    1.706     2.056    2.479    2.779     3.435
27     1.314    1.703     2.052    2.473    2.771     3.421
28     1.313    1.701     2.048    2.467    2.763     3.408
29     1.311    1.699     2.045    2.462    2.756     3.396
30     1.310    1.697     2.042    2.457    2.750     3.385
40     1.303    1.684     2.021    2.423    2.704     3.307
50     1.299    1.676     2.009    2.403    2.678     3.261
60     1.296    1.671     2.000    2.390    2.660     3.232
80     1.292    1.664     1.990    2.374    2.639     3.195
100    1.290    1.660     1.984    2.364    2.626     3.174
∞      1.282    1.645     1.960    2.326    2.576     3.091
REVIEW FOR THE FINAL
CH 1: Data
Categorical vs. Quantitative
◦ Discrete vs. Continuous Quantitative Data
Populations vs. Samples
Data Analysis
◦ Identify the research objective
◦ Collect the information needed
◦ Organize and summarize the information
◦ Draw conclusions from the information
Data coding
Stacked vs. Unstacked Data.
Organizing categorical data in two-way tables
Row percentages
Column percentages
Types of studies
◦ Observational vs. Experiment (Advantages and disadvantages)
“Gold Standard” for Experiments
◦ Large sample size
◦ Random assignment to groups
◦ Placebo used
◦ Double-blind
CH 2: Visual Summaries
Categorical variables
◦ Bar Charts
◦ Pie Charts
Numerical variables
◦ Dot Plots
◦ Histograms
◦ Stem Plots
Shape (including deviations of the overall pattern)
◦ Symmetric
uniform
bell shaped
other symmetric shapes
◦ Asymmetric
right skewed
left skewed
◦ Unimodal, bimodal
Typical Value (center)
Variability (spread)
CH 3: Numerical Summaries
Symmetric Distributions
◦ Mean
◦ Variance and S.D.
Empirical Rule
◦ 68, 95, 99%
◦ Z-scores
Skewed Distributions
◦ Median
◦ IQR
Quartiles
Five-Number Summary
◦ Min, Q1, Median, Q3, Max
◦ Boxplot is visual representation
Fence Rule
CH 4: Regression Analysis: Exploring Associations Between Variables
Response (Y, predicted) variable vs. Explanatory (X, predictor) variable
Scatterplots (Graphical descriptor of associations)
◦ Shape (linear, curved, etc)
◦ Trend ( + or -)
◦ Strength (how close are points?)
Possible outliers
Correlation (Numerical descriptor of associations)
◦ −1 ≤ r ≤ 1
◦ The closer to -1 or 1 the stronger the relationship
Regression line
◦ y = a + bx
◦ Used often to predict future values
Coefficient of Determination
◦ R^2
◦ Estimates how much of the variability in Y can be explained by X
CH 5: Modeling Variation with Probability
Basic Probability Rules
◦ 0 ≤ P(E) ≤ 1
◦ P(S) = 1
◦ P(Eᶜ) = 1 − P(E)
◦ P(A or B) = P(A) + P(B) if two events are mutually exclusive
◦ P(A and B) = P(A) x P(B) if two events are independent
Law of Large Numbers
Empirical vs. Theoretical probabilities
Relationship between outcomes, events, and sample spaces
CH 6: Modeling Random Events
Discrete Random Variables
◦ Usually counts
◦ Probability distribution displayed as table
◦ We can find P(X=x)
◦ 0 ≤ P(xi) ≤ 1
◦ Sum of all probabilities =1
Continuous Random Variables
◦ Usually measurements
◦ Probability Distribution Displayed as graph or Formula
◦ Cannot use P(X = x), since it equals 0 for any single value; we are interested in intervals and their area under the curve
◦ Density Curve
The total area under the curve equals 1.
The curve must always be on or above the x-axis.
The Normal Distribution N(µ,σ)
◦ Symmetric distribution around the mean
◦ Empirical Rules Apply
◦ Convert to Standard Normal Distribution: N(0,1)
“Standardizing” process: X -> Z -> P
“Unstandardizing” process: P -> Z -> X
Binomial Distribution B(n,p)
◦ Check for Binomial Setting
There are a fixed number of trials n.
Each observation falls into one of just two categories (called success and failure).
The probability of a success is the same for each trial and is labeled, p.
The n trials are all independent
◦ Can find P(X=k) with Probability Function
◦ Find other probabilities with table or StatCrunch
P(X ≤ k)
P(X < k)
P(X ≥ k)
P(X > k)
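The binomial probability function and the four tail probabilities listed above can be computed from scratch with `math.comb`. The setting (n = 10, p = 0.5) is hypothetical:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ B(n, p): C(n, k) · p^k · (1 − p)^(n − k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """P(X ≤ k): sum the pmf from 0 through k."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# Hypothetical binomial setting: n = 10 trials, p = 0.5
n, p = 10, 0.5
p_le_3 = binom_cdf(3, n, p)        # P(X ≤ 3)
p_lt_3 = binom_cdf(2, n, p)        # P(X < 3) = P(X ≤ 2)
p_ge_3 = 1 - binom_cdf(2, n, p)    # P(X ≥ 3) = 1 − P(X ≤ 2)
p_gt_3 = 1 - binom_cdf(3, n, p)    # P(X > 3) = 1 − P(X ≤ 3)
```

The strict and non-strict inequalities differ only by whether k itself is included, which is why P(X < k) is computed as P(X ≤ k − 1).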
CH 7: Survey Sampling and Inference
Sampling
◦ Simple Random Sampling
◦ Measurement Bias
◦ Sampling Bias
Statistical Inference: We want to estimate population parameters using sample statistics with:
◦ Accuracy (correctness, center, bias)
◦ Precision (consistency, spread, standard error)
Sample Statistics have their own distributions called sampling distributions.
Central Limit theorem lets us make assumptions about sampling distributions and holds if:
◦ We have a simple random Sample
◦ Large sample: np ≥ 10 and n(1 − p) ≥ 10 (at least 10 expected successes and 10 expected failures)
◦ Big population: ( N > 10*n)
Sampling distribution of the sample proportion:
◦ If the central limit theorem checks out, we can assume the sampling distribution of the sample proportion is Normal with mean p and standard error √(p(1 − p)/n)
Confidence intervals for a single proportion
◦ First need to make sure CLT checks out
◦ Interval calculation (LCL , UCL)
◦ Interpretation: We are CL% confident that our constructed interval captures the true
population parameter.
Sampling distribution of the difference between two proportions:
◦ If the central limit theorem checks out for both samples, we can assume the sampling distribution of the difference in sample proportions is Normal with mean p1 − p2 and standard error √(p1(1 − p1)/n1 + p2(1 − p2)/n2)
Confidence intervals for the difference in two proportions
◦ Need to Check CLT for both samples as well as independence of the two samples
◦ We are most interested in whether our constructed interval captures 0.
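A short sketch of the two-proportion interval described above, with hypothetical counts; the check at the end is the "does the interval capture 0?" question the outline emphasizes:

```python
import math

# Hypothetical data: 120/300 successes in sample 1, 90/300 in sample 2
x1, n1 = 120, 300
x2, n2 = 90, 300
p1, p2 = x1 / n1, x2 / n2

# Standard error of p̂1 − p̂2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

z_star = 1.960                        # 95% confidence
margin = z_star * se
ci = (p1 - p2 - margin, p1 - p2 + margin)

# If the interval misses 0, a real difference between the proportions is plausible
captures_zero = ci[0] <= 0 <= ci[1]
```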
CH 8: Hypothesis Testing for Population Proportions
Hypothesis testing process:
◦ State the hypotheses
• Null and Alternative
◦ Determine the significance level
• α usually given
◦ Calculate the test statistic
◦ Make a decision to reject or fail to reject the null hypothesis
• Find P-value and compare to α
If P-val < α REJECT
If P-val > α FTR
◦ Draw conclusions and interpret results in the context of your question.
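The steps above can be sketched end to end for a single proportion. The data (275 successes in 500 trials, H0: p = 0.6) are hypothetical:

```python
import math
from statistics import NormalDist

# Step 1-2: hypotheses and significance level (hypothetical values)
# H0: p = 0.6  vs.  Ha: p ≠ 0.6
x, n, p0, alpha = 275, 500, 0.6, 0.05
phat = x / n

# Step 3: test statistic, using the null value p0 in the standard error
z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Step 4: two-tailed P-value from the standard normal, compare to α
p_value = 2 * NormalDist().cdf(-abs(z))
decision = "reject H0" if p_value < alpha else "fail to reject H0"
```

Step 5 is the interpretation in context, which stays in words: here a small P-value would indicate the sample proportion is inconsistent with p = 0.6.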
Types of Errors
◦ Type I error – (α) occurs if one rejects a true H0
◦ Type II error (β) occurs if one does not reject H0 when it is false.
Hypothesis Tests in Detail
◦ Relationship between hypothesis tests and CIs
• Only works for two-tailed test
◦ Reducing probability of errors
• Increasing Sample size helps accuracy and precision
◦ Statistical Significance vs. Practical Significance
Full Hypothesis Tests for a population proportion
Full Hypothesis Tests for the difference in two proportions
CH 9: Inferring Population Means
σ known - the CLT for means allows us to use Z procedures in the following situations…
◦ 1.) If our population ~ N(µ, σ), then x̄ ~ N(µ, σ/√n) for any n.
◦ 2.) If our population is non-normal with mean µ and standard deviation σ, then x̄ ~ N(µ, σ/√n) if n > 25.
σ unknown – We will most likely use T procedures with n-1 D.o.F. iff…
◦ Our sample appears to come from a normal Population
We see no skewness (Check Histogram)
We see no outliers (Check Boxplot)
◦ For n > 25 we can be a bit more flexible with assumptions.
Hypothesis Tests and CIs for means using Z procedures
Hypothesis Tests and CIs for means using T procedures
2 Sample situations:
◦ Independent Samples - no relationship between the two samples; each drawn at random
We will use 2 Sample T procedures with D.o.F. = min{n1 − 1, n2 − 1} iff…
We see no skewness in either sample (Check 2 Histograms)
We see no outliers in either sample (Check 2 Boxplots)
◦ Matched Pairs (dependent Samples) -An apparent, intentional relationship between the
two samples
We perform a one-sample T-test on the differences with D.o.F. nd-1 iff…
We see no skewness in the differences (Check Histogram)
We see no outliers in the differences (Check Boxplot)
◦ For n1 > 25 and n2 > 25 we can be a bit more flexible with assumptions.
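The independent-samples case can be sketched with the unpooled t statistic and the conservative min(n1 − 1, n2 − 1) degrees of freedom described above. The samples are hypothetical, and the critical value 2.571 is the df = 5, 95%-confidence entry from the t-table in the formula packet:

```python
import math
import statistics

# Hypothetical independent samples
sample1 = [12.1, 11.4, 13.0, 12.8, 11.9, 12.5]
sample2 = [10.9, 11.2, 11.8, 10.5, 11.6, 11.0]

x1, x2 = statistics.mean(sample1), statistics.mean(sample2)
s1, s2 = statistics.stdev(sample1), statistics.stdev(sample2)
n1, n2 = len(sample1), len(sample2)

# Unpooled two-sample t statistic: (x̄1 − x̄2 − 0) / √(s1²/n1 + s2²/n2)
t = (x1 - x2 - 0) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Conservative degrees of freedom
df = min(n1 - 1, n2 - 1)

# Compare |t| to the critical value t* = 2.571 (df = 5, 95% confidence)
significant = abs(t) > 2.571
```

A matched-pairs analysis would instead subtract the paired values and run a one-sample t on the differences with df = (number of pairs) − 1.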
…