4.4 R1 In which of the following problems is Case/Control Sampling LEAST likely to make a positive impact?
A. Predicting a shopper's gender based on the products they buy
B. Finding predictors for a certain type of cancer
C. Predicting if an email is Spam or Not Spam
Correct answer: A
Explanation: Case/Control sampling is most effective when the prior probabilities of the classes are very unequal. We expect this to be the case for the cancer and spam problems, but not the gender problem.
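For concreteness, here is a minimal synthetic sketch of case/control sampling (all data and coefficients below are made up): keep every case, sample an equal number of controls, fit logistic regression, then apply the standard intercept correction \hat\beta_0 + \log\frac{\pi}{1-\pi} - \log\frac{\tilde\pi}{1-\tilde\pi}, where \pi is the true prior and \tilde\pi the prior in the case/control sample.

```python
# Synthetic rare-outcome data (e.g., a cancer screen): ~1% cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=(n, 1))
p = 1 / (1 + np.exp(-(-5.0 + 1.2 * x[:, 0])))      # true model: b0 = -5, b1 = 1.2
y = rng.binomial(1, p)
pi_true = y.mean()                                  # low prior, roughly 0.01

# Case/control sample: all cases plus an equal number of controls.
cases = np.flatnonzero(y == 1)
controls = rng.choice(np.flatnonzero(y == 0), size=len(cases), replace=False)
idx = np.concatenate([cases, controls])

fit = LogisticRegression(C=1e6).fit(x[idx], y[idx])  # large C: ~no penalty
pi_sample = y[idx].mean()                            # 0.5 by construction

# The slope is unaffected; only the intercept needs the prior correction.
b0 = (fit.intercept_[0]
      + np.log(pi_true / (1 - pi_true))
      - np.log(pi_sample / (1 - pi_sample)))
print(fit.coef_[0, 0], b0)                           # close to (1.2, -5.0)
```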
4.5 R1 Suppose that in Ad Clicks (a problem where you try to model if a user will click on a particular ad) it is well known that the majority of the time an ad is shown it will not be clicked. What is another way of saying that?
A. Ad Clicks have a low Prior Probability.
B. Ad Clicks have a high Prior Probability.
C. Ad Clicks have a low Density.
D. Ad Clicks have a high Density.
Correct answer: A
Explanation: Whether or not an ad gets clicked is a Qualitative Variable. Thus, it does not have a density. The Prior Probability of Ad Clicks is low because most ads are not clicked.
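As a tiny illustration with a made-up impression log, the prior probability of a click is simply the marginal click rate, computed before conditioning on any features of the user or ad:

```python
import numpy as np

# Hypothetical record of 10 ad impressions: 1 = clicked, 0 = not clicked.
clicked = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 1])
prior = clicked.mean()   # P(click) before looking at any predictors
print(prior)             # 0.2 here; real click-through rates are far lower
```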
4.6 R1 Which of the following is NOT a linear function in x:
A. f(x) = a + b^2x
B. The discriminant function from LDA.
C. \delta_k(x) = x\frac{\mu_k}{\sigma^2} - \frac{\mu_k^2}{2\sigma^2} +\log(\pi_k)
D. \text{logit}(P(y = 1 | x)) where P(y = 1 | x) is as in logistic regression
E. P(y = 1 | x) from logistic regression
Correct answer: E
Explanation: In logistic regression, P(y = 1 | x) = \frac{e^{\beta_0 + \beta_1 x}}{1 + e^{\beta_0 + \beta_1 x}}, a ratio of exponentials in x, so it is not linear in x; only its logit is.
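A quick numerical check of this answer, with arbitrary illustrative coefficients: equal steps in x give equal steps in \text{logit}(P(y = 1 | x)) (linear) but unequal steps in P(y = 1 | x) itself (the sigmoid flattens in its tails).

```python
import numpy as np

b0, b1 = -1.0, 2.0                     # arbitrary coefficients
x = np.linspace(-3, 3, 7)              # step size 1
p = np.exp(b0 + b1 * x) / (1 + np.exp(b0 + b1 * x))
logit = np.log(p / (1 - p))

print(np.diff(logit))                  # constant (= b1), so logit is linear in x
print(np.diff(p))                      # varies with x, so p is not
```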
5.1 R2 What are reasons why test error could be LESS than training error?
A. By chance, the test set has easier cases than the training set.
B. The model is highly complex, so training error systematically overestimates test error.
C. The model is not very complex, so training error systematically overestimates test error.
Correct answer: A
Explanation: Training error usually UNDERestimates test error when the model is very complex (compared to the training set size), and is a pretty good estimate when the model is not very complex. However, it's always possible we just get too few hard-to-predict points in the test set, or too many in the training set.
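Both points are easy to see in a small synthetic simulation (everything below is made up): a degree-10 polynomial fit to 30 training points is complex enough that training error underestimates test error on average, yet on some random splits the test half happens to contain easier points and the order flips.

```python
import numpy as np

rng = np.random.default_rng(1)
flips, reps = 0, 200
for _ in range(reps):
    x = rng.uniform(-1, 1, size=60)
    y = np.sin(3 * x) + rng.normal(scale=0.5, size=60)
    x_tr, y_tr, x_te, y_te = x[:30], y[:30], x[30:], y[30:]
    coefs = np.polyfit(x_tr, y_tr, deg=10)   # deliberately over-flexible
    train_mse = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
    flips += test_mse < train_mse

# Test error is usually the larger one, but not on every split:
print(f"test error < training error in {flips}/{reps} random splits")
```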