Python Assignment: Time Series Classification with Machine Learning and Logistic Regression
where, for example, 1st quart_6 means the first quartile of the sixth time series in each of the 88 instances.

iii. Estimate the standard deviation of each of the time-domain features you extracted from the data. Then, use Python's bootstrapped package or any other method to build a 90% bootstrap confidence interval for the standard deviation of each feature. (A minimal bootstrap sketch follows after item iv.)

iv. Use your judgement to select the three most important time-domain features (one option may be min, mean, and max). You are welcome to experiment to see if they make a difference.
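For 1(c)iii, here is a minimal percentile-bootstrap sketch. It assumes the extracted features for the 88 instances are already stored in a NumPy array; the array name feature_matrix and the helper bootstrap_ci_std are illustrative only, not part of the assignment. The bootstrapped package or scipy.stats.bootstrap could be used instead.

import numpy as np

def bootstrap_ci_std(x, n_boot=1000, alpha=0.10, seed=0):
    """Percentile-bootstrap confidence interval for the std. dev. of a 1-D sample."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # resample with replacement and recompute the statistic on each resample
    stats = np.array([np.std(rng.choice(x, size=n, replace=True), ddof=1)
                      for _ in range(n_boot)])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# example (hypothetical names): 90% CI for the std of the first extracted feature,
# where feature_matrix is an (88, number_of_features) array
# low, high = bootstrap_ci_std(feature_matrix[:, 0])

The percentile method is the simplest choice here; a bias-corrected (BCa) interval, e.g. via scipy.stats.bootstrap, is a common alternative.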
(d) Binary Classification Using Logistic Regression

(Note: some logistic regression packages have a built-in L2 regularization. To remove the effect of L2 regularization, set λ = 0 or set the budget C → ∞, i.e. a very large value.)

i. Assume that you want to use the training set to classify bending from other activities, i.e. you have a binary classification problem. Depict scatter plots of the features you specified in 1(c)iv extracted from time series 1, 2, and 6 of each instance, and use color to distinguish bending vs. other activities. (See page 129 of the textbook. You are welcome to repeat this experiment with other features as well as with time series 3, 4, and 5 in each instance.)

ii. Break each time series in your training set into two (approximately) equal-length time series. Now, instead of 6 time series for each of the training instances, you have 12 time series for each training instance. Repeat the experiment in 1(d)i, i.e. depict scatter plots of the features extracted from both parts of time series 1, 2, and 12. Do you see any considerable difference in the results compared with those of 1(d)i?

iii. Break each time series in your training set into l ∈ {1, 2, . . . , 20} time series of approximately equal length and use logistic regression to solve the binary classification problem, using time-domain features. (If you encounter instability of the logistic regression problem because of linearly separable classes, modify the max_iter parameter in logistic regression to stop the algorithm prematurely and prevent its instability.) Remember that breaking each of the time series does not change the number of instances; it only changes the number of features for each instance. Calculate the p-values for your logistic regression parameters in each model corresponding to each value of l and refit a logistic regression model using your pruned set of features. (R calculates the p-values for logistic regression automatically. One way of calculating them in Python is to call R within Python; there are other ways to obtain the p-values as well.) Alternatively, you can use backward selection using sklearn.feature_selection or glm in R. Use 5-fold cross-validation to determine the best value of the pair (l, p), where p is the number of features used in recursive feature elimination. Explain what the right way and the wrong way are to perform cross-validation in this problem; obviously, use the right way! (This is an interesting problem in which the number of features changes depending on the value of the parameter l that is selected via cross-validation. Another example of such a problem is Principal Component Regression, where the number of principal components is selected via cross-validation.) Also, you may encounter the problem of class imbalance, which may make some of your folds not have any instances of the rare class. In such a case, you can use stratified cross-validation; research what it means and use it if needed. (A minimal stratified cross-validation sketch appears after item vii below.) In the following, you can see an example of applying Python's Recursive Feature Elimination, which is a backward selection algorithm, to logistic regression.

# Recursive Feature Elimination
from sklearn import datasets
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# load the iris dataset
dataset = datasets.load_iris()

# create a base classifier used to evaluate a subset of attributes
model = LogisticRegression()

# create the RFE model and select 3 attributes
rfe = RFE(model, n_features_to_select=3)
rfe = rfe.fit(dataset.data, dataset.target)

# summarize the selection of the attributes
print(rfe.support_)
print(rfe.ranking_)

iv. Report the confusion matrix and show the ROC and AUC for your classifier on the train data. Report the parameters of your logistic regression (the βi's) as well as the p-values associated with them.

v. Test the classifier on the test set. Remember to break the time series in your test set into the same number of time series into which you broke your training set. Remember that the classifier has to be tested using the features extracted from the test set. Compare the accuracy on the test set with the cross-validation accuracy you obtained previously.

vi. Do your classes seem to be well-separated enough to cause instability in calculating logistic regression parameters?

vii. From the confusion matrices you obtained, do you see imbalanced classes? If yes, build a logistic regression model based on case-control sampling and adjust its parameters. Report the confusion matrix, ROC, and AUC of the model.
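As noted in 1(d)iii, a minimal sketch of stratified 5-fold cross-validation over the pair (l, p) is shown below. The essential point about the "right way" is that recursive feature elimination sits inside a Pipeline, so it is refit on each training fold only; selecting features on the full training set before cross-validation would leak information into the validation folds (the "wrong way"). The helper build_features(l) is hypothetical: it stands for whatever code you write to break each series into l segments and extract the time-domain features.

import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

best_l, best_p, best_score = None, None, -np.inf
for l in range(1, 21):
    # build_features(l) is a hypothetical helper returning the (instances x features)
    # matrix X and the binary labels y for this choice of l
    X, y = build_features(l)
    for p in range(1, X.shape[1] + 1):
        pipe = Pipeline([
            ("rfe", RFE(LogisticRegression(max_iter=1000), n_features_to_select=p)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        # stratified folds keep the rare (bending) class present in every fold
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        score = cross_val_score(pipe, X, y, cv=cv).mean()
        if score > best_score:
            best_l, best_p, best_score = l, p, score

print("best (l, p):", (best_l, best_p), "cross-validation accuracy:", best_score)

Searching every p up to the full number of features is expensive for large l; in practice a coarser grid over p is a reasonable compromise.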
(e) Binary Classification Using L1-penalized Logistic Regression

i. Repeat 1(d)iii using L1-penalized logistic regression, i.e. instead of using p-values for variable selection, use L1 regularization. (For L1-penalized logistic regression, you may want to use normalized/standardized features.) Note that in this problem, you have to cross-validate for both l, the number of time series into which you break each of your instances, and λ, the weight of the L1 penalty in your logistic regression objective function (or C, the budget). Packages usually perform cross-validation for λ automatically. (Using the package LIBLINEAR is strongly recommended.)

ii. Compare the L1-penalized model with variable selection using p-values. Which one performs better? Which one is easier to implement?

(f) Multi-class Classification (The Realistic Case)

i. Find the best l in the same way as you found it in 1(e)i to build an L1-penalized multinomial regression model to classify all activities in your training set. (New versions of scikit-learn allow using an L1 penalty for multinomial regression.) Report your test error. Research how confusion matrices and ROC curves are defined for multiclass classification and show them for this problem if possible. (For example, the pROC package in R does the job.)

ii. Repeat 1(f)i using a Naïve Bayes classifier. Use both Gaussian and Multinomial priors and compare the results. (A minimal sketch appears after the problem list below.)

iii. Which method is better for multi-class classification in this problem?

2. ISLR 7.4
3. ISLR 4.7.3
4. ISLR 4.7.7
5. Extra Practice (you do not need to submit the answers): ISLR 3.7.3, 3.7.5, 4.7.4, 4.7.9
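Returning to 1(f)ii, here is a minimal sketch comparing Gaussian and Multinomial Naïve Bayes priors with stratified cross-validation. The names X_train and y_train are placeholders for the multiclass feature matrix and activity labels built with the l chosen in 1(f)i. Because MultinomialNB requires non-negative features, the sketch rescales features to [0, 1] first; that scaling choice is an assumption for illustration, not a requirement of the assignment.

from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = [
    ("Gaussian NB", GaussianNB()),
    # MultinomialNB needs non-negative inputs, so scale features to [0, 1] first
    ("Multinomial NB", make_pipeline(MinMaxScaler(), MultinomialNB())),
]

for name, model in models:
    # X_train, y_train are hypothetical: the multiclass features and activity labels
    acc = cross_val_score(model, X_train, y_train, cv=cv).mean()
    print(name, "cross-validation accuracy:", round(acc, 3))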
