COMP9727: Recommender Systems

Assignment: Content-Based Music Recommendation
Due Date: Week 4, Friday, June 27, 5:00 p.m.
Value: 30%

This assignment is inspired by a typical application of recommender systems. The task is to build a content-based "music recommender" such as might be used by a streaming service (such as Spotify) to give users a personalized playlist of songs that match their interests. The main learning objective for the assignment is to give a concrete example of the issues that must be faced when building and evaluating a recommender system in a realistic context. It is not the purpose of this assignment to produce a very good music recommender system. Note that, while music recommender systems commonly make use of the ratings or listening history of users with similar tastes, our scenario is not unrealistic, as sometimes a music recommender system can make recommendations using features of the songs liked by the users.

For this assignment, you will be given a collection of 1500 songs, each labelled with one of 5 main topics: dark, emotion, lifestyle, personal and sadness. The songs are in a single .tsv file with 6 fields: artist name, track name, release date, genre, lyrics and topic.

The assignment is in three parts, corresponding to the components of a content-based recommender system. The focus throughout is on explanation of choices and evaluation of the various methods and models, which involves choosing and justifying appropriate metrics. The whole assignment will be prepared (and submitted) as a Jupyter notebook, similar to those used in tutorials, that contains a mixture of running code and tutorial-style explanation.

Part 1 of the assignment is to examine various supervised machine learning methods, using a variety of features and settings, to determine what methods work best for topic classification in this domain/dataset. For this purpose, simply concatenate all the information for one song into a single "document". You will use Bernoulli Naive Bayes from the tutorial, Multinomial Naive Bayes from the lecture, and one other machine learning method of your choice from scikit-learn or another machine learning library, with NLTK for auxiliary functions if needed.

Part 2 of the assignment is to test a potential recommender system that uses the method for topic classification chosen in Part 1, by "simulating" a recommender system with a variety of hypothetical users. This involves evaluating a number of techniques for "matching" user profiles with songs using the similarity measures mentioned in the lecture. As we do not have real users, for this part of the assignment we simply "invent" some (hopefully typical) users and evaluate how well the recommender system would work for them, using appropriate metrics. Again you will need to justify the choice of these metrics and explain how you arrived at your conclusions.

Part 3 of the assignment is to run a very small "user study", which here means finding one person, preferably not someone in the class, to try out your recommendation method and give some informal comments on the performance of your system from the user's point of view. This does not require any user interface to be built; the user can simply be shown the output of (or use) the Jupyter notebook from Parts 1 and 2. However, you will have to decide how many songs to show the user at any one time, and how to get feedback from them on which songs they would click on and which songs match their interests. A simple "talk aloud" protocol is a good idea here (this is where you ask the user to use your system and say out loud what they are thinking/doing at the same time; however, please do not record the user's voice, as that would require ethics approval).

Note that standard UNSW late penalties apply.
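Before starting Part 1, a minimal sketch of loading the collection may help; it assumes the file is named songs.tsv and has no header row (both assumptions; adjust to the file you are actually given):

```python
# Minimal loading sketch. The file name "songs.tsv" and the column names
# are assumptions based on the description above, not given in the spec.
import pandas as pd

COLUMNS = ["artist_name", "track_name", "release_date", "genre", "lyrics", "topic"]
songs = pd.read_csv("songs.tsv", sep="\t", names=COLUMNS, header=None)

# Part 1 treats each song as a single "document": concatenate all fields
# except the topic label.
songs["document"] = songs.drop(columns=["topic"]).astype(str).agg(" ".join, axis=1)

# Check whether the 5 topics are balanced (relevant to the metric choice in Part 1).
print(songs["topic"].value_counts())
```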
Assignment

Below is a series of questions to guide you through this assignment. Your answer to each question should be in a separate, clearly labelled section of the Jupyter notebook you submit. Each answer should contain a mixture of explanation and code. Use comments in the code to explain any code that you think readers will find unclear. The "readers" here are students similar to yourselves who know something about machine learning and text classification but who may not be familiar with the details of the methods.

Part 1. Topic Classification

1. (2 marks) There are a few simplifications in the Jupyter notebook in the tutorial: (i) the regex might remove too many special characters, and (ii) the evaluation is based on only one training-test split rather than using cross-validation. Explain how you are going to fix these mistakes, and then highlight any changes to the code in the answers to the next questions.

2. (2 marks) Develop a Multinomial Naive Bayes (MNB) model similar to the Bernoulli Naive Bayes (BNB) model. Now consider all the steps in text preprocessing used prior to classification with both BNB and MNB. The aim here is to find preprocessing steps that maximize overall accuracy (under the default settings of the classifiers and using CountVectorizer with the standard settings). Consider the special characters to be removed (and how and when they are removed), the definition of a "word", the stopword list (from either NLTK or scikit-learn), lowercasing, and stemming/lemmatization. Summarize the preprocessing steps that you think work "best" overall and do not change this for the rest of the assignment.

3. (2 marks) Compare the BNB and MNB models by evaluating them on the full dataset with cross-validation. Choose appropriate metrics from those in the lecture that focus on the overall accuracy of classification (i.e. not top-N metrics). Briefly discuss the tradeoffs between the various metrics and then justify your choice of the main metrics for evaluation, taking into account whether this dataset is balanced or imbalanced. On this basis, conclude whether either of BNB or MNB is superior. Justify this conclusion with plots/tables.

4. (2 marks) Consider varying the number of features (words) used by BNB and MNB in the classification, using the sklearn setting which limits the number to the top N most frequent words in the Vectorizer. Compare classification results for various values of N and justify, based on experimental results, one value of N that works well overall; use this value for the rest of the assignment. Show plots or tables that support your decision. The emphasis is on clear presentation of the results, so do not print out large tables or too many tables that are difficult to understand. (A sketch of these experiments appears after Question 5.)

5. (5 marks) Choose one other machine learning method, perhaps one mentioned in the lecture. Summarize this method in a single tutorial-style paragraph and explain why you think it is suitable for topic classification on this dataset (for example, maybe other people have used this method for a similar problem). Use the implementation of this method from a standard machine learning library such as sklearn (not other people's code from the Internet) to apply it to the music dataset, using the same text preprocessing as for BNB and MNB. If the method has any hyperparameters for tuning, explain how you will select those settings (or use the default settings), and present a concrete hypothesis for how this method will compare to BNB and MNB. Conduct experiments (and show the code for these experiments) using cross-validation and comment on whether you confirmed (or not) your hypothesis. Finally, compare this method to BNB and MNB on the metrics you used in Question 3 and choose one overall "best" method and settings for topic classification.
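As a starting point for the experiments in Questions 2–4, a hedged sketch of comparing BNB and MNB under cross-validation while sweeping the maximum number of features; the preprocessing settings and macro-F1 scoring shown here are illustrative defaults, not the "best" choices you are asked to find and justify:

```python
# Sketch of the Part 1 experiments, assuming the `songs` frame from the
# loading sketch above. All settings here are placeholders to vary.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.pipeline import make_pipeline

X, y = songs["document"], songs["topic"]

for name, clf in [("BNB", BernoulliNB()), ("MNB", MultinomialNB())]:
    for n in [500, 1000, 2000, 5000, None]:  # None = no limit on features
        pipe = make_pipeline(
            CountVectorizer(lowercase=True, stop_words="english", max_features=n),
            clf,
        )
        # 5-fold cross-validation on the full dataset; macro-F1 is one
        # defensible choice if the topic distribution turns out imbalanced.
        scores = cross_val_score(pipe, X, y, cv=5, scoring="f1_macro")
        print(f"{name} max_features={n}: {scores.mean():.3f} +/- {scores.std():.3f}")
```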
Part 2. Recommendation Methods

1. (6 marks) The aim is to use the information retrieval algorithms for "matching" user profiles to "documents" described in the lecture as a recommendation method. The overall idea is that the classifier from Part 1 will assign a new song to one of the 5 topics, and this song will be recommended to the user if the tf-idf vector for the song is similar to the tf-idf vector for the profile of the user in the predicted topic. The user profile for each topic will consist of the words, or top M words, representing the interests of the user in that topic, computed as a tf-idf vector across all songs predicted to be in that topic that are of interest to the user.

To get started, assume there is "training data" for the user profiles and "test data" for the recommender, defined as follows. There are 1500 songs in the file. Suppose that the order in the file is the time ordering of the songs, and that these songs came from a series of weeks, with 250 songs from each week. Assume Weeks 1–3 (songs 1–750) form the training data and Week 4 (songs 751–1000) the test data.

After splitting the training set into topics, use TfidfVectorizer on the documents in a topic to create a tf-idf matrix that defines a vector for each document (song) in that topic in the training set (so construct 5 such matrices). Use these tf-idf values to define a user profile, which consists of a vector for each of the 5 topics. To do this, for each topic, combine the songs from the training set predicted to be in that topic that the user "likes" into one (larger) document, so that there are 5 documents, one for each topic, and use the vectorizer defined above to define a tf-idf vector for each such document (topic).

Unfortunately we do not have any real users for our recommender system (because it has not yet been built!), but we want some idea of how well it would perform. We invent two hypothetical users and simulate their use of the system. We specify the interests of each user with a set of keywords for each topic. These user profiles can be found in the files user1.tsv and user2.tsv, where each line in the file is a topic followed by (after a tab) a list of keywords. All the words are case insensitive. Important: although we know the pairing of topics and keywords, all the recommender system "knows" is what songs the user liked in each topic.

Develop user profiles for User 1 and User 2 from the simulated training data (not the keywords used to define their interests) by supposing they liked all the songs from Weeks 1–3 that matched their interests and were predicted to be in the right topic; i.e. assume the true topic is not known, but instead the topic classifier is used to predict the song topic, and the song is "shown" to the user under that topic. Print the top 20 words in their profiles for each of the topics, and comment on whether these words seem reasonable. Define another hypothetical "user" (User 3) by choosing different keywords across a range of topics (perhaps those that match your interests or those of someone you know), and print the top 20 keywords in their profile for each of their topics of interest. Comment on whether these words seem reasonable. (A sketch of this profile construction appears after Question 2.)

2. (6 marks) Suppose a user sees N recommended songs and "likes" some of them. Choose and justify appropriate metrics to evaluate the performance of the recommendation method. Also choose an appropriate value of N based on how you think the songs will be presented. Pay attention to the large variety of songs and the need to obtain useful feedback from the user (i.e. they must like some songs shown to them).

Evaluate the performance of the recommendation method by testing how well the top N songs that the recommender suggests for Week 4, based on the user profiles, match the interests of each user. That is, assume that each user likes all and only those songs in the top N recommendations that matched their profile for the predicted (not true) topic (where N is your chosen value). State clearly whether you are showing N songs in total or N songs per topic. As part of the analysis, consider various values of M, the number of words in the user profile for each topic, compared to using all words. Show the metrics for some of the matching algorithms to see which performs better for Users 1, 2 and 3. Explain any differences between the users. On the basis of these results, choose one algorithm for matching user profiles and songs, and explain your decision.
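One possible shape for the Part 2 pipeline, from profile construction (Question 1) to ranked top-N recommendation (Question 2). It assumes `classifier` is the fitted Part 1 pipeline, `songs` is the frame from the loading sketch, `liked` is a boolean column marking the songs a simulated user liked in Weeks 1–3, and `likes_song` is an oracle you build from the keyword files; all of these names, the cosine similarity measure, and N = 10 are assumptions to replace with your own choices:

```python
# Sketch only: names, N, and the similarity measure are placeholder choices.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train = songs.iloc[:750].copy()      # Weeks 1-3 (training data)
test = songs.iloc[750:1000].copy()   # Week 4 (test data)
train["pred_topic"] = classifier.predict(train["document"])
test["pred_topic"] = classifier.predict(test["document"])

# One TfidfVectorizer (and tf-idf matrix) per predicted topic -- 5 in total.
vectorizers, profiles = {}, {}
for topic, group in train.groupby("pred_topic"):
    vec = TfidfVectorizer(stop_words="english")
    vec.fit(group["document"])
    vectorizers[topic] = vec
    # Combine the liked songs in this topic into one large document and
    # vectorize it with the same vectorizer to get the profile vector.
    liked_docs = group.loc[group["liked"], "document"]
    if len(liked_docs) > 0:
        profiles[topic] = vec.transform([" ".join(liked_docs)])

def top_words(topic, m=20):
    """Top m words in the user's profile for a topic (Question 1)."""
    order = np.argsort(profiles[topic].toarray().ravel())[::-1][:m]
    return vectorizers[topic].get_feature_names_out()[order]

# Rank Week 4 songs by similarity to the profile in the predicted topic.
N = 10  # placeholder; choose and justify your own value
scored = []
for idx, row in test.iterrows():
    topic = row["pred_topic"]
    if topic in profiles:
        song_vec = vectorizers[topic].transform([row["document"]])
        scored.append((cosine_similarity(song_vec, profiles[topic])[0, 0], idx))
top_n = [idx for _, idx in sorted(scored, reverse=True)[:N]]

# Precision@N, with likes_song as the simulated-user oracle (hypothetical helper).
precision_at_n = sum(likes_song(i) for i in top_n) / N
```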
Part 3. User Evaluation

1. (5 marks) Conduct a "user study" of a hypothetical recommender system based on the method chosen in Part 2. Your evaluation in Part 2 will have included a choice of the number N of songs to show the user at any one time. For simplicity, suppose the user uses your system once per week. Simulate running the recommender system for 3 weeks, training the model at the end of Week 3 using interaction data obtained from the user, and testing the recommendations that would be provided to that user in Week 4.

Choose one friendly "subject" and ask them to view (successively over a period of 4 simulated weeks) N songs chosen at random for each "week", for Weeks 1, 2 and 3, and then (after training the model) the recommended songs from Week 4. The subject could be someone else from the course, but preferably is someone without knowledge of recommendation algorithms who will give useful and unbiased feedback.

To be more precise, the user is shown 3 randomly chosen batches of N songs, one batch from Week 1 (N songs from 1–250), one from Week 2 (N songs from 251–500), and one from Week 3 (N songs from 501–750), and says which of these they "like". This gives training data from which you can then train a recommendation model using the method from Part 2. The user is then shown a batch of recommended songs from Week 4 (N songs from 751–1000) in rank order, and metrics are calculated based on which of these songs the user likes. Show all these metrics in a suitable form (plots or tables). Ask the subject to talk aloud, but make sure you find out which songs they are interested in. Calculate and show the various metrics for the Week 4 recommended songs that you would show using the model developed in Part 2. Explain any differences between the metrics calculated in Part 2 and the metrics obtained from the real user. Finally, mention any general user feedback concerning the quality of the recommendations. (A sketch of the weekly batch simulation follows.)
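A small sketch of the Week 1–3 batch selection for the user study, assuming the week boundaries above and the `songs` frame from the loading sketch; `record_likes` stands in for however you collect the subject's feedback (a hypothetical helper, not a real function):

```python
# Sketch of the simulated weekly sessions; N and record_likes are placeholders.
import random
import pandas as pd

random.seed(0)  # fix the batches so the study is reproducible
N = 10  # placeholder, same value as chosen in Part 2
weeks = [(0, 250), (250, 500), (500, 750)]  # 0-based slices for Weeks 1-3

batches = []
for start, end in weeks:
    # N random songs from this week, shown to the subject in one session.
    batch = songs.iloc[random.sample(range(start, end), N)]
    batches.append(batch)

# record_likes(batch) -> boolean Series of which songs the subject liked;
# this feedback becomes the `liked` column used to train the Part 2 model.
# liked = pd.concat([record_likes(b) for b in batches])
```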
Submission and Assessment

• Please include your name and zid at the start of the notebook.
• Make sure your notebook runs cleanly and correctly from beginning to end.
• Do not clear cells in the notebook before submission.
• Make sure your plots are in your notebook and not loaded from your file system.
• Submit your notebook file using the following command:
  give cs9727 asst .ipynb
  You can check that your submission has been received using the command:
  9727 classrun -check asst
• Assessment criteria include the correctness and thoroughness of code and experimental analysis, clarity and succinctness of explanations, and presentation quality.

Plagiarism

Remember that ALL work submitted for this assignment must be your own work and no sharing or copying of code or answers is allowed. You may discuss the assignment with other students but must not collaborate on developing answers to the questions. You may use code from the Internet only with suitable attribution of the source. You may not use ChatGPT or any similar software to generate any part of your explanations, evaluations or code. Do not use public code repositories on sites such as GitHub or file sharing sites such as Google Drive to save any part of your work; make sure your code repository or cloud storage is private, and do not share any links. This also applies after you have finished the course, as we do not want next year's students accessing your solution, and plagiarism penalties can still apply after the course has finished.

All submitted assignments will be run through plagiarism detection software to detect similarities to other submissions, including from past years. You should carefully read the UNSW policy on academic integrity and plagiarism (linked from the course web page), noting in particular that collusion (working together on an assignment, or sharing parts of assignment solutions) is a form of plagiarism. Finally, do not use any contract cheating "academies" or online "tutoring" services. This counts as serious misconduct with heavy penalties, up to automatic failure of the course with 0 marks, and expulsion from the university for repeat offenders.
