[
  {
    "path": "README.txt",
    "content": "This repository contains the 6th solution on KDD Cup 2020 Challenges\nfor Modern E-Commerce Platform: Debiasing Challenge.\n\nskewcy@gmail.com\n"
  },
  {
    "path": "README_CN.md",
    "content": "# KDDCUP-2020\n2020-KDDCUP，Debiasing赛道 第6名解决方案\n\nThis repository contains the 6th solution on KDD Cup 2020 Challenges for Modern E-Commerce Platform: Debiasing Challenge.\n\n赛题链接：https://tianchi.aliyun.com/competition/entrance/231785/introduction\n\n解决方案blog: https://zhuanlan.zhihu.com/p/149424540\n\n数据集下载链接：\nunderexpose_train.zip\t271.62MB\thttp://tianchi-competition.oss-cn-hangzhou.aliyuncs.com/231785/underexpose_train.zip\nunderexpose_test.zip\t3.27MB\t   http://tianchi-competition.oss-cn-hangzhou.aliyuncs.com/231785/underexpose_test.zip\n\n数据集解压密码：\n\n        7c2d2b8a636cbd790ff12a007907b2ba underexpose_train_click-1\n        ea0ec486b76ae41ed836a8059726aa85 underexpose_train_click-2\n        65255c3677a40bf4d341b0c739ad6dff underexpose_train_click-3\n        c8376f1c4ed07b901f7fe5c60362ad7b underexpose_train_click-4\n        63b326dc07d39c9afc65ed81002ff2ab underexpose_train_click-5\n        f611f3e477b458b718223248fd0d1b55 underexpose_train_click-6\n        ec191ea68e0acc367da067133869dd60 underexpose_train_click-7\n        90129a980cb0a4ba3879fb9a4b177cd2 underexpose_train_click-8\n        f4ff091ab62d849ba1e6ea6f7c4fb717 underexpose_train_click-9\n\n        96d071a532e801423be614e9e8414992 underexpose_test_click-1\n        503bf7a5882d3fac5ca9884d9010078c underexpose_test_click-2\n        dd3de82d0b3a7fe9c55e0b260027f50f underexpose_test_click-3\n        04e966e4f6c7b48f1272a53d8f9ade5d underexpose_test_click-4\n        13a14563bf5528121b8aaccfa7a0dd73 underexpose_test_click-5\n        dee22d5e4a7b1e3c409ea0719aa0a715 underexpose_test_click-6\n        69416eedf810b56f8a01439e2061e26d underexpose_test_click-7\n        55588c1cddab2fa5c63abe5c4bf020e5 underexpose_test_click-8\n        caacb2c58d01757f018d6b9fee0c8095 underexpose_test_click-9\n\n\n\n## 解决方案\n1. 如下文件结构所示，我们先对数据做预处理“1_DataPreprocessing”，将倒数第二次点击当答案生成线下训练集（存于user_data/model_1），将倒数第一次\n点击当答案生成线下验证集（存于user_data/offline），线上待预测数据存于user_data/dataset。我们依据点击数的周期变换，将time转换为了\n日期（04_TransformDateTime-Copy1.py），还生成了文本相似性、图像相似性文件（05_Generate_img_txt_vec.py）。\n\n2. 依次选用线下训练集、线下验证集和线上待预测数据中的点击日志训练deepwalk、node2vec模型（“deep_node_model.py”）。进而，融合文本相似性\n、deepwalk、node2vec修改了ItemCF算法，计算并存储商品相似性（“01_itemCF_Mundane_model1.py”等）。此外，基于召回的商品相似性构建商品相似性网络，\n计算并存储RA、AA、CN、HDI、HPI、LHN1等二阶相似性（“RA_Wu_model1.py”等）。\n\n3. 实现Self-Attentive Sequnetial Model，预测召回的用户-商品对的发生点击的概率（“3_NN”）。\n\n4. 基于存储的商品相似性为每个待预测用户召回1000候选商品（“3_Recall”）。\n5. 为召回列表中的商品-用户对生成排序特征（“4_RankFeature”）。\n\n6. 
\n## File structure\nThe data can be downloaded from the official competition site; create the folders and place the data according to the paths below.\n\n    │  feature_list.csv                               # List the features we used in ranking process\n    │  main.sh                                        # Run this script to start the whole process\n    │  project_structure.txt                          # The tree structure of this project\n    │  \n    ├─code\n    │  │  __init__.py\n    │  │  \n    │  ├─1_DataPreprocessing                          # Generate validation-set, create timestamp and generate item feature vectors\n    │  │      01_Generate_Offline_Dataset_origin.py   \n    │  │      02_Generate_Model1_Dataset_origin.py\n    │  │      03_Create_Model1_Answer.py\n    │  │      03_Create_Offline_Answer.py\n    │  │      04_TransformDateTime-Copy1.py\n    │  │      05_Generate_img_txt_vec.py\n    │  │      ipynb_file.zip\n    │  │      \n    │  ├─2_Similarity                                 # Generate item-item similarity matrix \n    │  │      01_itemCF_Mundane_model1.py\n    │  │      01_itemCF_Mundane_offline.py\n    │  │      01_itemCF_Mundane_online.py\n    │  │      deep_node_model.py\n    │  │      ipynb_file.zip\n    │  │      RA_Wu_model1.py\n    │  │      RA_Wu_offline.py\n    │  │      RA_Wu_online.py\n    │  │      \n    │  ├─3_NN                                         # Generate deep-learning based result\n    │  │      config.py\n    │  │      ItemFeat2.py\n    │  │      model2.py\n    │  │      modules.py\n    │  │      Readme\n    │  │      sampler2.py\n    │  │      sas_rec.py\n    │  │      util.py\n    │  │      \n    │  ├─3_Recall                                     # Recall candidates\n    │  │      01_Recall-Wu-model1.py\n    │  │      01_Recall-Wu-offline.py\n    │  │      01_Recall-Wu-online.py\n    │  │      ipynb_file.zip\n    │  │      \n    │  ├─4_RankFeature                                # Generate feature for ranking\n    │  │      01_sim_feature_model1.py\n    │  │      01_sim_feature_model1_RA_AA.py\n    │  │      01_sim_feature_offline.py\n    │  │      01_sim_feature_offline_RA_AA.py\n    │  │      ……\n    │  │      10_emergency_feature_offline.py\n    │  │      10_emergency_feature_online.py\n    │  │      4_RankFeature.zip\n    │  │      \n    │  └─5_Modeling                                  # Build Catboost and LightGBM model\n    │          ipynb_file.zip\n    │          Model_Offline.py\n    │          Model_Online.py\n    │          \n    ├─data                                           # Original dataset\n    │  ├─underexpose_test\n    │  └─underexpose_train\n    ├─prediction_result\n    └─user_data                                      # Intermediate files\n        ├─dataset\n        │  ├─new_recall\n        │  ├─new_similarity\n        │  └─nn\n        ├─model_1\n        │  ├─new_recall\n        │  ├─new_similarity\n        │  └─nn\n        └─offline\n            ├─new_recall\n            ├─new_similarity\n            └─nn\n        \n
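\nFor reference, the second-order scores produced by the RA_Wu_*.py scripts listed above are classic link-prediction indices on the item similarity graph: RA sums 1/deg(z) over common neighbors z, AA sums 1/log(deg(z)), CN counts common neighbors, and HDI, HPI, and LHN1 normalize CN by the maximum degree, minimum degree, and degree product, respectively. A minimal networkx sketch on a toy graph (the real scripts run on the graph built from the recalled ItemCF similarities):\n\n```python\nimport networkx as nx\n\nG = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])       # toy item graph\npairs = [(1, 4)]\nprint(list(nx.resource_allocation_index(G, pairs)))  # RA\nprint(list(nx.adamic_adar_index(G, pairs)))          # AA\nprint(len(list(nx.common_neighbors(G, 1, 4))))       # CN\n```\n\nLikewise, the three-mean fusion in step 6 (5_Modeling) amounts to averaging the two GBDT scores in three ways; a minimal numpy sketch with hypothetical score arrays:\n\n```python\nimport numpy as np\n\ncat_pred = np.array([0.9, 0.2, 0.6])  # hypothetical CatBoost scores\nlgb_pred = np.array([0.7, 0.4, 0.5])  # hypothetical LightGBM scores\narith = (cat_pred + lgb_pred) / 2\ngeom = np.sqrt(cat_pred * lgb_pred)\nharm = 2 / (1 / cat_pred + 1 / lgb_pred)\n```\n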
\n## Python dependencies\n    lightgbm==2.2.1\n    tensorflow==1.13.1\n    joblib==0.15.1\n    gensim==3.4.0\n    pandas==0.25.1\n    numpy==1.16.3\n    networkx==2.4\n    tqdm==4.46.0\n\n## Statement\nThis repository hosts the code of our entry to the KDD Cup 2020 Debiasing challenge; all code is provided for learning and reference only. If you have any issue, please feel free to contact me at cs_xcy@126.com.\n\nTianchi IDs: GrandRookie, BruceQD, 七里z, 青禹小生, 蓝绿黄红, LSH123, XMNG, wenwen_123, **小雨姑娘**, wbbhcb\n"
  },
  {
    "path": "code/1_DataPreprocessing/01_Generate_Offline_Dataset_origin.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[29]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nimport warnings \r\nwarnings.filterwarnings(\"ignore\") \r\n\r\n\r\n# In[30]:\r\n\r\n\r\ncurrent_stage = 9\r\npath = './data/'\r\noutput_path = './user_data/offline/'\r\ninput_header = 'underexpose_'\r\noutput_header = 'offline_'\r\n\r\n#path = 'offline/'\r\n#input_header = 'offline_'\r\n#output_header = 'model1/model_1_'\r\n\r\n\r\n# In[31]:\r\n\r\n\r\ndf_train_list = [pd.read_csv(path+'underexpose_train/'+input_header+'train_click-%d.csv'%x,\r\n                             header=None,\r\n                             names=['user_id', 'item_id', 'time']) for x in range(current_stage + 1)]\r\nfor x, df_train in enumerate(df_train_list):\r\n    df_train.to_csv('./user_data/dataset/' + input_header + 'train_click-%d.csv'%x, index=False,header=None)\r\n\r\ndf_train = pd.concat(df_train_list)\r\ndf_train = df_train.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\ndf_train = df_train.reset_index(drop=True)\r\n\r\n\r\n# In[32]:\r\n\r\n\r\ndf_test_list = [pd.read_csv(path+'underexpose_test/'+input_header+'test_click-%d.csv'%x,\r\n                             header=None,\r\n                             names=['user_id', 'item_id', 'time']) for x in range(current_stage + 1)]\r\nfor x, df_test in enumerate(df_test_list):\r\n    df_test.to_csv('./user_data/dataset/' + input_header + 'test_click-%d.csv'%x, index=False,header=None)\r\ndf_test = pd.concat(df_test_list)\r\ndf_test = df_test.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\ndf_test = df_test.reset_index(drop=True)\r\n\r\n\r\n# In[33]:\r\n\r\n\r\ndf = pd.concat([df_train,df_test])\r\ndf = df.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\ndf = df.reset_index(drop=True)\r\n\r\n\r\n# In[34]:\r\n\r\n\r\n# if you are generating the offline dataset please use the comment sentense\r\n\r\n# df_pred_list = [pd.read_csv(path+input_header+'test_qtime-%d.csv'%x,\r\n#                              header=None,\r\n#                              names=['user_id','item_id','time']) for x in range(current_stage + 1)]\r\n\r\n#online\r\ndf_pred_list = [pd.read_csv(path+'underexpose_test/'+input_header+'test_qtime-%d.csv'%x,\r\n                             header=None,\r\n                             names=['user_id','time']) for x in range(current_stage + 1)]\r\n\r\nfor x, df_pred in enumerate(df_pred_list):\r\n    df_pred.to_csv('./user_data/dataset/' + input_header + 'test_qtime-%d.csv'%x, index=False,header=None)\r\n\r\n\r\n# In[35]:\r\n\r\n\r\nfor i in range(current_stage + 1):\r\n    if 'item_id' in df_pred_list[i].columns:\r\n        df_pred_list[i] = df_pred_list[i][['user_id','time']]\r\n\r\n\r\n# In[36]:\r\n\r\n\r\ndf_list = []\r\n\r\nfor i in range(current_stage + 1):\r\n    df_0 = pd.concat([df_train_list[i], df_test_list[i],df_pred_list[i]])\r\n    df_0 = df_0.sort_values(by=['time'])\r\n    df_0 = df_0.reset_index(drop=True)\r\n    df_list.append(df_0)\r\n\r\n\r\n# In[37]:\r\n\r\n\r\nfor i in range(current_stage + 1):\r\n    count_log = []\r\n    for index, row in df_pred_list[i].iterrows():\r\n        count_log.append(sum((df_list[i]['user_id']==row['user_id']) & (df_list[i]['time']<row['time']) ))\r\n    df_pred_list[i]['count_log'] = count_log\r\n\r\n\r\n# In[38]:\r\n\r\n\r\nlist_train_list = [[] for x in range(current_stage + 1)]\r\nlist_test_list = [[] for x in range(current_stage + 1)]\r\n\r\nfor each_stage_out in 
range(current_stage + 1):\r\n    \r\n    fout = open(output_path + output_header + 'test_qtime-%d.csv'%each_stage_out,'w')\r\n    \r\n    for i, row in df_pred_list[each_stage_out].iterrows():\r\n        if row['count_log'] < 3:\r\n            continue    \r\n\r\n        df_tmp = df_list[each_stage_out][df_list[each_stage_out]['user_id']==row['user_id']]\r\n        \r\n        if sum(df_tmp['time']==max(df_tmp['time'])) > 1:\r\n            row_tmp = df_list[each_stage_out].loc[df_tmp[ (df_tmp['time']==max(df_tmp['time']) ) & (~np.isnan(df_tmp['item_id'] )) ].index[0]]\r\n            user_id_tmp = row_tmp['user_id']\r\n            item_id_tmp = row_tmp['item_id']\r\n            time_tmp = row_tmp['time']\r\n            fout.write(str(int(user_id_tmp)) + ',' + str(int(item_id_tmp)) + ',' + str(time_tmp) + '\\n')\r\n        else:\r\n            row_tmp = df_list[each_stage_out].loc[df_tmp.index[-2]]\r\n            user_id_tmp = row_tmp['user_id']\r\n            item_id_tmp = row_tmp['item_id']\r\n            time_tmp = row_tmp['time']            \r\n            fout.write(str(int(user_id_tmp)) + ',' + str(int(item_id_tmp)) + ',' + str(time_tmp) + '\\n')\r\n        \r\n        for each_stage_in in range(current_stage + 1):\r\n            list_train_list[each_stage_in] += list(df_train_list[each_stage_in][(df_train_list[each_stage_in]['user_id']==row['user_id'])\r\n                                       &(df_train_list[each_stage_in]['item_id']==item_id_tmp)].index)\r\n\r\n            list_test_list[each_stage_in] += list(df_test_list[each_stage_in][(df_test_list[each_stage_in]['user_id']==row['user_id'])\r\n                                     &(df_test_list[each_stage_in]['item_id']==item_id_tmp)].index)\r\n    fout.close()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[39]:\r\n\r\n\r\ndf_train_list = [x.drop(labels=list_train_list[i],axis=0) for i,x in enumerate(df_train_list)]\r\n\r\n\r\n# In[40]:\r\n\r\n\r\ndf_test_list = [x.drop(labels=list_test_list[i],axis=0) for i,x in enumerate(df_test_list)]\r\n\r\n\r\n# In[41]:\r\n\r\n\r\ndf_train_list = [x.reset_index(drop=True) for x in df_train_list]\r\ndf_test_list = [x.reset_index(drop=True) for x in df_test_list]\r\n\r\n\r\n# In[42]:\r\n\r\n\r\nfor i in range(current_stage + 1):\r\n    df_train_list[i].to_csv(output_path + output_header+'train_click-%d.csv'%i,index=False,header=None)\r\n    df_test_list[i].to_csv(output_path + output_header+'test_click-%d.csv'%i,index=False,header=None)\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/1_DataPreprocessing/02_Generate_Model1_Dataset_origin.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[10]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nimport warnings \r\nwarnings.filterwarnings(\"ignore\") \r\n\r\n\r\n# In[23]:\r\n\r\n\r\ncurrent_stage = 9\r\n#path = 'dataset/'\r\n#input_header = 'underexpose_'\r\n#output_header = 'offline/offline_'\r\n\r\npath = './user_data/offline/'\r\noutput_path = './user_data/model_1/'\r\ninput_header = 'offline_'\r\noutput_header = 'model_1_'\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf_train_list = [pd.read_csv(path+input_header+'train_click-%d.csv'%x,\r\n                             header=None,\r\n                             names=['user_id', 'item_id', 'time']) for x in range(current_stage + 1)]\r\ndf_train = pd.concat(df_train_list)\r\ndf_train = df_train.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\ndf_train = df_train.reset_index(drop=True)\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf_test_list = [pd.read_csv(path+input_header+'test_click-%d.csv'%x,\r\n                             header=None,\r\n                             names=['user_id', 'item_id', 'time']) for x in range(current_stage + 1)]\r\ndf_test = pd.concat(df_test_list)\r\ndf_test = df_test.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\ndf_test = df_test.reset_index(drop=True)\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf = pd.concat([df_train,df_test])\r\ndf = df.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\ndf = df.reset_index(drop=True)\r\n\r\n\r\n# In[15]:\r\n\r\n\r\n# if you are generating the offline dataset please use the comment sentense\r\n\r\ndf_pred_list = [pd.read_csv(path+input_header+'test_qtime-%d.csv'%x,\r\n                             header=None,\r\n                             names=['user_id','item_id','time']) for x in range(current_stage + 1)]\r\n\r\n#online\r\n#df_pred_list = [pd.read_csv(path+input_header+'test_qtime-%d.csv'%x,\r\n#                              header=None,\r\n#                              names=['user_id','time']) for x in range(current_stage + 1)]\r\n\r\n\r\n# In[16]:\r\n\r\n\r\nfor i in range(current_stage + 1):\r\n    if 'item_id' in df_pred_list[i].columns:\r\n        df_pred_list[i] = df_pred_list[i][['user_id','time']]\r\n\r\n\r\n# In[17]:\r\n\r\n\r\ndf_list = []\r\n\r\nfor i in range(current_stage + 1):\r\n    df_0 = pd.concat([df_train_list[i], df_test_list[i],df_pred_list[i]])\r\n    df_0 = df_0.sort_values(by=['time'])\r\n    df_0 = df_0.reset_index(drop=True)\r\n    df_list.append(df_0)\r\n\r\n\r\n# In[18]:\r\n\r\n\r\nfor i in range(current_stage + 1):\r\n    count_log = []\r\n    for index, row in df_pred_list[i].iterrows():\r\n        count_log.append(sum((df_list[i]['user_id']==row['user_id']) & (df_list[i]['time']<row['time']) ))\r\n    df_pred_list[i]['count_log'] = count_log\r\n\r\n\r\n# In[24]:\r\n\r\n\r\nlist_train_list = [[] for x in range(current_stage + 1)]\r\nlist_test_list = [[] for x in range(current_stage + 1)]\r\n\r\nfor each_stage_out in range(current_stage + 1):\r\n    \r\n    fout = open(output_path + output_header + 'test_qtime-%d.csv'%each_stage_out,'w')\r\n    \r\n    for i, row in df_pred_list[each_stage_out].iterrows():\r\n        if row['count_log'] < 3:\r\n            continue    \r\n\r\n        df_tmp = df_list[each_stage_out][df_list[each_stage_out]['user_id']==row['user_id']]\r\n        \r\n        if sum(df_tmp['time']==max(df_tmp['time'])) > 1:\r\n            row_tmp = df_list[each_stage_out].loc[df_tmp[ (df_tmp['time']==max(df_tmp['time']) ) & 
(~np.isnan(df_tmp['item_id'] )) ].index[0]]\r\n            user_id_tmp = row_tmp['user_id']\r\n            item_id_tmp = row_tmp['item_id']\r\n            time_tmp = row_tmp['time']\r\n            fout.write(str(int(user_id_tmp)) + ',' + str(int(item_id_tmp)) + ',' + str(time_tmp) + '\\n')\r\n        else:\r\n            row_tmp = df_list[each_stage_out].loc[df_tmp.index[-2]]\r\n            user_id_tmp = row_tmp['user_id']\r\n            item_id_tmp = row_tmp['item_id']\r\n            time_tmp = row_tmp['time']            \r\n            fout.write(str(int(user_id_tmp)) + ',' + str(int(item_id_tmp)) + ',' + str(time_tmp) + '\\n')\r\n        \r\n        for each_stage_in in range(current_stage + 1):\r\n            list_train_list[each_stage_in] += list(df_train_list[each_stage_in][(df_train_list[each_stage_in]['user_id']==row['user_id'])\r\n                                       &(df_train_list[each_stage_in]['item_id']==item_id_tmp)].index)\r\n\r\n            list_test_list[each_stage_in] += list(df_test_list[each_stage_in][(df_test_list[each_stage_in]['user_id']==row['user_id'])\r\n                                     &(df_test_list[each_stage_in]['item_id']==item_id_tmp)].index)\r\n    fout.close()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[25]:\r\n\r\n\r\ndf_train_list = [x.drop(labels=list_train_list[i],axis=0) for i,x in enumerate(df_train_list)]\r\n\r\n\r\n# In[26]:\r\n\r\n\r\ndf_test_list = [x.drop(labels=list_test_list[i],axis=0) for i,x in enumerate(df_test_list)]\r\n\r\n\r\n# In[27]:\r\n\r\n\r\ndf_train_list = [x.reset_index(drop=True) for x in df_train_list]\r\ndf_test_list = [x.reset_index(drop=True) for x in df_test_list]\r\n\r\n\r\n# In[28]:\r\n\r\n\r\nfor i in range(current_stage + 1):\r\n    df_train_list[i].to_csv(output_path+output_header+'train_click-%d.csv'%i,index=False,header=None)\r\n    df_test_list[i].to_csv(output_path+output_header+'test_click-%d.csv'%i,index=False,header=None)\r\n\r\n"
  },
  {
    "path": "code/1_DataPreprocessing/03_Create_Model1_Answer.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[6]:\r\n\r\n\r\nfrom collections import defaultdict\r\n\r\ncurrent_phases = 9\r\nnumber = 1\r\n\r\ndef _create_answer_file_for_evaluation(answer_fname='debias_track_answer.csv'):\r\n\r\n    \r\n    train = './user_data/model_'+str(number)+'/model_'+str(number)+'_train_click-%d.csv'\r\n    test = './user_data/model_'+str(number)+'/model_'+str(number)+'_test_click-%d.csv'\r\n\r\n\r\n    answer = './user_data/model_'+str(number)+'/model_'+str(number)+'_test_qtime-%d.csv'\r\n\r\n    item_deg = defaultdict(lambda: 0)\r\n    with open(answer_fname, 'w') as fout:\r\n        for phase_id in range(current_phases+1):\r\n            with open(train % phase_id) as fin:\r\n                for line in fin:\r\n                    user_id, item_id, timestamp = line.split(',')\r\n                    user_id, item_id, timestamp = (\r\n                        int(user_id), int(item_id), float(timestamp))\r\n                    item_deg[item_id] += 1\r\n            with open(test % phase_id) as fin:\r\n                for line in fin:\r\n                    user_id, item_id, timestamp = line.split(',')\r\n                    user_id, item_id, timestamp = (\r\n                        int(user_id), int(item_id), float(timestamp))\r\n                    item_deg[item_id] += 1\r\n            with open(answer % phase_id) as fin:\r\n                for line in fin:\r\n                    user_id, item_id, timestamp = line.split(',')\r\n                    user_id, item_id, timestamp = (\r\n                        int(user_id), int(item_id), float(timestamp))\r\n                    assert user_id % 11 == phase_id\r\n                    print(phase_id, user_id, item_id, item_deg[item_id],\r\n                          sep=',', file=fout)\r\n\r\n\r\n# In[7]:\r\n\r\n\r\n_create_answer_file_for_evaluation('./user_data/model_'+str(number)+'/model_'+str(number)+'_debias_track_answer.csv')\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/1_DataPreprocessing/03_Create_Offline_Answer.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nfrom collections import defaultdict\r\n\r\ncurrent_phases = 9\r\n\r\ndef _create_answer_file_for_evaluation(answer_fname='debias_track_answer.csv'):\r\n    train = './user_data/offline/offline_train_click-%d.csv'\r\n    test = './user_data/offline/offline_test_click-%d.csv'\r\n\r\n    \r\n#     train = 'model'+str(number)+'/model_'+str(number)+'_train_click-%d.csv'\r\n#     test = 'model'+str(number)+'/model_'+str(number)+'_test_click-%d.csv'\r\n\r\n    \r\n    # underexpose_test_qtime-T.csv contains only <user_id, item_id>\r\n    # underexpose_test_qtime_with_answer-T.csv contains <user_id, item_id, time>\r\n    #answer = 'model/model_test_qtime-%d.csv'  # not released\r\n    \r\n    answer = './user_data/offline/offline_test_qtime-%d.csv'\r\n\r\n#     answer = 'model'+str(number)+'/model_'+str(number)+'_test_qtime-%d.csv'\r\n\r\n    item_deg = defaultdict(lambda: 0)\r\n    with open(answer_fname, 'w') as fout:\r\n        for phase_id in range(current_phases+1):\r\n            with open(train % phase_id) as fin:\r\n                for line in fin:\r\n                    user_id, item_id, timestamp = line.split(',')\r\n                    user_id, item_id, timestamp = (\r\n                        int(user_id), int(item_id), float(timestamp))\r\n                    item_deg[item_id] += 1\r\n            with open(test % phase_id) as fin:\r\n                for line in fin:\r\n                    user_id, item_id, timestamp = line.split(',')\r\n                    user_id, item_id, timestamp = (\r\n                        int(user_id), int(item_id), float(timestamp))\r\n                    item_deg[item_id] += 1\r\n            with open(answer % phase_id) as fin:\r\n                for line in fin:\r\n                    user_id, item_id, timestamp = line.split(',')\r\n                    user_id, item_id, timestamp = (\r\n                        int(user_id), int(item_id), float(timestamp))\r\n                    assert user_id % 11 == phase_id\r\n                    print(phase_id, user_id, item_id, item_deg[item_id],\r\n                          sep=',', file=fout)\r\n\r\n\r\n# In[2]:\r\n\r\n\r\n_create_answer_file_for_evaluation('./user_data/offline/offline_debias_track_answer.csv')\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[3]:\r\n\r\n\r\n# _create_answer_file_for_evaluation('model'+str(number)+'/model_'+str(number)+'_debias_track_answer.csv')\r\n\r\n"
  },
  {
    "path": "code/1_DataPreprocessing/04_TransformDateTime-Copy1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd  \r\nfrom tqdm import tqdm  \r\nfrom collections import defaultdict  \r\nimport math  \r\nimport numpy as np\r\nimport datetime\r\n\r\n\r\n# In[2]:\r\n\r\n\r\nrandom_number_1 = 41152582\r\nrandom_number_2 = 1570909091\r\n\r\n\r\n# In[3]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\n\r\nnow_phase = 9\r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + '/offline_train_click-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time'])  \r\n    click_test = pd.read_csv(test_path + '/offline_test_click-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time'])  \r\n    click_query = pd.read_csv(test_path + '/offline_test_qtime-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time']) \r\n    \r\n    click_train['unix_time'] = click_train['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_train['datetime'] = click_train['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_train.to_csv(train_path+'/offline_train_click_{}_time.csv'.format(c),index=False)\r\n    \r\n    click_test['unix_time'] = click_test['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_test['datetime'] = click_test['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_test.to_csv(test_path+'/offline_test_click_{}_time.csv'.format(c),index=False)\r\n    \r\n    click_query['unix_time'] = click_query['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_query['datetime'] = click_query['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_query.to_csv(test_path+'/offline_test_qtime_{}_time.csv'.format(c),index=False)   \r\n    \r\n\r\n\r\n# In[4]:\r\n\r\n\r\nnum = 1\r\ntrain_path = './user_data/model_'+str(num)\r\ntest_path = './user_data/model_'+str(num)\r\n\r\nnow_phase = 9\r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + '/model_'+str(num)+'_train_click-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time'])  \r\n    click_test = pd.read_csv(test_path + '/model_'+str(num)+'_test_click-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time'])  \r\n    click_query = pd.read_csv(test_path + '/model_'+str(num)+'_test_qtime-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time']) \r\n    \r\n    click_train['unix_time'] = click_train['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_train['datetime'] = click_train['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_train.to_csv(train_path+'/model_'+str(num)+'_train_click_{}_time.csv'.format(c),index=False)\r\n    \r\n    click_test['unix_time'] = click_test['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_test['datetime'] = click_test['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_test.to_csv(test_path+'/model_'+str(num)+'_test_click_{}_time.csv'.format(c),index=False)\r\n    \r\n    click_query['unix_time'] = click_query['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_query['datetime'] = click_query['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    
click_query.to_csv(test_path+'/model_'+str(num)+'_test_qtime_{}_time.csv'.format(c),index=False)   \r\n    \r\n\r\n\r\n# In[5]:\r\n\r\n\r\ntrain_path = './user_data/dataset'  \r\ntest_path = './user_data/dataset'\r\n\r\nnow_phase = 9\r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + '/underexpose_train_click-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time'])  \r\n    click_test = pd.read_csv(test_path + '/underexpose_test_click-{}.csv'.format(c), header=None,  names=['user_id', 'item_id', 'time'])  \r\n    click_query = pd.read_csv(test_path + '/underexpose_test_qtime-{}.csv'.format(c), header=None,  names=['user_id', 'time']) \r\n    \r\n    click_train['unix_time'] = click_train['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_train['datetime'] = click_train['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_train.to_csv(train_path+'/underexpose_train_click_{}_time.csv'.format(c),index=False)\r\n    \r\n    click_test['unix_time'] = click_test['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_test['datetime'] = click_test['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_test.to_csv(test_path+'/underexpose_test_click_{}_time.csv'.format(c),index=False)\r\n    \r\n    click_query['unix_time'] = click_query['time'].apply(lambda x: x * random_number_2 + random_number_1)\r\n    click_query['datetime'] = click_query['unix_time'].apply(lambda x: datetime.datetime.fromtimestamp(x))\r\n    \r\n    click_query.to_csv(test_path+'/underexpose_test_qtime_{}_time.csv'.format(c),index=False)   \r\n    \r\n\r\n"
  },
  {
    "path": "code/1_DataPreprocessing/05_Generate_img_txt_vec.py",
    "content": "import pandas as pd\r\nfrom gensim.models import KeyedVectors\r\n\r\n\r\n\r\ntrain_path = './data/underexpose_train/'\r\nitem = pd.read_csv(train_path+'underexpose_item_feat.csv',header=None)\r\n\r\nitem[1] = item[1].apply(lambda x: float(str(x).replace('[', '')))\r\nitem[256] = item[256].apply(lambda x: float(str(x).replace(']', '')))\r\nitem[128] = item[128].apply(lambda x: float(str(x).replace(']', '')))\r\nitem[129] = item[129].apply(lambda x: float(str(x).replace('[', '')))\r\nitem.columns = ['item_id'] + ['txt_vec_{}'.format(f) for f in range(0, 128)] + ['img_vec_{}'.format(f) for f in\r\n                                                                                range(0, 128)]\r\nitem_nun=item['item_id'].nunique()\r\n\r\nitem[['item_id'] + ['img_vec_{}'.format(f) for f in range(0, 128)]].to_csv(\"user_data/w2v_img_vec.txt\", sep=\" \",\r\n                                                                                header=[str(item_nun), '128'] + [\"\"] * 127,\r\n                                                                                index=False,\r\n                                                                                encoding='UTF-8')\r\n\r\nitem[['item_id'] + ['txt_vec_{}'.format(f) for f in range(0, 128)]].to_csv(\"user_data/w2v_txt_vec.txt\",\r\n                                                                                sep=\" \",\r\n                                                                                header=[str(item_nun), '128'] + [\"\"] * 127,\r\n                                                                                index=False,\r\n                                                                                encoding='UTF-8')\r\n\r\ntxt_vec_model = KeyedVectors.load_word2vec_format(\"./user_data/\" + 'w2v_txt_vec.txt', binary=False)\r\ntxt_vec_model = KeyedVectors.load_word2vec_format(\"./user_data/\" + 'w2v_img_vec.txt', binary=False)"
  },
  {
    "path": "code/2_Similarity/01_itemCF_Mundane_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[13]:\r\n\r\n\r\nfrom __future__ import division\r\nfrom __future__ import print_function\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\nimport os\r\nimport math\r\nimport time\r\nimport random\r\nimport joblib\r\nimport itertools\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom tqdm import tqdm\r\nfrom collections import defaultdict\r\nimport pickle\r\nfrom multiprocessing import Pool as ProcessPool\r\nimport json\r\n\r\n\r\n# In[14]:\r\n\r\n\r\nrandom.seed(2020)\r\npd.set_option('display.unicode.ambiguous_as_wide', True)\r\npd.set_option('display.unicode.east_asian_width', True)\r\npd.set_option('display.max_columns', None)\r\npd.set_option('display.max_rows', None)\r\npd.set_option(\"display.max_colwidth\", 100)\r\npd.set_option('display.width', 1000)\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndef process(each_item):\r\n    dict_tmp = item_sim_list[each_item]\r\n    for j in dict_tmp:\r\n        dict_tmp[j] = round(dict_tmp[j],4)\r\n        dict_tmp[j] = round(dict_tmp[j],4)\r\n    \r\n    return (each_item,dict_tmp)\r\n\r\ndef myround(x, thres):\r\n    temp = 10**thres\r\n    return int(x * temp) / temp\r\n\r\n\r\n# In[16]:\r\n\r\n\r\nmyround = lambda x,thres : int(x * 10**thres) / 10**thres\r\n\r\n\r\n# In[17]:\r\n\r\n\r\ndef phase_predict(df, pred_col, top_fill, topk=50):\r\n    \"\"\"recom_df, 'sim', top50_click, \"click_valid\"\r\n    \"\"\"\r\n    top_fill = [int(t) for t in top_fill.split(',')]\r\n    top_fill = top_fill[:topk]\r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]\r\n    ids = list(df['user_id'].unique())\r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])\r\n    fill_df.sort_values('user_id', inplace=True)\r\n    fill_df['item_id'] = top_fill * len(ids)\r\n    fill_df[pred_col] = scores * len(ids)\r\n    df = df.append(fill_df)\r\n    df.sort_values(pred_col, ascending=False, inplace=True)\r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')\r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)\r\n    df.sort_values(\"rank\", inplace=True)\r\n    df = df[df[\"rank\"] <= topk]\r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',',\r\n                                                                                                   expand=True).reset_index()\r\n    return df\r\n\r\n\r\ndef get_sim_item(df_, user_col, item_col):#, nodewalk_model,deepwalk_model,txt_vec_model):\r\n    global txt_similarity\r\n    global deepwalk_similarity\r\n    global nodewalk_similarity\r\n\r\n    df = df_.copy()\r\n    user_item_ = df.groupby(user_col)[item_col].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_[user_col], user_item_[item_col]))\r\n\r\n    user_time_ = df.groupby(user_col)['time'].agg(list).reset_index()  # 引入时间因素\r\n    user_time_dict = dict(zip(user_time_[user_col], user_time_['time']))\r\n\r\n    item_user_ = df.groupby(item_col)[user_col].agg(set).reset_index()\r\n    item_user_dict = dict(zip(item_user_[item_col], item_user_[user_col]))\r\n\r\n    item_dic = df[item_col].value_counts().to_dict()\r\n\r\n    df.sort_values('time', inplace=True)\r\n    df.drop_duplicates('item_id', keep='first', inplace=True)\r\n    item_time_ = df.groupby(item_col)['time'].agg(list).reset_index()  # 引入时间因素\r\n    item_time_dict = dict(zip(item_time_[item_col], item_time_['time']))\r\n\r\n\r\n    sim_item = {}\r\n    item_cnt = defaultdict(int)  # 
商品被点击次数\r\n    for user, items in tqdm(user_item_dict.items()):\r\n        for loc1, item in enumerate(items):\r\n            users = item_user_dict[item]\r\n            item_cnt[item] += 1\r\n            sim_item.setdefault(item, {})\r\n            user_item_len = len(items)\r\n            for loc2, relate_item in enumerate(items):\r\n                if item == relate_item:\r\n                    continue\r\n                t1 = user_time_dict[user][loc1]  # 点击时间提取\r\n                t2 = user_time_dict[user][loc2]\r\n                delta_t = abs(t1 - t2) * 650000\r\n                delta_loc = abs(loc1 - loc2)\r\n                '''\r\n                The meaning of each columns:\r\n                {'sim': 0,------------------------0\r\n                  'item_cf': 0,-------------------1\r\n                  'item_cf_weighted': 0,----------2\r\n                  'time_diff': np.inf,------------3\r\n                  'loc_diff': np.inf,-------------4\r\n                  'node_sim_max': -1e8,-----------5\r\n                  'node_sim_sum':0,---------------6\r\n                  'deep_sim_max': -1e8,-----------7\r\n                  'deep_sim_sum':0----------------8\r\n                                          }\r\n                '''\r\n                \r\n                sim_item[item].setdefault(relate_item,\r\n                                          [0,0,0,np.inf,np.inf,-1e8,0,-1e8,0]\r\n                                         )\r\n                \r\n                \r\n                key = [str(int(item)), str(int(relate_item))]\r\n                key_tmp = \"_\".join(key)\r\n                \r\n                ##nodewalk\r\n                if key_tmp in nodewalk_similarity:\r\n                    node_sim = nodewalk_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        node_sim = 0.5 * nodewalk_model.similarity(str(item), str(relate_item))+ 0.5\r\n                    except:\r\n                        node_sim = 0.5\r\n                    nodewalk_similarity[key_tmp] = node_sim\r\n                    \r\n                ##deepwalk\r\n                if key_tmp in deepwalk_similarity:\r\n                    deep_sim = deepwalk_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        deep_sim = 0.5 * deepwalk_model.similarity(str(item), str(relate_item))+ 0.5\r\n                    except:\r\n                        deep_sim = 0.5\r\n                    deepwalk_similarity[key_tmp] = deep_sim\r\n\r\n                #txt\r\n                if key_tmp in txt_similarity:\r\n                    txt_sim = txt_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        txt_sim = 0.5 * txt_model.similarity(str(item), str(relate_item))+ 0.5\r\n                    except:\r\n                        txt_sim = 0.5\r\n                    txt_similarity[key_tmp] = txt_sim\r\n\r\n                '''\r\n                WIJ\r\n                The meaning of each columns:\r\n                {'sim': 0,------------------------0\r\n                  'item_cf': 0,-------------------1\r\n                  'item_cf_weighted': 0,----------2\r\n                  'time_diff': np.inf,------------3\r\n                  'loc_diff': np.inf,-------------4\r\n                  'node_sim_max': -1e8,-----------5\r\n                  'node_sim_sum':0,---------------6\r\n                  'deep_sim_max': -1e8,-----------7\r\n                  'deep_sim_sum':0----------------8\r\n         
                                 }\r\n                '''\r\n                \r\n                if loc1 - loc2 > 0:\r\n                    sim_item[item][relate_item][0] += (node_sim**2)*deep_sim*txt_sim * 0.8 * max(0.5, (0.9 ** (loc1 - loc2 - 1))) * (\r\n                        max(0.5, 1 / (1 + delta_t))) / (math.log(len(users) + 1) * math.log(\r\n                        1 + user_item_len))\r\n                else:                 \r\n                    sim_item[item][relate_item][0] += (node_sim**2)*deep_sim*txt_sim * 1.0 * max(0.5, (0.9 ** (loc2 - loc1 - 1))) * (\r\n                        max(0.5, 1 / (1 + delta_t))) / (math.log(len(users) + 1) * math.log(\r\n                        1 + user_item_len))\r\n                \r\n                if delta_t < sim_item[item][relate_item][3]:\r\n                    sim_item[item][relate_item][3] = delta_t\r\n                if delta_loc < sim_item[item][relate_item][4]:\r\n                    sim_item[item][relate_item][4] = delta_loc\r\n                sim_item[item][relate_item][1] += 1\r\n                sim_item[item][relate_item][2] += (0.8**(loc2-loc1-1)) * (1 - (t2 - t1) * 2000) / math.log(1 + len(items))\r\n                \r\n                if node_sim > sim_item[item][relate_item][5]:\r\n                    sim_item[item][relate_item][5] = node_sim\r\n                sim_item[item][relate_item][6] += node_sim\r\n                \r\n                if deep_sim > sim_item[item][relate_item][7]:\r\n                    sim_item[item][relate_item][7] = deep_sim\r\n                sim_item[item][relate_item][8] += deep_sim\r\n                \r\n                \r\n\r\n    sim_item_corr = sim_item.copy()\r\n    for i, related_items in tqdm(sim_item.items()):\r\n        for j, cij in related_items.items():\r\n            cosine_sim = cij[0] / ((item_cnt[i] * item_cnt[j]) ** 0.2)\r\n            sim_item_corr[i][j][0] = cosine_sim\r\n            sim_item_corr[i][j] = [myround(x, 4) for x in sim_item_corr[i][j]]\r\n\r\n\r\n    return sim_item_corr, user_item_dict, user_time_dict, item_dic, item_time_dict\r\n\r\n\r\ndef recommend(sim_item_corr, user_item_dict, user_id, times, item_dict, item_time_dict, top_k, item_num):\r\n    '''\r\n    input:item_sim_list, user_item, uid, 500, 50\r\n    # 用户历史序列中的所有商品均有关联商品,整合这些关联商品,进行相似性排序\r\n    '''\r\n    rank = {}\r\n    interacted_items = user_item_dict[user_id]\r\n    interacted_items = interacted_items[::-1]\r\n    times = times[::-1]\r\n    t0 = times[0]\r\n    for loc, i in enumerate(interacted_items):\r\n        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1][0], reverse=True)[0:top_k]:\r\n            if j not in interacted_items:\r\n                rank.setdefault(j, [0,0,0,np.inf,np.inf,np.inf,np.inf,np.inf,-1e8,0,-1e8,0])\r\n                '''\r\n                RANK\r\n                {'sim': 0,---------------------------------0\r\n                'item_cf': 0,------------------------------1\r\n                'item_cf_weighted': 0,---------------------2\r\n                'time_diff': np.inf,-----------------------3\r\n                'loc_diff': np.inf,------------------------4\r\n                # Some feature generated by recall\r\n                'time_diff_recall': np.inf,----------------5\r\n                'time_diff_recall_1': np.inf,--------------6\r\n                'loc_diff_recall': np.inf,-----------------7\r\n                # Nodesim and Deepsim\r\n                  'node_sim_max': -1e8,--------------------8\r\n                  
'node_sim_sum':0,------------------------9\r\n                  'deep_sim_max': -1e8,--------------------10\r\n                  'deep_sim_sum':0,------------------------11\r\n                                          }\r\n                '''\r\n                t1 = times[loc]\r\n                t2 = item_time_dict[j][0]\r\n                delta_t1 = abs(t0 - t1) * 650000\r\n                delta_t2 = abs(t0 - t2) * 650000\r\n                alpha = max(0.2, 1 / (1 + item_dict[j]))\r\n                beta = max(0.5, (0.9 ** loc))\r\n                theta = max(0.5, 1 / (1 + delta_t1))\r\n                gamma = max(0.5, 1 / (1 + delta_t2))\r\n                \r\n                '''\r\n                RANK\r\n                {'sim': 0,---------------------------------0\r\n                'item_cf': 0,------------------------------1\r\n                'item_cf_weighted': 0,---------------------2\r\n                'time_diff': np.inf,-----------------------3\r\n                'loc_diff': np.inf,------------------------4\r\n                # Some feature generated by recall\r\n                'time_diff_recall': np.inf,----------------5\r\n                'time_diff_recall_1': np.inf,--------------6\r\n                'loc_diff_recall': np.inf,-----------------7\r\n                # Nodesim and Deepsim\r\n                  'node_sim_max': -1e8,--------------------8\r\n                  'node_sim_sum':0,------------------------9\r\n                  'deep_sim_max': -1e8,--------------------10\r\n                  'deep_sim_sum':0,------------------------11\r\n                                          }\r\n                '''\r\n                \r\n                '''\r\n                WIJ\r\n                The meaning of each columns:\r\n                {'sim': 0,------------------------0\r\n                  'item_cf': 0,-------------------1\r\n                  'item_cf_weighted': 0,----------2\r\n                  'time_diff': np.inf,------------3\r\n                  'loc_diff': np.inf,-------------4\r\n                  'node_sim_max': -1e8,-----------5\r\n                  'node_sim_sum':0,---------------6\r\n                  'deep_sim_max': -1e8,-----------7\r\n                  'deep_sim_sum':0----------------8\r\n                                          }\r\n                '''\r\n                \r\n\r\n                rank[j][0] += myround(wij[0] * (alpha ** 2) * (beta) * (theta ** 2) * gamma, 4)\r\n                rank[j][1] += wij[1]\r\n                rank[j][2] += wij[2]\r\n                \r\n                if wij[3] < rank[j][3]:\r\n                    rank[j][3] = wij[3]\r\n                if wij[4] < rank[j][4]:\r\n                    rank[j][4] = wij[4]\r\n                if delta_t1 < rank[j][5]:\r\n                    rank[j][5] = myround(delta_t1, 4)\r\n                if delta_t2 < rank[j][6]:\r\n                    rank[j][6] = myround(delta_t2, 4)\r\n                if loc < rank[j][7]:\r\n                    rank[j][7] = loc\r\n                    \r\n                if wij[5] > rank[j][8]:\r\n                    rank[j][8] = wij[5]\r\n                rank[j][9] += wij[6] / wij[1]\r\n                \r\n                if wij[7] > rank[j][10]:\r\n                    rank[j][10] = wij[7]\r\n                rank[j][11] += wij[8] / wij[1]\r\n                \r\n    return sorted(rank.items(), key=lambda d: d[1][0], reverse=True)[:item_num]\r\n\r\n\r\n# In[18]:\r\n\r\n\r\nnow_phase = 9\r\nheader = 'model_1'\r\ntxt_similarity = {}\r\ndeepwalk_similarity = 
{}\r\nnodewalk_similarity = {}\r\noffline = \"./user_data/model_1/\"\r\nout_path = './user_data/model_1/new_similarity/'\r\n\r\nprint(\"start\")\r\nprint(\"read sim\")\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format(offline + 'node2vec_' + header + '.bin',binary=True)\r\n\r\ndeepwalk_model = KeyedVectors.load_word2vec_format(offline + 'deepwalk_' + header + '.bin',binary=True)\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/w2v_txt_vec.txt')\r\n\r\n\r\n# In[19]:\r\n\r\n\r\nrecom_item = []\r\nfor phase in range(now_phase + 1):\r\n    a = time.time()\r\n    history_list = []\r\n    for i in range(now_phase + 1):\r\n        click_train = pd.read_csv(offline + header + '_train_click-{}.csv'.format(i), header=None,\r\n                                  names=['user_id', 'item_id', 'time'])\r\n        click_test = pd.read_csv(offline + header + '_test_click-{}.csv'.format(i), header=None,\r\n                                 names=['user_id', 'item_id', 'time'])\r\n        all_click = click_train.append(click_test)\r\n        history_list.append(all_click)\r\n\r\n    qtime_test = pd.read_csv(offline + header + '_test_qtime-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n\r\n    print('phase:', phase)\r\n    time_diff = max(history_list[now_phase]['time']) - min(history_list[0]['time'])\r\n    for i in range(phase + 1, now_phase + 1):\r\n        history_list[i]['time'] = history_list[i]['time'] - time_diff\r\n\r\n    whole_click = pd.DataFrame()\r\n    for i in range(now_phase + 1):\r\n        whole_click = whole_click.append(history_list[i])\r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n\r\n    item_sim_list, user_item, user_time_dict, item_dic, item_time_dict = get_sim_item(whole_click,\r\n                                                                                      'user_id',\r\n                                                                                      'item_id'\r\n                                                                                      )       \r\n\r\n\r\n    print(\"phase finish time:{:6.4f} mins\".format((time.time() - a) / 60))\r\n    \r\n    for user in tqdm(qtime_test['user_id'].unique()):\r\n        if user in user_time_dict:\r\n            times = user_time_dict[user]\r\n            rank_item = recommend(item_sim_list, user_item, user, times, item_dic, item_time_dict, 500, 1000)\r\n            for j in rank_item:\r\n                recom_item.append([user, int(j[0])] + j[1])    \r\n                \r\n    for i, related_items in tqdm(item_sim_list.items()):\r\n        for j, cij in related_items.items():\r\n            item_sim_list[i][j] = cij[0]\r\n    \r\n    write_file = open(out_path+'itemCF_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_sim_list, write_file)\r\n    write_file.close() \r\n\r\n    write_file = open(out_path+'user2item_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(user_item, write_file)\r\n    write_file.close()     \r\n\r\n    write_file = open(out_path+'item2cnt_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_dic, write_file)\r\n    write_file.close() \r\n\r\n    write_file = open(out_path+'userTime'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(user_time_dict, write_file)\r\n    write_file.close()         \r\n\r\n    write_file = 
open(out_path+'itemTime'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_time_dict, write_file)\r\n    write_file.close()  \r\n    \r\n    write_file = open(out_path+'recom_item'+'.pkl', 'wb')\r\n    pickle.dump(recom_item, write_file)\r\n    write_file.close() \r\n\r\n    \r\n    del item_sim_list\r\n    del user_item\r\n    del user_time_dict\r\n    del item_dic\r\n    del item_time_dict\r\n    gc.collect()\r\n\r\n"
  },
  {
    "path": "code/2_Similarity/01_itemCF_Mundane_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[30]:\r\n\r\n\r\nfrom __future__ import division\r\nfrom __future__ import print_function\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\nimport os\r\nimport math\r\nimport time\r\nimport random\r\nimport joblib\r\nimport itertools\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom tqdm import tqdm\r\nfrom collections import defaultdict\r\nimport pickle\r\nfrom multiprocessing import Pool as ProcessPool\r\nimport json\r\n\r\n\r\n# In[31]:\r\n\r\n\r\nrandom.seed(2020)\r\npd.set_option('display.unicode.ambiguous_as_wide', True)\r\npd.set_option('display.unicode.east_asian_width', True)\r\npd.set_option('display.max_columns', None)\r\npd.set_option('display.max_rows', None)\r\npd.set_option(\"display.max_colwidth\", 100)\r\npd.set_option('display.width', 1000)\r\n\r\n\r\n# In[32]:\r\n\r\n\r\ndef process(each_item):\r\n    dict_tmp = item_sim_list[each_item]\r\n    for j in dict_tmp:\r\n        dict_tmp[j] = round(dict_tmp[j],4)\r\n        dict_tmp[j] = round(dict_tmp[j],4)\r\n    \r\n    return (each_item,dict_tmp)\r\n\r\ndef myround(x, thres):\r\n    temp = 10**thres\r\n    return int(x * temp) / temp\r\n\r\n\r\n# In[33]:\r\n\r\n\r\nmyround = lambda x,thres : int(x * 10**thres) / 10**thres\r\n\r\n\r\n# In[34]:\r\n\r\n\r\ndef phase_predict(df, pred_col, top_fill, topk=50):\r\n    \"\"\"recom_df, 'sim', top50_click, \"click_valid\"\r\n    \"\"\"\r\n    top_fill = [int(t) for t in top_fill.split(',')]\r\n    top_fill = top_fill[:topk]\r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]\r\n    ids = list(df['user_id'].unique())\r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])\r\n    fill_df.sort_values('user_id', inplace=True)\r\n    fill_df['item_id'] = top_fill * len(ids)\r\n    fill_df[pred_col] = scores * len(ids)\r\n    df = df.append(fill_df)\r\n    df.sort_values(pred_col, ascending=False, inplace=True)\r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')\r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)\r\n    df.sort_values(\"rank\", inplace=True)\r\n    df = df[df[\"rank\"] <= topk]\r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',',\r\n                                                                                                   expand=True).reset_index()\r\n    return df\r\n\r\n\r\ndef get_sim_item(df_, user_col, item_col):#, nodewalk_model,deepwalk_model,txt_vec_model):\r\n    global txt_similarity\r\n    global deepwalk_similarity\r\n    global nodewalk_similarity\r\n\r\n    df = df_.copy()\r\n    user_item_ = df.groupby(user_col)[item_col].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_[user_col], user_item_[item_col]))\r\n\r\n    user_time_ = df.groupby(user_col)['time'].agg(list).reset_index()  # 引入时间因素\r\n    user_time_dict = dict(zip(user_time_[user_col], user_time_['time']))\r\n\r\n    item_user_ = df.groupby(item_col)[user_col].agg(set).reset_index()\r\n    item_user_dict = dict(zip(item_user_[item_col], item_user_[user_col]))\r\n\r\n    item_dic = df[item_col].value_counts().to_dict()\r\n\r\n    df.sort_values('time', inplace=True)\r\n    df.drop_duplicates('item_id', keep='first', inplace=True)\r\n    item_time_ = df.groupby(item_col)['time'].agg(list).reset_index()  # 引入时间因素\r\n    item_time_dict = dict(zip(item_time_[item_col], item_time_['time']))\r\n\r\n\r\n    sim_item = {}\r\n    item_cnt = defaultdict(int)  # 
商品被点击次数\r\n    for user, items in tqdm(user_item_dict.items()):\r\n        for loc1, item in enumerate(items):\r\n            users = item_user_dict[item]\r\n            item_cnt[item] += 1\r\n            sim_item.setdefault(item, {})\r\n            user_item_len = len(items)\r\n            for loc2, relate_item in enumerate(items):\r\n                if item == relate_item:\r\n                    continue\r\n                t1 = user_time_dict[user][loc1]  # 点击时间提取\r\n                t2 = user_time_dict[user][loc2]\r\n                delta_t = abs(t1 - t2) * 650000\r\n                delta_loc = abs(loc1 - loc2)\r\n                '''\r\n                The meaning of each columns:\r\n                {'sim': 0,------------------------0\r\n                  'item_cf': 0,-------------------1\r\n                  'item_cf_weighted': 0,----------2\r\n                  'time_diff': np.inf,------------3\r\n                  'loc_diff': np.inf,-------------4\r\n                  'node_sim_max': -1e8,-----------5\r\n                  'node_sim_sum':0,---------------6\r\n                  'deep_sim_max': -1e8,-----------7\r\n                  'deep_sim_sum':0----------------8\r\n                                          }\r\n                '''\r\n                \r\n                sim_item[item].setdefault(relate_item,\r\n                                          [0,0,0,np.inf,np.inf,-1e8,0,-1e8,0]\r\n                                         )\r\n                \r\n                \r\n                key = [str(int(item)), str(int(relate_item))]\r\n                key_tmp = \"_\".join(key)\r\n                \r\n                ##nodewalk\r\n                if key_tmp in nodewalk_similarity:\r\n                    node_sim = nodewalk_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        node_sim = 0.5 * nodewalk_model.similarity(str(item), str(relate_item))+ 0.5\r\n                    except:\r\n                        node_sim = 0.5\r\n                    nodewalk_similarity[key_tmp] = node_sim\r\n                    \r\n                ##deepwalk\r\n                if key_tmp in deepwalk_similarity:\r\n                    deep_sim = deepwalk_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        deep_sim = 0.5 * deepwalk_model.similarity(str(item), str(relate_item))+ 0.5\r\n                    except:\r\n                        deep_sim = 0.5\r\n                    deepwalk_similarity[key_tmp] = deep_sim\r\n\r\n                #txt\r\n                if key_tmp in txt_similarity:\r\n                    txt_sim = txt_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        txt_sim = 0.5 * txt_model.similarity(str(item), str(relate_item))+ 0.5\r\n                    except:\r\n                        txt_sim = 0.5\r\n                    txt_similarity[key_tmp] = txt_sim\r\n\r\n                '''\r\n                WIJ\r\n                The meaning of each columns:\r\n                {'sim': 0,------------------------0\r\n                  'item_cf': 0,-------------------1\r\n                  'item_cf_weighted': 0,----------2\r\n                  'time_diff': np.inf,------------3\r\n                  'loc_diff': np.inf,-------------4\r\n                  'node_sim_max': -1e8,-----------5\r\n                  'node_sim_sum':0,---------------6\r\n                  'deep_sim_max': -1e8,-----------7\r\n                  'deep_sim_sum':0----------------8\r\n         
                                 }\r\n                '''\r\n                \r\n                if loc1 - loc2 > 0:\r\n                    sim_item[item][relate_item][0] += (node_sim**2)*deep_sim*txt_sim * 0.8 * max(0.5, (0.9 ** (loc1 - loc2 - 1))) * (\r\n                        max(0.5, 1 / (1 + delta_t))) / (math.log(len(users) + 1) * math.log(\r\n                        1 + user_item_len))\r\n                else:                 \r\n                    sim_item[item][relate_item][0] += (node_sim**2)*deep_sim*txt_sim * 1.0 * max(0.5, (0.9 ** (loc2 - loc1 - 1))) * (\r\n                        max(0.5, 1 / (1 + delta_t))) / (math.log(len(users) + 1) * math.log(\r\n                        1 + user_item_len))\r\n                \r\n                if delta_t < sim_item[item][relate_item][3]:\r\n                    sim_item[item][relate_item][3] = delta_t\r\n                if delta_loc < sim_item[item][relate_item][4]:\r\n                    sim_item[item][relate_item][4] = delta_loc\r\n                sim_item[item][relate_item][1] += 1\r\n                sim_item[item][relate_item][2] += (0.8**(loc2-loc1-1)) * (1 - (t2 - t1) * 2000) / math.log(1 + len(items))\r\n                \r\n                if node_sim > sim_item[item][relate_item][5]:\r\n                    sim_item[item][relate_item][5] = node_sim\r\n                sim_item[item][relate_item][6] += node_sim\r\n                \r\n                if deep_sim > sim_item[item][relate_item][7]:\r\n                    sim_item[item][relate_item][7] = deep_sim\r\n                sim_item[item][relate_item][8] += deep_sim\r\n                \r\n                \r\n\r\n    sim_item_corr = sim_item.copy()\r\n    for i, related_items in tqdm(sim_item.items()):\r\n        for j, cij in related_items.items():\r\n            cosine_sim = cij[0] / ((item_cnt[i] * item_cnt[j]) ** 0.2)\r\n            sim_item_corr[i][j][0] = cosine_sim\r\n            sim_item_corr[i][j] = [myround(x, 4) for x in sim_item_corr[i][j]]\r\n\r\n\r\n    return sim_item_corr, user_item_dict, user_time_dict, item_dic, item_time_dict\r\n\r\n\r\ndef recommend(sim_item_corr, user_item_dict, user_id, times, item_dict, item_time_dict, top_k, item_num):\r\n    '''\r\n    input:item_sim_list, user_item, uid, 500, 50\r\n    # 用户历史序列中的所有商品均有关联商品,整合这些关联商品,进行相似性排序\r\n    '''\r\n    rank = {}\r\n    interacted_items = user_item_dict[user_id]\r\n    interacted_items = interacted_items[::-1]\r\n    times = times[::-1]\r\n    t0 = times[0]\r\n    for loc, i in enumerate(interacted_items):\r\n        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1][0], reverse=True)[0:top_k]:\r\n            if j not in interacted_items:\r\n                rank.setdefault(j, [0,0,0,np.inf,np.inf,np.inf,np.inf,np.inf,-1e8,0,-1e8,0])\r\n                '''\r\n                RANK\r\n                {'sim': 0,---------------------------------0\r\n                'item_cf': 0,------------------------------1\r\n                'item_cf_weighted': 0,---------------------2\r\n                'time_diff': np.inf,-----------------------3\r\n                'loc_diff': np.inf,------------------------4\r\n                # Some feature generated by recall\r\n                'time_diff_recall': np.inf,----------------5\r\n                'time_diff_recall_1': np.inf,--------------6\r\n                'loc_diff_recall': np.inf,-----------------7\r\n                # Nodesim and Deepsim\r\n                  'node_sim_max': -1e8,--------------------8\r\n                  
                alpha = max(0.2, 1 / (1 + item_dict[j]))\r\n                beta = max(0.5, (0.9 ** loc))\r\n                theta = max(0.5, 1 / (1 + delta_t1))\r\n                gamma = max(0.5, 1 / (1 + delta_t2))\r\n                \r\n                # wij follows the WIJ column layout documented in get_sim_item.\r\n                rank[j][0] += myround(wij[0] * (alpha ** 2) * (beta) * (theta ** 2) * gamma, 4)\r\n                rank[j][1] += wij[1]\r\n                rank[j][2] += wij[2]\r\n                \r\n                if wij[3] < rank[j][3]:\r\n                    rank[j][3] = wij[3]\r\n                if wij[4] < rank[j][4]:\r\n                    rank[j][4] = wij[4]\r\n                if delta_t1 < rank[j][5]:\r\n                    rank[j][5] = myround(delta_t1, 4)\r\n                if delta_t2 < rank[j][6]:\r\n                    rank[j][6] = myround(delta_t2, 4)\r\n                if loc < rank[j][7]:\r\n                    rank[j][7] = loc\r\n                    \r\n                if wij[5] > rank[j][8]:\r\n                    rank[j][8] = wij[5]\r\n                rank[j][9] += wij[6] / wij[1]  # mean nodewalk similarity\r\n                \r\n                if wij[7] > rank[j][10]:\r\n                    rank[j][10] = wij[7]\r\n                rank[j][11] += wij[8] / wij[1]  # mean deepwalk similarity\r\n                \r\n    return sorted(rank.items(), key=lambda d: d[1][0], reverse=True)[:item_num]\r\n\r\n\r\n# In[35]:\r\n\r\n\r\nnow_phase = 9\r\nheader = 'offline'\r\ntxt_similarity = {}\r\ndeepwalk_similarity = 
{}\r\nnodewalk_similarity = {}\r\noffline = \"./user_data/offline/\"\r\nout_path = './user_data/offline/new_similarity/'\r\n\r\nprint(\"start\")\r\nprint(\"read sim\")\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format(offline + 'node2vec_' + header + '.bin',binary=True)\r\n\r\ndeepwalk_model = KeyedVectors.load_word2vec_format(offline + 'deepwalk_' + header + '.bin',binary=True)\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/w2v_txt_vec.txt')\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nrecom_item = []\r\nfor phase in range(0, now_phase + 1):\r\n    a = time.time()\r\n    history_list = []\r\n    for i in range(now_phase + 1):\r\n        click_train = pd.read_csv(offline + header + '_train_click-{}.csv'.format(i), header=None,\r\n                                  names=['user_id', 'item_id', 'time'])\r\n        click_test = pd.read_csv(offline + header + '_test_click-{}.csv'.format(i), header=None,\r\n                                 names=['user_id', 'item_id', 'time'])\r\n        all_click = click_train.append(click_test)\r\n        history_list.append(all_click)\r\n\r\n    qtime_test = pd.read_csv(offline + header + '_test_qtime-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n\r\n    print('phase:', phase)\r\n    time_diff = max(history_list[now_phase]['time']) - min(history_list[0]['time'])\r\n    for i in range(phase + 1, now_phase + 1):\r\n        history_list[i]['time'] = history_list[i]['time'] - time_diff\r\n\r\n    whole_click = pd.DataFrame()\r\n    for i in range(now_phase + 1):\r\n        whole_click = whole_click.append(history_list[i])\r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n\r\n    item_sim_list, user_item, user_time_dict, item_dic, item_time_dict = get_sim_item(whole_click,\r\n                                                                                      'user_id',\r\n                                                                                      'item_id'\r\n                                                                                      )       \r\n\r\n\r\n    print(\"phase finish time:{:6.4f} mins\".format((time.time() - a) / 60))\r\n    \r\n    for user in tqdm(qtime_test['user_id'].unique()):\r\n        if user in user_time_dict:\r\n            times = user_time_dict[user]\r\n            rank_item = recommend(item_sim_list, user_item, user, times, item_dic, item_time_dict, 500, 1000)\r\n            for j in rank_item:\r\n                recom_item.append([user, int(j[0])] + j[1])    \r\n                \r\n    for i, related_items in tqdm(item_sim_list.items()):\r\n        for j, cij in related_items.items():\r\n            item_sim_list[i][j] = cij[0]\r\n    \r\n    write_file = open(out_path+'itemCF_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_sim_list, write_file)\r\n    write_file.close() \r\n\r\n    write_file = open(out_path+'user2item_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(user_item, write_file)\r\n    write_file.close()     \r\n\r\n    write_file = open(out_path+'item2cnt_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_dic, write_file)\r\n    write_file.close() \r\n\r\n    write_file = open(out_path+'userTime'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(user_time_dict, write_file)\r\n    write_file.close()         \r\n\r\n    write_file = 
open(out_path+'itemTime'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_time_dict, write_file)\r\n    write_file.close()  \r\n    \r\n    write_file = open(out_path+'recom_item'+'.pkl', 'wb')\r\n    pickle.dump(recom_item, write_file)\r\n    write_file.close() \r\n\r\n    \r\n    del item_sim_list\r\n    del user_item\r\n    del user_time_dict\r\n    del item_dic\r\n    del item_time_dict\r\n    gc.collect()\r\n\r\n"
  },
  {
    "path": "code/2_Similarity/01_itemCF_Mundane_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nfrom __future__ import division\r\nfrom __future__ import print_function\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\nimport os\r\nimport math\r\nimport time\r\nimport random\r\nimport joblib\r\nimport itertools\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom tqdm import tqdm\r\nfrom collections import defaultdict\r\nimport pickle\r\nfrom multiprocessing import Pool as ProcessPool\r\nimport json\r\nimport time\r\n\r\n\r\n# In[2]:\r\n\r\n\r\n# print('俺睡着了')\r\n# time.sleep(6 * 60 * 60)\r\n# print('俺睡醒了')\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nrandom.seed(2020)\r\npd.set_option('display.unicode.ambiguous_as_wide', True)\r\npd.set_option('display.unicode.east_asian_width', True)\r\npd.set_option('display.max_columns', None)\r\npd.set_option('display.max_rows', None)\r\npd.set_option(\"display.max_colwidth\", 100)\r\npd.set_option('display.width', 1000)\r\n\r\n\r\n# In[4]:\r\n\r\n\r\ndef process(each_item):\r\n    dict_tmp = item_sim_list[each_item]\r\n    for j in dict_tmp:\r\n        dict_tmp[j] = round(dict_tmp[j],4)\r\n        dict_tmp[j] = round(dict_tmp[j],4)\r\n    \r\n    return (each_item,dict_tmp)\r\n\r\ndef myround(x, thres):\r\n    temp = 10**thres\r\n    return int(x * temp) / temp\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nmyround = lambda x,thres : int(x * 10**thres) / 10**thres\r\n\r\n\r\n# In[6]:\r\n\r\n\r\ndef phase_predict(df, pred_col, top_fill, topk=50):\r\n    \"\"\"recom_df, 'sim', top50_click, \"click_valid\"\r\n    \"\"\"\r\n    top_fill = [int(t) for t in top_fill.split(',')]\r\n    top_fill = top_fill[:topk]\r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]\r\n    ids = list(df['user_id'].unique())\r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])\r\n    fill_df.sort_values('user_id', inplace=True)\r\n    fill_df['item_id'] = top_fill * len(ids)\r\n    fill_df[pred_col] = scores * len(ids)\r\n    df = df.append(fill_df)\r\n    df.sort_values(pred_col, ascending=False, inplace=True)\r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')\r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)\r\n    df.sort_values(\"rank\", inplace=True)\r\n    df = df[df[\"rank\"] <= topk]\r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',',\r\n                                                                                                   expand=True).reset_index()\r\n    return df\r\n\r\n\r\ndef get_sim_item(df_, user_col, item_col):#, nodewalk_model,deepwalk_model,txt_vec_model):\r\n    global txt_similarity\r\n    global deepwalk_similarity\r\n    global nodewalk_similarity\r\n\r\n    df = df_.copy()\r\n    user_item_ = df.groupby(user_col)[item_col].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_[user_col], user_item_[item_col]))\r\n\r\n    user_time_ = df.groupby(user_col)['time'].agg(list).reset_index()  # 引入时间因素\r\n    user_time_dict = dict(zip(user_time_[user_col], user_time_['time']))\r\n\r\n    item_user_ = df.groupby(item_col)[user_col].agg(set).reset_index()\r\n    item_user_dict = dict(zip(item_user_[item_col], item_user_[user_col]))\r\n\r\n    item_dic = df[item_col].value_counts().to_dict()\r\n\r\n    df.sort_values('time', inplace=True)\r\n    df.drop_duplicates('item_id', keep='first', inplace=True)\r\n    item_time_ = df.groupby(item_col)['time'].agg(list).reset_index()  # 引入时间因素\r\n    item_time_dict = 
def get_sim_item(df_, user_col, item_col):\r\n    global txt_similarity\r\n    global deepwalk_similarity\r\n    global nodewalk_similarity\r\n\r\n    df = df_.copy()\r\n    user_item_ = df.groupby(user_col)[item_col].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_[user_col], user_item_[item_col]))\r\n\r\n    user_time_ = df.groupby(user_col)['time'].agg(list).reset_index()  # bring in the time signal\r\n    user_time_dict = dict(zip(user_time_[user_col], user_time_['time']))\r\n\r\n    item_user_ = df.groupby(item_col)[user_col].agg(set).reset_index()\r\n    item_user_dict = dict(zip(item_user_[item_col], item_user_[user_col]))\r\n\r\n    item_dic = df[item_col].value_counts().to_dict()\r\n\r\n    df.sort_values('time', inplace=True)\r\n    df.drop_duplicates('item_id', keep='first', inplace=True)\r\n    item_time_ = df.groupby(item_col)['time'].agg(list).reset_index()  # first click time of each item\r\n    item_time_dict = dict(zip(item_time_[item_col], item_time_['time']))\r\n\r\n\r\n    sim_item = {}\r\n    item_cnt = defaultdict(int)  # number of clicks per item\r\n    for user, items in tqdm(user_item_dict.items()):\r\n        for loc1, item in enumerate(items):\r\n            users = item_user_dict[item]\r\n            item_cnt[item] += 1\r\n            sim_item.setdefault(item, {})\r\n            user_item_len = len(items)\r\n            for loc2, relate_item in enumerate(items):\r\n                if item == relate_item:\r\n                    continue\r\n                t1 = user_time_dict[user][loc1]  # click timestamps of the two items\r\n                t2 = user_time_dict[user][loc2]\r\n                delta_t = abs(t1 - t2) * 650000\r\n                delta_loc = abs(loc1 - loc2)\r\n                '''\r\n                The meaning of each column (WIJ):\r\n                {'sim': 0,------------------------0\r\n                  'item_cf': 0,-------------------1\r\n                  'item_cf_weighted': 0,----------2\r\n                  'time_diff': np.inf,------------3\r\n                  'loc_diff': np.inf,-------------4\r\n                  'node_sim_max': -1e8,-----------5\r\n                  'node_sim_sum':0,---------------6\r\n                  'deep_sim_max': -1e8,-----------7\r\n                  'deep_sim_sum':0----------------8\r\n                                          }\r\n                '''\r\n                \r\n                sim_item[item].setdefault(relate_item,\r\n                                          [0,0,0,np.inf,np.inf,-1e8,0,-1e8,0]\r\n                                         )\r\n                \r\n                \r\n                key = [str(int(item)), str(int(relate_item))]\r\n                key_tmp = \"_\".join(key)\r\n                \r\n                ##nodewalk (node2vec)\r\n                if key_tmp in nodewalk_similarity:\r\n                    node_sim = nodewalk_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        node_sim = 0.5 * nodewalk_model.similarity(str(item), str(relate_item)) + 0.5\r\n                    except KeyError:  # item missing from the embedding vocabulary\r\n                        node_sim = 0.5\r\n                    nodewalk_similarity[key_tmp] = node_sim\r\n                    \r\n                ##deepwalk\r\n                if key_tmp in deepwalk_similarity:\r\n                    deep_sim = deepwalk_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        deep_sim = 0.5 * deepwalk_model.similarity(str(item), str(relate_item)) + 0.5\r\n                    except KeyError:\r\n                        deep_sim = 0.5\r\n                    deepwalk_similarity[key_tmp] = deep_sim\r\n\r\n                ##txt\r\n                if key_tmp in txt_similarity:\r\n                    txt_sim = txt_similarity[key_tmp]\r\n                else:\r\n                    try:\r\n                        txt_sim = 0.5 * txt_model.similarity(str(item), str(relate_item)) + 0.5\r\n                    except KeyError:\r\n                        txt_sim = 0.5\r\n                    txt_similarity[key_tmp] = txt_sim\r\n\r\n
                if loc1 - loc2 > 0:\r\n                    sim_item[item][relate_item][0] += (node_sim**2)*deep_sim*txt_sim * 0.8 * max(0.5, (0.9 ** (loc1 - loc2 - 1))) * (\r\n                        max(0.5, 1 / (1 + delta_t))) / (math.log(len(users) + 1) * math.log(\r\n                        1 + user_item_len))\r\n                else:\r\n                    sim_item[item][relate_item][0] += (node_sim**2)*deep_sim*txt_sim * 1.0 * max(0.5, (0.9 ** (loc2 - loc1 - 1))) * (\r\n                        max(0.5, 1 / (1 + delta_t))) / (math.log(len(users) + 1) * math.log(\r\n                        1 + user_item_len))\r\n                \r\n                if delta_t < sim_item[item][relate_item][3]:\r\n                    sim_item[item][relate_item][3] = delta_t\r\n                if delta_loc < sim_item[item][relate_item][4]:\r\n                    sim_item[item][relate_item][4] = delta_loc\r\n                sim_item[item][relate_item][1] += 1\r\n                sim_item[item][relate_item][2] += (0.8**(loc2-loc1-1)) * (1 - (t2 - t1) * 2000) / math.log(1 + len(items))\r\n                \r\n                if node_sim > sim_item[item][relate_item][5]:\r\n                    sim_item[item][relate_item][5] = node_sim\r\n                sim_item[item][relate_item][6] += node_sim\r\n                \r\n                if deep_sim > sim_item[item][relate_item][7]:\r\n                    sim_item[item][relate_item][7] = deep_sim\r\n                sim_item[item][relate_item][8] += deep_sim\r\n                \r\n                \r\n\r\n    sim_item_corr = sim_item.copy()\r\n    for i, related_items in tqdm(sim_item.items()):\r\n        for j, cij in related_items.items():\r\n            cosine_sim = cij[0] / ((item_cnt[i] * item_cnt[j]) ** 0.2)  # popularity-penalised normalisation\r\n            sim_item_corr[i][j][0] = cosine_sim\r\n            sim_item_corr[i][j] = [myround(x, 4) for x in sim_item_corr[i][j]]\r\n\r\n\r\n    return sim_item_corr, user_item_dict, user_time_dict, item_dic, item_time_dict\r\n\r\n\r\ndef recommend(sim_item_corr, user_item_dict, user_id, times, item_dict, item_time_dict, top_k, item_num):\r\n    '''\r\n    Called as: recommend(item_sim_list, user_item, user, times, item_dic, item_time_dict, 500, 1000).\r\n    Every item in the user's history has related items; merge those related\r\n    items and rank them by the aggregated similarity.\r\n    '''\r\n    rank = {}\r\n    interacted_items = user_item_dict[user_id]\r\n    interacted_items = interacted_items[::-1]\r\n    times = times[::-1]\r\n    t0 = times[0]\r\n    for loc, i in enumerate(interacted_items):\r\n        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1][0], reverse=True)[0:top_k]:\r\n            if j not in interacted_items:\r\n                rank.setdefault(j, [0,0,0,np.inf,np.inf,np.inf,np.inf,np.inf,-1e8,0,-1e8,0])\r\n                '''\r\n                The meaning of each column (RANK):\r\n                {'sim': 0,---------------------------------0\r\n                'item_cf': 0,------------------------------1\r\n                'item_cf_weighted': 0,---------------------2\r\n                'time_diff': np.inf,-----------------------3\r\n                'loc_diff': np.inf,------------------------4\r\n                # Some features generated during recall\r\n                'time_diff_recall': np.inf,----------------5\r\n                'time_diff_recall_1': np.inf,--------------6\r\n                'loc_diff_recall': np.inf,-----------------7\r\n                # Nodewalk and deepwalk similarities\r\n                  'node_sim_max': -1e8,--------------------8\r\n                  'node_sim_sum':0,------------------------9\r\n                  'deep_sim_max': -1e8,--------------------10\r\n                  'deep_sim_sum':0,------------------------11\r\n                                          }\r\n                '''\r\n                t1 = times[loc]\r\n                t2 = item_time_dict[j][0]\r\n                delta_t1 = abs(t0 - t1) * 650000\r\n                delta_t2 = abs(t0 - t2) * 650000\r\n                alpha = max(0.2, 1 / (1 + item_dict[j]))  # inverse-popularity (debiasing) term\r\n                beta = max(0.5, (0.9 ** loc))  # positional decay\r\n                theta = max(0.5, 1 / (1 + delta_t1))  # recency of the historical click\r\n                gamma = max(0.5, 1 / (1 + delta_t2))  # recency of the candidate's first click\r\n                \r\n
                # wij follows the WIJ column layout documented in get_sim_item.\r\n                rank[j][0] += myround(wij[0] * (alpha ** 2) * (beta) * (theta ** 2) * gamma, 4)\r\n                rank[j][1] += wij[1]\r\n                rank[j][2] += wij[2]\r\n                \r\n                if wij[3] < rank[j][3]:\r\n                    rank[j][3] = wij[3]\r\n                if wij[4] < rank[j][4]:\r\n                    rank[j][4] = wij[4]\r\n                if delta_t1 < rank[j][5]:\r\n                    rank[j][5] = myround(delta_t1, 4)\r\n                if delta_t2 < rank[j][6]:\r\n                    rank[j][6] = myround(delta_t2, 4)\r\n                if loc < rank[j][7]:\r\n                    rank[j][7] = loc\r\n                    \r\n                if wij[5] > rank[j][8]:\r\n                    rank[j][8] = wij[5]\r\n                rank[j][9] += wij[6] / wij[1]  # mean nodewalk similarity\r\n                \r\n                if wij[7] > rank[j][10]:\r\n                    rank[j][10] = wij[7]\r\n                rank[j][11] += wij[8] / wij[1]  # mean deepwalk similarity\r\n                \r\n    return sorted(rank.items(), key=lambda d: d[1][0], 
reverse=True)[:item_num]\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nnow_phase = 9\r\nheader = 'underexpose'\r\ntxt_similarity = {}\r\ndeepwalk_similarity = {}\r\nnodewalk_similarity = {}\r\noffline = \"./user_data/dataset/\"\r\nout_path = './user_data/dataset/new_similarity/'\r\n\r\nprint(\"start\")\r\nprint(\"read sim\")\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format(offline + 'node2vec_' + header + '.bin',binary=True)\r\n\r\ndeepwalk_model = KeyedVectors.load_word2vec_format(offline + 'deepwalk_' + header + '.bin',binary=True)\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/w2v_txt_vec.txt')\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nrecom_item = []\r\nfor phase in range(now_phase + 1):\r\n    a = time.time()\r\n    history_list = []\r\n    for i in range(now_phase + 1):\r\n        click_train = pd.read_csv(offline + header + '_train_click-{}.csv'.format(i), header=None,\r\n                                  names=['user_id', 'item_id', 'time'])\r\n        click_test = pd.read_csv(offline + header + '_test_click-{}.csv'.format(i), header=None,\r\n                                 names=['user_id', 'item_id', 'time'])\r\n        all_click = click_train.append(click_test)\r\n        history_list.append(all_click)\r\n\r\n    qtime_test = pd.read_csv(offline + header + '_test_qtime-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n\r\n    print('phase:', phase)\r\n    time_diff = max(history_list[now_phase]['time']) - min(history_list[0]['time'])\r\n    for i in range(phase + 1, now_phase + 1):\r\n        history_list[i]['time'] = history_list[i]['time'] - time_diff\r\n\r\n    whole_click = pd.DataFrame()\r\n    for i in range(now_phase + 1):\r\n        whole_click = whole_click.append(history_list[i])\r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n\r\n    item_sim_list, user_item, user_time_dict, item_dic, item_time_dict = get_sim_item(whole_click,\r\n                                                                                      'user_id',\r\n                                                                                      'item_id'\r\n                                                                                      )       \r\n\r\n\r\n    print(\"phase finish time:{:6.4f} mins\".format((time.time() - a) / 60))\r\n    \r\n    for user in tqdm(qtime_test['user_id'].unique()):\r\n        if user in user_time_dict:\r\n            times = user_time_dict[user]\r\n            rank_item = recommend(item_sim_list, user_item, user, times, item_dic, item_time_dict, 500, 1000)\r\n            for j in rank_item:\r\n                recom_item.append([user, int(j[0])] + j[1])    \r\n                \r\n    for i, related_items in tqdm(item_sim_list.items()):\r\n        for j, cij in related_items.items():\r\n            item_sim_list[i][j] = cij[0]\r\n    \r\n    write_file = open(out_path+'itemCF_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_sim_list, write_file)\r\n    write_file.close() \r\n\r\n    write_file = open(out_path+'user2item_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(user_item, write_file)\r\n    write_file.close()     \r\n\r\n    write_file = open(out_path+'item2cnt_new'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_dic, write_file)\r\n    write_file.close() \r\n\r\n    write_file = 
open(out_path+'userTime'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(user_time_dict, write_file)\r\n    write_file.close()         \r\n\r\n    write_file = open(out_path+'itemTime'+str(phase)+'.pkl', 'wb')\r\n    pickle.dump(item_time_dict, write_file)\r\n    write_file.close()  \r\n    \r\n    write_file = open(out_path+'recom_item'+'.pkl', 'wb')\r\n    pickle.dump(recom_item, write_file)\r\n    write_file.close() \r\n\r\n    \r\n    del item_sim_list\r\n    del user_item\r\n    del user_time_dict\r\n    del item_dic\r\n    del item_time_dict\r\n    gc.collect()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nimport sys\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndel deepwalk_similarity\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndel nodewalk_similarity\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndel txt_similarity\r\n\r\n"
  },
  {
    "path": "code/2_Similarity/RA_Wu_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# # RA、AA一起运行的\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[2]:\r\n\r\n\r\nnow_phase = 9\r\n\r\ninput_path = './user_data/model_1/new_similarity/'\r\nout_path = './user_data/model_1/new_similarity/'\r\n\r\n\r\n\r\nfor num in range(now_phase+1):\r\n    \r\n    # 获取itemCF相似度\r\n    with open(input_path+'itemCF_new'+str(num)+'.pkl','rb') as f:\r\n        item_sim_list_tmp = pickle.load(f)  \r\n    \r\n    item_sim = {}\r\n    for item in item_sim_list_tmp:\r\n        item_sim.setdefault(item, {})\r\n        for related_item in item_sim_list_tmp[item]:\r\n            if item_sim_list_tmp[item][related_item] > 0.005:\r\n                item_sim[item][related_item] = item_sim_list_tmp[item][related_item]\r\n    \r\n    item_sim_list_tmp = []\r\n    \r\n    strengh_dict = dict()\r\n    print('Counting degree')\r\n    for item in tqdm(item_sim):\r\n        strengh_dict[item] = sum(item_sim[item].values())       \r\n        \r\n    strengh_AA_dict = dict()\r\n    print('Counting degree')\r\n    for item in tqdm(item_sim):\r\n        strengh_AA_dict[item] = math.log(1+sum(item_sim[item].values()) )\r\n        \r\n        \r\n    #RA\r\n    RA_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                RA_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        RA_sim[item1].setdefault(item2, 0)\r\n                        RA_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]/strengh_dict[item]\r\n    \r\n    \r\n    new_RA = dict()\r\n    for item1 in tqdm(RA_sim):\r\n        new_RA[item1] = {i: int(x * 1e3) / 1e3 for i, x in RA_sim[item1].items() if x > 1e-3}\r\n    \r\n    RA_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'RA_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_RA, write_file)\r\n    write_file.close() \r\n    \r\n        \r\n    new_RA = []\r\n    \r\n    \r\n    #RA\r\n    AA_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                AA_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        AA_sim[item1].setdefault(item2, 0)\r\n                        AA_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]/strengh_AA_dict[item]\r\n    \r\n    \r\n    new_AA = dict()\r\n    for item1 in tqdm(AA_sim):\r\n        new_AA[item1] = {i: int(x * 1e3) / 1e3 for i, x in AA_sim[item1].items() if x > 1e-3}\r\n    \r\n    AA_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'AA_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_AA, write_file)\r\n    write_file.close() \r\n    \r\n        \r\n    new_AA = []    \r\n    \r\n    \r\n    \r\n    \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ 
# # CN, HPI, HDI and LHN1 are computed in the same pass\r\n\r\n# In[3]:\r\n\r\n\r\nnow_phase = 9\r\n\r\ninput_path = './user_data/model_1/new_similarity/'\r\nout_path = './user_data/model_1/new_similarity/'\r\n\r\n\r\n\r\nfor num in range(now_phase+1):\r\n    \r\n    # load the itemCF similarity matrix\r\n    with open(input_path+'itemCF_new'+str(num)+'.pkl','rb') as f:\r\n        item_sim_list_tmp = pickle.load(f)  \r\n    \r\n    item_sim = {}\r\n    for item in item_sim_list_tmp:\r\n        item_sim.setdefault(item, {})\r\n        for related_item in item_sim_list_tmp[item]:\r\n            if item_sim_list_tmp[item][related_item] > 0.005:\r\n                item_sim[item][related_item] = item_sim_list_tmp[item][related_item]\r\n    \r\n    item_sim_list_tmp = []\r\n    \r\n    #CN\r\n    CN_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                CN_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        CN_sim[item1].setdefault(item2, 0)\r\n                        CN_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]\r\n    \r\n    \r\n    new_CN = dict()\r\n    for item1 in tqdm(CN_sim):\r\n        new_CN[item1] = {i: int(x * 1e3) / 1e3 for i, x in CN_sim[item1].items() if x > 1e-3}\r\n    \r\n    CN_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'CN_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_CN, write_file)\r\n    write_file.close() \r\n    \r\n    strengh_dict = dict()\r\n    print('Counting node strength')\r\n    for item in tqdm(item_sim):\r\n        strengh_dict[item] = sum(item_sim[item].values())     \r\n    \r\n
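    # Note: with s(k) the node strength computed above and CN the weighted\r\n    # common-neighbour score, the three variants below are\r\n    #     HPI(i, j)  = CN(i, j) / min(s(i), s(j))   (Hub Promoted Index)\r\n    #     HDI(i, j)  = CN(i, j) / max(s(i), s(j))   (Hub Depressed Index)\r\n    #     LHN1(i, j) = CN(i, j) / (s(i) * s(j))     (Leicht-Holme-Newman)\r\n    \r\n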
    #HPI\r\n    HPI_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        HPI_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            HPI_sim[item][related_item] = new_CN[item][related_item]/max(0.005,min(strengh_dict[item],strengh_dict[related_item]))     \r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'HPI_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(HPI_sim, write_file)\r\n    write_file.close()\r\n    \r\n    HPI_sim = []\r\n    \r\n    \r\n    #HDI\r\n    HDI_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        HDI_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            HDI_sim[item][related_item] = new_CN[item][related_item]/max(strengh_dict[item],strengh_dict[related_item])       \r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'HDI_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(HDI_sim, write_file)\r\n    write_file.close()    \r\n    HDI_sim = []\r\n    \r\n    \r\n    \r\n    #LHN1\r\n    LHN1_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        LHN1_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            LHN1_sim[item][related_item] = new_CN[item][related_item]/( max(0.005,strengh_dict[item]) * max(0.005,strengh_dict[related_item]))       \r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'LHN1_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(LHN1_sim, write_file)\r\n    write_file.close()    \r\n    LHN1_sim = []\r\n    \r\n        \r\n    new_CN = []\r\n\r\n"
  },
  {
    "path": "code/2_Similarity/RA_Wu_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# # RA、AA一起运行的\r\n\r\n# In[2]:\r\n\r\n\r\nnow_phase = 9\r\n\r\ninput_path = './user_data/offline/new_similarity/'\r\nout_path = './user_data/offline/new_similarity/'\r\n\r\n\r\nfor num in range(now_phase+1):\r\n    \r\n    # 获取itemCF相似度\r\n    with open(input_path+'itemCF_new'+str(num)+'.pkl','rb') as f:\r\n        item_sim_list_tmp = pickle.load(f)  \r\n    \r\n    item_sim = {}\r\n    for item in item_sim_list_tmp:\r\n        item_sim.setdefault(item, {})\r\n        for related_item in item_sim_list_tmp[item]:\r\n            if item_sim_list_tmp[item][related_item] > 0.005:\r\n                item_sim[item][related_item] = item_sim_list_tmp[item][related_item]\r\n    \r\n    item_sim_list_tmp = []\r\n    \r\n    strengh_dict = dict()\r\n    print('Counting degree')\r\n    for item in tqdm(item_sim):\r\n        strengh_dict[item] = sum(item_sim[item].values())       \r\n        \r\n    strengh_AA_dict = dict()\r\n    print('Counting degree')\r\n    for item in tqdm(item_sim):\r\n        strengh_AA_dict[item] = math.log(1+sum(item_sim[item].values()) )\r\n        \r\n        \r\n    #RA\r\n    RA_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                RA_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        RA_sim[item1].setdefault(item2, 0)\r\n                        RA_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]/strengh_dict[item]\r\n    \r\n    \r\n    new_RA = dict()\r\n    for item1 in tqdm(RA_sim):\r\n        new_RA[item1] = {i: int(x * 1e3) / 1e3 for i, x in RA_sim[item1].items() if x > 1e-3}\r\n    \r\n    RA_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'RA_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_RA, write_file)\r\n    write_file.close() \r\n    \r\n        \r\n    new_RA = []\r\n    \r\n    \r\n    #RA\r\n    AA_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                AA_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        AA_sim[item1].setdefault(item2, 0)\r\n                        AA_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]/strengh_AA_dict[item]\r\n    \r\n    \r\n    new_AA = dict()\r\n    for item1 in tqdm(AA_sim):\r\n        new_AA[item1] = {i: int(x * 1e3) / 1e3 for i, x in AA_sim[item1].items() if x > 1e-3}\r\n    \r\n    AA_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'AA_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_AA, write_file)\r\n    write_file.close() \r\n    \r\n        \r\n    new_AA = []    \r\n    \r\n    \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ 
]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# # CN、HPI、HDI、LHN1是一起运行的\r\n\r\n# In[3]:\r\n\r\n\r\nnow_phase = 9\r\n\r\ninput_path = './user_data/offline/new_similarity/'\r\nout_path = './user_data/offline/new_similarity/'\r\n\r\n\r\n\r\nfor num in range(now_phase+1):\r\n    \r\n    # 获取itemCF相似度\r\n    with open(input_path+'itemCF_new'+str(num)+'.pkl','rb') as f:\r\n        item_sim_list_tmp = pickle.load(f)  \r\n    \r\n    item_sim = {}\r\n    for item in item_sim_list_tmp:\r\n        item_sim.setdefault(item, {})\r\n        for related_item in item_sim_list_tmp[item]:\r\n            if item_sim_list_tmp[item][related_item] > 0.005:\r\n                item_sim[item][related_item] = item_sim_list_tmp[item][related_item]\r\n    \r\n    item_sim_list_tmp = []\r\n    \r\n    #CN\r\n    CN_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                CN_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        CN_sim[item1].setdefault(item2, 0)\r\n                        CN_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]\r\n    \r\n    \r\n    new_CN = dict()\r\n    for item1 in tqdm(CN_sim):\r\n        new_CN[item1] = {i: int(x * 1e3) / 1e3 for i, x in CN_sim[item1].items() if x > 1e-3}\r\n    \r\n    CN_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'CN_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_CN, write_file)\r\n    write_file.close() \r\n    \r\n    strengh_dict = dict()\r\n    print('Counting degree')\r\n    for item in tqdm(item_sim):\r\n        strengh_dict[item] = sum(item_sim[item].values())     \r\n    \r\n    #HPI\r\n    HPI_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        HPI_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            HPI_sim[item][related_item] = new_CN[item][related_item]/min(strengh_dict[item],strengh_dict[related_item])       \r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'HPI_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(HPI_sim, write_file)\r\n    write_file.close()\r\n    \r\n    HPI_sim = []\r\n    \r\n    \r\n    #HDI\r\n    HDI_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        HDI_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            HDI_sim[item][related_item] = new_CN[item][related_item]/max(strengh_dict[item],strengh_dict[related_item])       \r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'HDI_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(HDI_sim, write_file)\r\n    write_file.close()    \r\n    HDI_sim = []\r\n    \r\n    \r\n    \r\n    #LHN1\r\n    LHN1_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        LHN1_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            LHN1_sim[item][related_item] = new_CN[item][related_item]/(strengh_dict[item]*strengh_dict[related_item])       \r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'LHN1_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(LHN1_sim, write_file)\r\n    write_file.close()    \r\n    LHN1_sim = []\r\n    \r\n        \r\n    new_CN = []\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/2_Similarity/RA_Wu_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[2]:\r\n\r\n\r\nnow_phase = 9\r\n\r\ninput_path = './user_data/dataset/new_similarity/'\r\nout_path = './user_data/dataset/new_similarity/'\r\n\r\n\r\nfor num in range(now_phase+1):\r\n    \r\n    # 获取itemCF相似度\r\n    with open(input_path+'itemCF_new'+str(num)+'.pkl','rb') as f:\r\n        item_sim_list_tmp = pickle.load(f)  \r\n    \r\n    item_sim = {}\r\n    for item in item_sim_list_tmp:\r\n        item_sim.setdefault(item, {})\r\n        for related_item in item_sim_list_tmp[item]:\r\n            if item_sim_list_tmp[item][related_item] > 0.005:\r\n                item_sim[item][related_item] = item_sim_list_tmp[item][related_item]\r\n    \r\n    item_sim_list_tmp = []\r\n    \r\n    strengh_dict = dict()\r\n    print('Counting degree')\r\n    for item in tqdm(item_sim):\r\n        strengh_dict[item] = sum(item_sim[item].values())       \r\n        \r\n    strengh_AA_dict = dict()\r\n    print('Counting degree')\r\n    for item in tqdm(item_sim):\r\n        strengh_AA_dict[item] = math.log(1+sum(item_sim[item].values()) )\r\n        \r\n        \r\n    #RA\r\n    RA_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                RA_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        RA_sim[item1].setdefault(item2, 0)\r\n                        RA_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]/strengh_dict[item]\r\n    \r\n    \r\n    new_RA = dict()\r\n    for item1 in tqdm(RA_sim):\r\n        new_RA[item1] = {i: int(x * 1e3) / 1e3 for i, x in RA_sim[item1].items() if x > 1e-3}\r\n    \r\n    RA_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'RA_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_RA, write_file)\r\n    write_file.close() \r\n    \r\n        \r\n    new_RA = []\r\n    \r\n    \r\n    #RA\r\n    AA_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                AA_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        AA_sim[item1].setdefault(item2, 0)\r\n                        AA_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]/strengh_AA_dict[item]\r\n    \r\n    \r\n    new_AA = dict()\r\n    for item1 in tqdm(AA_sim):\r\n        new_AA[item1] = {i: int(x * 1e3) / 1e3 for i, x in AA_sim[item1].items() if x > 1e-3}\r\n    \r\n    AA_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'AA_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_AA, write_file)\r\n    write_file.close() \r\n    \r\n        \r\n    new_AA = []    \r\n    \r\n    \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nnow_phase = 9\r\n\r\ninput_path = 
'./user_data/dataset/new_similarity/'\r\nout_path = './user_data/dataset/new_similarity/'\r\n\r\n\r\n\r\nfor num in range(now_phase+1):\r\n    \r\n    # load the itemCF similarity matrix\r\n    with open(input_path+'itemCF_new'+str(num)+'.pkl','rb') as f:\r\n        item_sim_list_tmp = pickle.load(f)  \r\n    \r\n    item_sim = {}\r\n    for item in item_sim_list_tmp:\r\n        item_sim.setdefault(item, {})\r\n        for related_item in item_sim_list_tmp[item]:\r\n            if item_sim_list_tmp[item][related_item] > 0.005:\r\n                item_sim[item][related_item] = item_sim_list_tmp[item][related_item]\r\n    \r\n    item_sim_list_tmp = []\r\n    \r\n    #CN\r\n    CN_sim = dict()\r\n    for item in tqdm(item_sim):\r\n        neighbors = list(set(item_sim[item].keys()))\r\n        for item1 in neighbors:\r\n            if item in item_sim[item1]:\r\n                CN_sim.setdefault(item1, {})\r\n                for item2 in neighbors:\r\n                    if item1 != item2:\r\n                        CN_sim[item1].setdefault(item2, 0)\r\n                        CN_sim[item1][item2] += item_sim[item1][item] * item_sim[item][item2]\r\n    \r\n    \r\n    new_CN = dict()\r\n    for item1 in tqdm(CN_sim):\r\n        new_CN[item1] = {i: int(x * 1e3) / 1e3 for i, x in CN_sim[item1].items() if x > 1e-3}\r\n    \r\n    CN_sim = []\r\n    print('Saving')\r\n    write_file = open(out_path+'CN_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(new_CN, write_file)\r\n    write_file.close() \r\n    \r\n    strengh_dict = dict()\r\n    print('Counting node strength')\r\n    for item in tqdm(item_sim):\r\n        strengh_dict[item] = sum(item_sim[item].values())     \r\n    \r\n    #HPI\r\n    HPI_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        HPI_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            # guard the denominator against zero strength, as in the model_1 variant\r\n            HPI_sim[item][related_item] = new_CN[item][related_item]/max(0.005,min(strengh_dict[item],strengh_dict[related_item]))\r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'HPI_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(HPI_sim, write_file)\r\n    write_file.close()\r\n    \r\n    HPI_sim = []\r\n    \r\n    \r\n    #HDI\r\n    HDI_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        HDI_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            HDI_sim[item][related_item] = new_CN[item][related_item]/max(strengh_dict[item],strengh_dict[related_item])       \r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'HDI_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(HDI_sim, write_file)\r\n    write_file.close()    \r\n    HDI_sim = []\r\n    \r\n    \r\n    \r\n    #LHN1\r\n    LHN1_sim = dict()\r\n    for item in tqdm(new_CN):\r\n        LHN1_sim.setdefault(item,{})\r\n        for related_item in new_CN[item]:\r\n            # guard the denominator against zero strength, as in the model_1 variant\r\n            LHN1_sim[item][related_item] = new_CN[item][related_item]/( max(0.005,strengh_dict[item]) * max(0.005,strengh_dict[related_item]))\r\n            \r\n    print('Saving')\r\n    write_file = open(out_path+'LHN1_P'+str(num)+'_new.pkl', 'wb')\r\n    pickle.dump(LHN1_sim, write_file)\r\n    write_file.close()    \r\n    LHN1_sim = []\r\n    \r\n        \r\n    new_CN = []\r\n\r\n"
  },
  {
    "path": "code/2_Similarity/deep_node_model.py",
    "content": "# coding=utf-8\r\n'''\r\nCreated on 2020年5月1日\r\n\r\n@author: LSH\r\n'''\r\nimport os\r\nimport time\r\nimport random\r\nimport itertools\r\nimport numpy as np\r\nimport pandas as pd \r\nimport networkx as nx\r\nfrom gensim.models import Word2Vec\r\nfrom joblib import Parallel, delayed\r\nrandom.seed(2020)\r\npd.set_option('display.unicode.ambiguous_as_wide', True)\r\npd.set_option('display.unicode.east_asian_width', True)\r\npd.set_option('display.max_columns', None)\r\npd.set_option('display.max_rows', None)\r\npd.set_option(\"display.max_colwidth\",100)\r\npd.set_option('display.width',1000)\r\nnow_phase = 9\r\nuser_data = \"./user_data/\"\r\n\r\n\r\ndef create_alias_table(area_ratio):\r\n    \"\"\"\r\n    :param area_ratio: sum(area_ratio)=1\r\n    :return: accept,alias\r\n    \"\"\"\r\n    l = len(area_ratio)\r\n    accept, alias = [0] * l, [None] * l\r\n    small, large = [], []\r\n    area_ratio_ = np.array(area_ratio) * l\r\n    for i, prob in enumerate(area_ratio_):\r\n        if prob < 1.0:\r\n            small.append(i)\r\n        else:\r\n            large.append(i)\r\n\r\n    while small and large:\r\n\r\n        small_idx, large_idx = small.pop(), large.pop()\r\n\r\n        accept[small_idx] = area_ratio_[small_idx]\r\n\r\n        alias[small_idx] = large_idx\r\n        area_ratio_[large_idx] = area_ratio_[large_idx] - (1 - area_ratio_[small_idx])\r\n\r\n        if area_ratio_[large_idx] < 1.0:\r\n            small.append(large_idx)\r\n        else:\r\n            large.append(large_idx)\r\n\r\n    while large:\r\n        large_idx = large.pop()\r\n        accept[large_idx] = 1\r\n    while small:\r\n        small_idx = small.pop()\r\n        accept[small_idx] = 1\r\n    return accept, alias\r\n\r\n\r\ndef alias_sample(accept, alias):\r\n    \"\"\"\r\n    :param accept:\r\n    :param alias:\r\n    :return: sample index\r\n    \"\"\"\r\n    N = len(accept)\r\n    i = int(np.random.random() * N)\r\n    r = np.random.random()\r\n    if r < accept[i]:\r\n        return i\r\n    else:\r\n        return alias[i]\r\n\r\n\r\ndef partition_num(num, workers):\r\n    if num % workers == 0: return [num // workers] * workers\r\n    else: return [num // workers] * workers + [num % workers]\r\n   \r\n    \r\nclass RandomWalker:\r\n\r\n    def __init__(self, G, p=1, q=1):\r\n        \"\"\"\r\n        :param G:\r\n        :param p: Return parameter,controls the likelihood of immediately revisiting a node in the walk.\r\n        :param q: In-out parameter,allows the search to differentiate between “inward” and “outward” nodes\r\n        \"\"\"\r\n        self.G = G\r\n        self.p = p\r\n        self.q = q\r\n\r\n    def deepwalk_walk(self, walk_length, start_node):\r\n        walk = [start_node]\r\n        while len(walk) < walk_length:\r\n            cur = walk[-1]\r\n            cur_nbrs = list(self.G.neighbors(cur))\r\n            if len(cur_nbrs) > 0:\r\n                walk.append(random.choice(cur_nbrs))\r\n            else:\r\n                break\r\n        return walk\r\n\r\n    def node2vec_walk(self, walk_length, start_node):\r\n        G = self.G\r\n        alias_nodes = self.alias_nodes\r\n        alias_edges = self.alias_edges\r\n\r\n        walk = [start_node]\r\n        while len(walk) < walk_length:\r\n            cur = walk[-1]\r\n            cur_nbrs = list(G.neighbors(cur))\r\n            if len(cur_nbrs) > 0:\r\n                #由于node2vec采样需要cur节点v，prev节点t，所以当没有前序节点时，直接使用当前顶点和邻居顶点之间的边权作为采样依据\r\n                if len(walk) == 1:\r\n                    
def partition_num(num, workers):\r\n    if num % workers == 0: return [num // workers] * workers\r\n    else: return [num // workers] * workers + [num % workers]\r\n   \r\n    \r\nclass RandomWalker:\r\n\r\n    def __init__(self, G, p=1, q=1):\r\n        \"\"\"\r\n        :param G:\r\n        :param p: Return parameter,controls the likelihood of immediately revisiting a node in the walk.\r\n        :param q: In-out parameter,allows the search to differentiate between “inward” and “outward” nodes\r\n        \"\"\"\r\n        self.G = G\r\n        self.p = p\r\n        self.q = q\r\n\r\n    def deepwalk_walk(self, walk_length, start_node):\r\n        walk = [start_node]\r\n        while len(walk) < walk_length:\r\n            cur = walk[-1]\r\n            cur_nbrs = list(self.G.neighbors(cur))\r\n            if len(cur_nbrs) > 0:\r\n                walk.append(random.choice(cur_nbrs))\r\n            else:\r\n                break\r\n        return walk\r\n\r\n    def node2vec_walk(self, walk_length, start_node):\r\n        G = self.G\r\n        alias_nodes = self.alias_nodes\r\n        alias_edges = self.alias_edges\r\n\r\n        walk = [start_node]\r\n        while len(walk) < walk_length:\r\n            cur = walk[-1]\r\n            cur_nbrs = list(G.neighbors(cur))\r\n            if len(cur_nbrs) > 0:\r\n                # node2vec sampling needs the current node v and the previous node t;\r\n                # when there is no previous node yet, sample directly from the edge\r\n                # weights between the current node and its neighbours\r\n                if len(walk) == 1:\r\n                    walk.append(cur_nbrs[alias_sample(alias_nodes[cur][0], alias_nodes[cur][1])])\r\n                else:\r\n                    prev = walk[-2]\r\n                    edge = (prev, cur)\r\n                    next_node = cur_nbrs[alias_sample(alias_edges[edge][0],alias_edges[edge][1])]\r\n                    walk.append(next_node)\r\n            else: \r\n                break\r\n        return walk\r\n\r\n    def simulate_walks(self, num_walks, walk_length, workers=1, verbose=0):\r\n        \"\"\"\r\n        \"\"\"\r\n        G = self.G\r\n        nodes = list(G.nodes())\r\n        results = Parallel(n_jobs=workers, verbose=verbose, )(\r\n            delayed(self._simulate_walks)(nodes, num, walk_length) for num in\r\n            partition_num(num_walks, workers))\r\n\r\n        walks = list(itertools.chain(*results))\r\n        return walks\r\n\r\n    def _simulate_walks(self, nodes, num_walks, walk_length, ):\r\n        walks = []\r\n        for _ in range(num_walks):\r\n            random.shuffle(nodes)\r\n            for v in nodes:\r\n                if self.p == 1 and self.q == 1:\r\n                    walks.append(self.deepwalk_walk(\r\n                        walk_length=walk_length, start_node=v))\r\n                else:\r\n                    walks.append(self.node2vec_walk(\r\n                        walk_length=walk_length, start_node=v))\r\n        return walks\r\n\r\n    def get_alias_edge(self, t, v):\r\n        \"\"\"\r\n        compute the unnormalized transition probabilities between node v and its neighbors given the previously visited node t.\r\n        :param t:\r\n        :param v:\r\n        :return:\r\n        \"\"\"\r\n        G = self.G\r\n        p = self.p\r\n        q = self.q\r\n        unnormalized_probs = []\r\n        for x in G.neighbors(v):\r\n            weight = G[v][x].get('weight', 1.0)  # w_vx\r\n            if x == t:  # d_tx == 0\r\n                unnormalized_probs.append(weight/p)\r\n            elif G.has_edge(x, t):  # d_tx == 1\r\n                unnormalized_probs.append(weight)\r\n            else:  # d_tx > 1\r\n                unnormalized_probs.append(weight/q)\r\n\r\n        norm_const = sum(unnormalized_probs)\r\n        normalized_probs = [float(u_prob)/norm_const for u_prob in unnormalized_probs]\r\n        return create_alias_table(normalized_probs)\r\n\r\n    def preprocess_transition_probs(self):\r\n        \"\"\"\r\n        Preprocessing of transition probabilities for guiding the random walks.\r\n        \"\"\"\r\n        G = self.G\r\n        alias_nodes = {}\r\n        for node in G.nodes():\r\n            unnormalized_probs = [G[node][nbr].get('weight', 1.0) for nbr in G.neighbors(node)]\r\n            norm_const = sum(unnormalized_probs)\r\n            normalized_probs = [float(u_prob)/norm_const for u_prob in unnormalized_probs]\r\n            alias_nodes[node] = create_alias_table(normalized_probs)\r\n        alias_edges = {}\r\n        for edge in G.edges():\r\n            alias_edges[edge] = self.get_alias_edge(edge[0], edge[1])\r\n\r\n        self.alias_nodes = alias_nodes\r\n        self.alias_edges = alias_edges\r\n        return\r\n\r\n\r\nclass DeepWalk:\r\n    def __init__(self, graph, walk_length, num_walks, workers=1):\r\n\r\n        self.graph = graph\r\n        self.w2v_model = None\r\n        self._embeddings = {}\r\n\r\n        self.walker = RandomWalker(graph, p=1, q=1, )\r\n        self.sentences = self.walker.simulate_walks(\r\n            num_walks=num_walks, walk_length=walk_length, workers=workers, verbose=1)\r\n\r\n    
def train(self, embed_size=128, window_size=5, workers=3, iters=5, **kwargs):\r\n\r\n        kwargs[\"sentences\"] = self.sentences\r\n        kwargs[\"min_count\"] = kwargs.get(\"min_count\", 0)\r\n        kwargs[\"size\"] = embed_size\r\n        kwargs[\"sg\"] = 1  # skip gram\r\n        kwargs[\"hs\"] = 1  # deepwalk uses hierarchical softmax\r\n        kwargs[\"workers\"] = workers\r\n        kwargs[\"window\"] = window_size\r\n        kwargs[\"iter\"] = iters\r\n\r\n        print(\"Learning embedding vectors...\")\r\n        model = Word2Vec(**kwargs)\r\n        print(\"Learning embedding vectors done!\")\r\n\r\n        self.w2v_model = model\r\n        return model\r\n\r\n    def get_embeddings(self, ):\r\n        if self.w2v_model is None:\r\n            print(\"model not trained\")\r\n            return {}\r\n        self._embeddings = {}\r\n        for word in self.graph.nodes():\r\n            self._embeddings[word] = self.w2v_model.wv[word]\r\n        return self._embeddings\r\n\r\n    def get_topK(self, item, k=50):\r\n        if not isinstance(item, str):\r\n            item=str(item)\r\n        recom_list = list(map(lambda x: [x[0], x[1]], self.w2v_model.wv.most_similar(positive=[item], topn=k)))\r\n        return recom_list\r\n\r\n\r\nclass Node2Vec:\r\n\r\n    def __init__(self, graph, walk_length, num_walks, p=1.0, q=1.0, workers=1):\r\n\r\n        self.graph = graph\r\n        self.w2v_model = None  # set in train(); mirrors DeepWalk\r\n        self._embeddings = {}\r\n        \r\n        self.walker = RandomWalker(graph, p=p, q=q, )\r\n        self.walker.preprocess_transition_probs()\r\n        self.sentences = self.walker.simulate_walks(\r\n            num_walks=num_walks, walk_length=walk_length, workers=workers, verbose=1)\r\n\r\n    def train(self, embed_size=128, window_size=5, workers=3, iters=5, **kwargs):\r\n\r\n        kwargs[\"sentences\"] = self.sentences\r\n        kwargs[\"min_count\"] = kwargs.get(\"min_count\", 0)\r\n        kwargs[\"size\"] = embed_size\r\n        kwargs[\"sg\"] = 1\r\n        kwargs[\"hs\"] = 0  # node2vec does not use hierarchical softmax\r\n        kwargs[\"workers\"] = workers\r\n        kwargs[\"window\"] = window_size\r\n        kwargs[\"iter\"] = iters\r\n\r\n        print(\"Learning embedding vectors...\")\r\n        model = Word2Vec(**kwargs)\r\n        print(\"Learning embedding vectors done!\")\r\n        self.w2v_model = model\r\n\r\n        return model\r\n\r\n    def get_embeddings(self,):\r\n        if self.w2v_model is None:\r\n            print(\"model not trained\")\r\n            return {}\r\n\r\n        self._embeddings = {}\r\n        for word in self.graph.nodes():\r\n            self._embeddings[word] = self.w2v_model.wv[word]\r\n\r\n        return self._embeddings\r\n\r\n    def get_topK(self, item, k=50):\r\n        if not isinstance(item, str):\r\n            item = str(item)\r\n        recom_list = list(map(lambda x: [x[0], x[1]], self.w2v_model.wv.most_similar(positive=[item], topn=k)))\r\n        return recom_list\r\n\r\n\r\ndef get_item_graph(df, user_col, item_col, direction=True, new_wei=False):\r\n    \"\"\"Build the weighted item-item graph from click sequences.\r\n    \"\"\"\r\n    user_item_ = df.groupby(user_col)[item_col].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_[user_col], user_item_[item_col]))\r\n    edgelist = []\r\n    user_time_ = df.groupby(user_col)['time'].agg(list).reset_index() # bring in the time signal\r\n    user_time_dict = dict(zip(user_time_[user_col], user_time_['time']))\r\n\r\n    item_cnt=df[item_col].value_counts().to_dict()\r\n\r\n    for user, items in user_item_dict.items():\r\n        for i in range(len(items) - 1):\r\n            if direction:\r\n                t1 = user_time_dict[user][i] # click timestamps of adjacent items\r\n                t2 = user_time_dict[user][i+1]\r\n                delta_t=abs(t1-t2)*50000   # median 0.01, 75th percentile 0.02\r\n                # directed weighted graph: the hot->cold edge weight scales with hot count / cold count\r\n                ai, aj = item_cnt[items[i]], item_cnt[items[i+1]]\r\n                edgelist.append([items[i], items[i + 1], max(3, np.log(1+ai/aj)) * 1/(1+delta_t) ])\r\n                edgelist.append([items[i+1], items[i], max(3, np.log(1+aj/ai)) * 0.8 * 1/(1+delta_t) ])\r\n            else:\r\n                edgelist.append([items[i], items[i + 1], 1])\r\n    if direction:\r\n        G = nx.DiGraph()\r\n    else:\r\n        G = nx.Graph()\r\n    for edge in edgelist:\r\n        G.add_edge(str(edge[0]), str(edge[1]), weight=edge[2])\r\n    if new_wei:\r\n        for u,v,d in G.edges(data=True):\r\n            deg = G.degree(u)/G.degree(v)\r\n            if deg < 1:\r\n                deg = max(0.1, deg)\r\n            else:\r\n                deg = min(3, deg)\r\n            new_weight = d[\"weight\"] * deg\r\n            G[u][v].update({\"weight\":new_weight})\r\n    return G\r\n\r\n\r\n
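# A hedged usage sketch on a toy click log (illustrative data only, not the\r\n# competition files):\r\n#\r\n#     toy = pd.DataFrame({'user_id': [1, 1, 1, 2, 2],\r\n#                         'item_id': [10, 11, 12, 11, 10],\r\n#                         'time':    [0.1, 0.2, 0.3, 0.1, 0.4]})\r\n#     G = get_item_graph(toy, 'user_id', 'item_id')\r\n#     n2v = Node2Vec(G, walk_length=5, num_walks=10, p=2, q=0.5, workers=1)\r\n#     n2v.train(embed_size=16, window_size=3, workers=1, iters=1)\r\n#     print(n2v.get_topK(10, k=2))\r\n\r\n\r\n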
user_item_dict.items():\r\n        for i in range(len(items) - 1):\r\n            if direction:\r\n                t1 = user_time_dict[user][i] # 点击时间提取\r\n                t2 = user_time_dict[user][i+1]\r\n                delta_t=abs(t1-t2)*50000   # 中值 0.01 75%:0.02\r\n                #             有向有权图，热门商品-->冷门商品权重=热门商品个数/冷门商品个数\r\n                ai, aj = item_cnt[items[i]], item_cnt[items[i+1]]\r\n                edgelist.append([items[i], items[i + 1], max(3, np.log(1+ai/aj)) * 1/(1+delta_t) ])\r\n                edgelist.append([items[i+1], items[i], max(3, np.log(1+aj/ai)) * 0.8 * 1/(1+delta_t) ])\r\n            else:\r\n                edgelist.append([items[i], items[i + 1], 1])\r\n    if direction:\r\n        G = nx.DiGraph()\r\n    else:\r\n        G = nx.Graph()\r\n    for edge in edgelist:\r\n        G.add_edge(str(edge[0]), str(edge[1]), weight=edge[2])\r\n    if new_wei:\r\n        for u,v,d in G.edges(data=True):\r\n            deg = G.degree(u)/G.degree(v)\r\n            if deg < 1:\r\n                deg = max(0.1, deg)\r\n            else:\r\n                deg = min(3, deg)\r\n            new_weight = d[\"weight\"] * deg\r\n            G[u][v].update({\"weight\":new_weight})\r\n    return G\r\n\r\n\r\ndef deep_node_recom():\r\n    \"\"\"使用全量数据分别训练deepwalk和node2vec模型    用于offline和online\r\n    \"\"\"\r\n    global now_phase\r\n    novalid_click = pd.DataFrame()\r\n    whole_click = pd.DataFrame()\r\n    for i in range(now_phase+1):\r\n        click_train=pd.read_csv(user_data+'offline/offline_train_click-{}.csv'.format(i),header=None, names=['user_id', 'item_id', 'time'])\r\n        click_test=pd.read_csv(user_data+'offline/offline_test_click-{}.csv'.format(i),header=None, names=['user_id', 'item_id', 'time'])\r\n        qtime_test=pd.read_csv(user_data+'offline/offline_test_qtime-{}.csv'.format(i),header=None, names=['user_id', 'item_id', 'time'])\r\n        click_train[\"time\"] += i\r\n        click_test[\"time\"] += i\r\n        qtime_test[\"time\"] += i\r\n        all_click=click_train.append(click_test)\r\n        novalid_click = novalid_click.append(all_click)\r\n        all_click.append(qtime_test)\r\n        whole_click = whole_click.append(all_click)\r\n    \"\"\"除去test最后一次点击的whole点击数据，用于offline的召回\r\n    \"\"\"\r\n    novalid_click = novalid_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    novalid_click = novalid_click.sort_values('time')\r\n    novalid_click = novalid_click.reset_index(drop=True)\r\n    \"\"\"whole点击数据，用于online的召回\r\n    \"\"\"\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n    cpu_jobs = os.cpu_count() - 1\r\n    \"\"\"使用有向图训练的node2vecmox\r\n    \"\"\"\r\n    G = get_item_graph(novalid_click, 'user_id', 'item_id')\r\n    novalidmodel = Node2Vec(G, walk_length=20, num_walks=80, p=2, q=0.5, workers=1)\r\n    novalidmodel.train(embed_size=128, window_size=10, workers=cpu_jobs, iter=3)\r\n    novalidmodel.w2v_model.wv.save_word2vec_format(user_data + \"offline/node2vec_offline.bin\", binary=True)\r\n\r\n    G = get_item_graph(whole_click, 'user_id', 'item_id')\r\n    model = Node2Vec(G, walk_length=20, num_walks=80, p=2, q=0.5, workers=1)\r\n    model.train(embed_size=128, window_size=10, workers=cpu_jobs, iter=3)\r\n    model.w2v_model.wv.save_word2vec_format(user_data + \"dataset/node2vec_underexpose.bin\", binary=True)\r\n    \r\n    
\"\"\"deepwalk\r\n    \"\"\"\r\n    G = get_item_graph(novalid_click, 'user_id', 'item_id', direction=False)\r\n    novalidmodel = DeepWalk(G, walk_length=20, num_walks=80, workers=8)\r\n    novalidmodel.train(embed_size=128, window_size=10, workers=cpu_jobs, iter=3)\r\n    novalidmodel.w2v_model.wv.save_word2vec_format(user_data + \"offline/deepwalk_offline.bin\", binary=True)\r\n\r\n    G = get_item_graph(whole_click, 'user_id', 'item_id', direction=False)\r\n    model = DeepWalk(G, walk_length=20, num_walks=80, workers=8)\r\n    model.train(embed_size=128, window_size=10, workers=cpu_jobs, iter=3)\r\n    model.w2v_model.wv.save_word2vec_format(user_data + \"dataset/deepwalk_underexpose.bin\", binary=True)\r\n#         model = KeyedVectors.load_word2vec_format(deepwalk + \"deep_model_whoclick_model.bin\", binary=True)\r\n  \r\n    \r\ndef model_deep_node_recom():\r\n    \"\"\"训练用于model1的deepwalk和node2vec\r\n    \"\"\"\r\n    global now_phase\r\n    novalid_click = pd.DataFrame()\r\n    for i in range(now_phase+1):\r\n        click_train=pd.read_csv(user_data+'model_1/model_1_train_click-{}.csv'.format(i),header=None, names=['user_id', 'item_id', 'time'])\r\n        click_test=pd.read_csv(user_data+'model_1/model_1_test_click-{}.csv'.format(i),header=None, names=['user_id', 'item_id', 'time'])\r\n        click_train[\"time\"] += i\r\n        click_test[\"time\"] += i\r\n        all_click=click_train.append(click_test)\r\n        novalid_click = novalid_click.append(all_click)\r\n\r\n    novalid_click = novalid_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    novalid_click = novalid_click.sort_values('time')\r\n    novalid_click = novalid_click.reset_index(drop=True)\r\n\r\n    cpu_jobs = os.cpu_count() - 1\r\n    G = get_item_graph(novalid_click, 'user_id', 'item_id')\r\n    novalidmodel = Node2Vec(G, walk_length=20, num_walks=80, p=2, q=0.5, workers=1)\r\n    novalidmodel.train(embed_size=128, window_size=10, workers=cpu_jobs, iter=3)\r\n    novalidmodel.w2v_model.wv.save_word2vec_format(user_data + \"model_1/node2vec_model_1.bin\", binary=True)\r\n\r\n    G = get_item_graph(novalid_click, 'user_id', 'item_id', direction=False)\r\n    novalidmodel = DeepWalk(G, walk_length=20, num_walks=80, workers=8)\r\n    novalidmodel.train(embed_size=128, window_size=10, workers=cpu_jobs, iter=3)\r\n    novalidmodel.w2v_model.wv.save_word2vec_format(user_data + \"model_1/deepwalk_model_1.bin\", binary=True)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    print(\"start\")\r\n    a = time.time()\r\n    if not os.path.exists(user_data):\r\n        os.mkdir(user_data)\r\n    deep_node_recom()\r\n    model_deep_node_recom()\r\n    print(\"time:{:6.4f} mins\".format( (time.time()-a)/60))\r\n    \r\n\r\n\r\n\r\n    \r\n    \r\n    \r\n    \r\n    "
  },
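`RandomWalker` leans on two helpers, `create_alias_table` and `alias_sample`, that are referenced above but not defined in this excerpt. For readers unfamiliar with the alias method, here is a minimal, self-contained sketch of how such helpers are typically implemented; the names mirror the calls above, but this is an illustrative reconstruction, not necessarily the repo's exact code:

```python
import random

def create_alias_table(probs):
    """Build alias tables for O(1) sampling from a discrete distribution.
    `probs` must sum to 1. Returns (accept, alias) lists."""
    n = len(probs)
    accept, alias = [0.0] * n, [0] * n
    small, large = [], []
    scaled = [p * n for p in probs]
    for i, p in enumerate(scaled):
        (small if p < 1.0 else large).append(i)
    while small and large:
        s, l = small.pop(), large.pop()
        accept[s] = scaled[s]          # probability of keeping column s itself
        alias[s] = l                   # otherwise redirect to the "large" column
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:            # leftovers keep themselves with certainty
        accept[i] = 1.0
    return accept, alias

def alias_sample(accept, alias):
    """Draw one index in O(1): pick a column, then accept it or take its alias."""
    i = random.randrange(len(accept))
    return i if random.random() < accept[i] else alias[i]

# e.g. sampling neighbor indices with probabilities [0.5, 0.3, 0.2]
accept, alias = create_alias_table([0.5, 0.3, 0.2])
counts = [0, 0, 0]
for _ in range(10000):
    counts[alias_sample(accept, alias)] += 1
print(counts)  # roughly proportional to 5:3:2
```

Each draw costs O(1) after an O(n) setup, which matters when simulating 80 walks per node over the whole item graph.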
  {
    "path": "code/3_NN/ItemFeat2.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Apr 29 10:01:01 2020\n@author: hcb\n\"\"\"\n\nimport pandas as pd\nimport os\nfrom config import config\n\n\ndef get_feat(now_phase=3, base_path=None):\n    \n    # if base_path is None:\n    #     train_path = 'underexpose_train'\n    #     test_path = 'underexpose_test'\n    # else:\n    #     train_path = os.path.join(base_path, 'underexpose_train')\n    #     test_path = os.path.join(base_path, 'underexpose_test')\n    train_path = config.train_path\n    test_path = config.test_path  \n    click_train = pd.DataFrame()\n    click_test = pd.DataFrame()\n    for c in range(now_phase + 1):\n        click_tmp = pd.read_csv(train_path + f'/underexpose_train_click-{c}.csv', header=None,\n                                names=['user_id', 'item_id', 'time'])\n        click_tmp['user_id'] = '1_{}_'.format(c) + click_tmp['user_id'].astype(str)\n        click_test_tmp = pd.read_csv(test_path + f'/underexpose_test_click-{c}.csv', header=None,\n                                     names=['user_id', 'item_id', 'time'])\n        click_test_tmp['user_id'] = '0_{}_'.format(c) + click_test_tmp['user_id'].astype(str)\n        click_train = click_train.append(click_tmp)\n        click_test = click_test.append(click_test_tmp)\n    all_click = click_train.append(click_test)\n    print(all_click['item_id'].nunique())\n    item_df = all_click.groupby('item_id')['time'].count().reset_index()\n    item_df.columns = ['item_id', 'degree']\n    \n    feat = pd.read_csv('./data/underexpose_train/underexpose_item_feat.csv', header=None)\n    feat[1] = feat[1].apply(lambda x:x[1:]).astype(float)\n    feat[128] = feat[128].apply(lambda x:x[:-1]).astype(float)\n    feat[129] = feat[129].apply(lambda x:x[1:]).astype(float)\n    feat[256] = feat[256].apply(lambda x:x[:-1]).astype(float)\n    feat.columns = ['item_id'] + ['feat'+str(i) for i in range(256)]\n    \n    item_df = item_df.merge(feat, on='item_id', how='left')\n    print(item_df['item_id'].nunique())\n    def transform(x):\n        if x > 150 and x <400:\n            x = (x-150) // 25 * 25 +150\n        elif x>=400:\n            x = 400\n        return x \n    \n    item_df['degree'] = item_df['degree'].apply(lambda x: transform(x))\n    degree_df = item_df.groupby('degree')[['feat'+str(i) for i in range(256)]].mean().reset_index()\n    na_df = item_df[item_df['feat0'].isna()][['item_id', 'degree']].merge(degree_df, on='degree', how='left')\n    item_df.dropna(inplace=True)\n    item_df = pd.concat((item_df, na_df))\n    \n    item_df.to_csv('item_feat.csv', index=None)\n    \nif __name__ == '__main__':\n    get_feat(now_phase=9)"
  },
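`get_feat` imputes missing 256-dim text/image vectors with the mean vector of items in the same click-degree bucket, on the assumption that similarly popular items have similar features. A toy illustration of that imputation pattern (made-up values, hypothetical column names):

```python
import pandas as pd
import numpy as np

# Toy frame: two feature columns, one item (id 4) without features.
df = pd.DataFrame({
    'item_id': [1, 2, 3, 4],
    'degree':  [10, 10, 200, 10],
    'feat0':   [0.1, 0.3, 0.9, np.nan],
    'feat1':   [1.0, 2.0, 5.0, np.nan],
})

# Mean feature vector per degree bucket (NaNs are skipped by mean()).
bucket_mean = df.groupby('degree')[['feat0', 'feat1']].mean().reset_index()

# Items lacking features inherit their bucket's mean vector.
na_df = df[df['feat0'].isna()][['item_id', 'degree']].merge(bucket_mean, on='degree', how='left')
df = pd.concat((df.dropna(), na_df))
print(df.sort_values('item_id'))  # item 4 now carries the degree-10 mean (0.2, 1.5)
```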
  {
    "path": "code/3_NN/Readme",
    "content": "pandas==0.25.1\nnumpy==1.17.2\ntensorflow-gpu==1.13.1\ntqdm\nargparse\ncudatoolkit==9.0\ncudnn==7.6.5"
  },
  {
    "path": "code/3_NN/config.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Jun 11 12:36:15 2020\n\n@author: hcb\n\"\"\"\n\nclass config:\n    train_path = './user_data/dataset'\n    test_path = './user_data/dataset'\n    offline_path = './user_data/offline'\n    model1_path = './user_data/model_1'\n    \n    save_path_offline = './user_data/offline/nn/nn_offline.csv'\n    save_path_online = './user_data/dataset/nn/nn_underexpose.csv'\n    save_path_model1 = './user_data/model_1/nn/nn_model_1.csv'\n    \n    online_item_file = './user_data/dataset/new_recall/user_item_index.csv'\n    offline_item_file = './user_data/offline/new_recall/user_item_index.csv'\n    model1_item_file = './user_data/model_1/new_recall/user_item_index.csv'\n    # online_path = ''"
  },
  {
    "path": "code/3_NN/model2.py",
    "content": "from modules import *\r\n\r\n\r\nclass Model:\r\n    def __init__(self, usernum, itemnum, args, emb=None, num_neg=2, dec_step=None, \r\n                 emb_usr=None, reuse=None):\r\n        self.is_training = tf.placeholder(tf.bool, shape=())\r\n        self.u = tf.placeholder(tf.int32, shape=(None))\r\n        self.input_seq = tf.placeholder(tf.int32, shape=(None, args.maxlen))\r\n        self.pos = tf.placeholder(tf.int32, shape=(None, args.maxlen))\r\n        self.neg = tf.placeholder(tf.int32, shape=(None, args.maxlen, num_neg))\r\n        pos = self.pos\r\n        neg = self.neg\r\n        mask = tf.expand_dims(tf.to_float(tf.not_equal(self.input_seq, 0)), -1)\r\n\r\n        with tf.variable_scope(\"SASRec\", reuse=reuse):\r\n            # sequence embedding, item embedding table\r\n            self.seq, item_emb_table = embedding(self.input_seq,\r\n                                                 vocab_size=itemnum + 1,\r\n                                                 num_units=args.hidden_units,\r\n                                                 zero_pad=False,\r\n                                                 scale=True,\r\n                                                 l2_reg=args.l2_emb,\r\n                                                 scope=\"input_embeddings\",\r\n                                                 with_t=True,\r\n                                                 reuse=reuse\r\n                                                 )\r\n            \r\n            \r\n            # self.lookup_table2 = tf.get_variable('lookup_table2',\r\n            #                   dtype=tf.float32,\r\n            #                   shape=[itemnum + 1, args.hidden_units],\r\n            #                   trainable=False\r\n            #                   )\r\n            # item_emb_table = lookup_table2 + item_emb_table\r\n#            \r\n#            self.seq = tf.nn.embedding_lookup(item_emb_table, self.input_seq)\r\n            \r\n            # Positional Encoding\r\n            t, pos_emb_table = embedding(\r\n                tf.tile(tf.expand_dims(tf.range(tf.shape(self.input_seq)[1]), 0), [tf.shape(self.input_seq)[0], 1]),\r\n                vocab_size=args.maxlen,\r\n                num_units=args.hidden_units,\r\n                zero_pad=False,\r\n                scale=False,\r\n                l2_reg=args.l2_emb,\r\n                scope=\"dec_pos\",\r\n                reuse=reuse,\r\n                with_t=True\r\n            )\r\n            \r\n            # user embedding\r\n            u_, user_emb_table = embedding(\r\n                self.u,\r\n                vocab_size=usernum+1,\r\n                num_units=args.hidden_units,\r\n                zero_pad=False,\r\n                scale=False,\r\n                l2_reg=args.l2_emb,\r\n                scope=\"user_embedding\",\r\n                reuse=reuse,\r\n                with_t=True\r\n            )\r\n            \r\n            self.seq += t\r\n            \r\n#            user_emb = tf.reshape(u_, [tf.shape(self.input_seq)[0], 1, args.hidden_units])\r\n#            self.seq = user_emb + self.seq\r\n            \r\n            # Dropout\r\n            self.seq = tf.layers.dropout(self.seq,\r\n                                         rate=args.dropout_rate,\r\n                                         training=tf.convert_to_tensor(self.is_training))\r\n            self.seq *= mask\r\n\r\n            # Build blocks\r\n\r\n            for i in range(args.num_blocks):\r\n            
    with tf.variable_scope(\"num_blocks_%d\" % i):\r\n\r\n                    # Self-attention\r\n                    self.seq = multihead_attention(queries=normalize(self.seq),\r\n                                                   keys=self.seq,\r\n                                                   num_units=args.hidden_units,\r\n                                                   num_heads=args.num_heads,\r\n                                                   dropout_rate=args.dropout_rate,\r\n                                                   is_training=self.is_training,\r\n                                                   causality=True,\r\n                                                   scope=\"self_attention\")\r\n\r\n                    # Feed forward\r\n                    self.seq = feedforward(normalize(self.seq), num_units=[args.hidden_units, args.hidden_units],\r\n                                           dropout_rate=args.dropout_rate, is_training=self.is_training)\r\n                    \r\n                    self.seq *= mask\r\n\r\n            self.seq = normalize(self.seq)\r\n#        print(item_emb_table.shape)\r\n#        print(emb_item.shape)  \r\n        self.emb_item = tf.Variable(emb, dtype=tf.float32)\r\n        self.usr_emb = tf.Variable(emb_usr, dtype=tf.float32)\r\n        \r\n        self.item_emb_table = item_emb_table\r\n#        self.lookup_table2 = lookup_table2\r\n        \r\n        pos = tf.reshape(pos, [tf.shape(self.input_seq)[0] * args.maxlen])\r\n#        neg = tf.reshape(neg, [tf.shape(self.input_seq)[0] * args.maxlen])\r\n        neg = tf.reshape(neg, [tf.shape(self.input_seq)[0] * args.maxlen * num_neg])\r\n        pos_emb = tf.nn.embedding_lookup(item_emb_table, pos)\r\n        neg_emb = tf.nn.embedding_lookup(item_emb_table, neg)\r\n        \r\n        # ------------------\r\n        #user emedding\r\n        self.user_emb_table = user_emb_table\r\n        user_emb = tf.nn.embedding_lookup(self.user_emb_table, self.u)\r\n        user_emb = tf.reshape(user_emb, [tf.shape(self.input_seq)[0], 1, args.hidden_units])\r\n        seq_emb = tf.reshape(self.seq, [tf.shape(self.input_seq)[0], args.maxlen, args.hidden_units])\r\n        self.seq = user_emb + seq_emb\r\n               \r\n        # last 5 emb\r\n#        item_emb2 = tf.nn.embedding_lookup(item_emb_table, self.input_seq)\r\n#        item_emb2 = tf.reshape(item_emb2, [tf.shape(self.input_seq)[0], args.maxlen, args.hidden_units])\r\n#        item_emb2 = tf.layers.dense(item_emb2, args.hidden_units, activation=None)\r\n#        self.seq = self.seq + item_emb2\r\n        \r\n#        seq_emb = tf.reshape(seq_emb, [-1, args.hidden_units])\r\n#        item_emb2 = tf.reduce_mean(item_emb2[:,-10:,:], axis=1)\r\n        \r\n        # -----------\r\n        seq_emb = tf.reshape(self.seq, [tf.shape(self.input_seq)[0] * args.maxlen, args.hidden_units])\r\n\r\n  \r\n        self.test_item = tf.placeholder(tf.int32, shape=(None))\r\n        test_item_emb = tf.nn.embedding_lookup(item_emb_table, self.test_item)\r\n        self.test_logits = tf.matmul(seq_emb, tf.transpose(test_item_emb))\r\n        self.test_logits = tf.reshape(self.test_logits, [tf.shape(self.input_seq)[0], args.maxlen, -1])\r\n        self.test_logits = self.test_logits[:, -1, :]\r\n\r\n        # prediction layer\r\n        self.pos_logits = tf.reduce_sum(pos_emb * seq_emb, -1)\r\n        \r\n#        print(neg_emb.shape)\r\n        tmp_seq_emb = tf.reshape(seq_emb, [-1,1,args.hidden_units])\r\n        neg_emb = tf.reshape(neg_emb, 
[-1,num_neg, args.hidden_units])\r\n        self.neg_logits = tf.reduce_sum(neg_emb * tmp_seq_emb, -1)\r\n        \r\n        self.neg_logits = tf.reshape(self.neg_logits, [tf.shape(self.input_seq)[0] * args.maxlen, num_neg])\r\n        # ignore padding items (0)\r\n        \r\n        istarget = tf.reshape(tf.to_float(tf.not_equal(pos, 0)), [tf.shape(self.input_seq)[0] * args.maxlen])\r\n        \r\n        # self.pos_logits = tf.reshape(self.pos_logits, [tf.shape(self.input_seq)[0] * args.maxlen, 1])\r\n        # err = self.pos_logits - self.neg_logits \r\n        # self.loss = tf.reduce_sum(\r\n        #     -tf.reduce_sum(tf.log(tf.sigmoid(err) + 1e-24), axis=-1) * istarget\r\n        # ) / tf.reduce_sum(istarget)\r\n        \r\n        self.loss = tf.reduce_sum(\r\n            - tf.log(tf.sigmoid(self.pos_logits) + 1e-24) * istarget -\r\n            tf.reduce_sum(tf.log(1 - tf.sigmoid(self.neg_logits) + 1e-24), axis=-1) * istarget\r\n        ) / tf.reduce_sum(istarget)\r\n        \r\n        reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)\r\n        self.loss += sum(reg_losses)\r\n\r\n        tf.summary.scalar('loss', self.loss)\r\n        self.auc = tf.reduce_sum(\r\n            ((tf.sign(self.pos_logits - self.neg_logits) + 1) / 2) * istarget\r\n        ) / tf.reduce_sum(istarget)\r\n\r\n        if reuse is None:\r\n            tf.summary.scalar('auc', self.auc)\r\n            self.global_step = tf.Variable(0, name='global_step', trainable=False)\r\n            self.lr = tf.train.exponential_decay(args.lr,\r\n                                self.global_step, dec_step, 0.5, staircase=True)\r\n            self.optimizer = tf.train.AdamOptimizer(learning_rate=self.lr, beta2=0.98)\r\n\r\n            self.train_op = self.optimizer.minimize(self.loss, global_step=self.global_step)\r\n        else:\r\n            tf.summary.scalar('test_auc', self.auc)\r\n\r\n        self.merged = tf.summary.merge_all()\r\n\r\n    def predict(self, sess, u, seq, item_idx):\r\n        return sess.run(self.test_logits,\r\n                        {self.u: u, self.input_seq: seq, self.test_item: item_idx, self.is_training: False})"
  },
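The training objective built in `model2.py` is a per-position binary cross-entropy: every non-padding position contributes `-log σ(pos_logit)` for the observed next item plus `-Σ log(1 − σ(neg_logit))` over the `num_neg` sampled negatives, with `istarget` masking out padding positions. A numpy sketch of the same computation on toy logits (illustrative, outside the TF graph):

```python
import numpy as np

def sas_loss(pos_logits, neg_logits, istarget, eps=1e-24):
    """pos_logits: (B*T,); neg_logits: (B*T, num_neg); istarget: (B*T,) 0/1 mask."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos_term = -np.log(sig(pos_logits) + eps) * istarget          # push true item up
    neg_term = -np.sum(np.log(1.0 - sig(neg_logits) + eps), axis=-1) * istarget  # push negatives down
    return (pos_term + neg_term).sum() / istarget.sum()           # average over real positions

pos = np.array([2.0, -1.0, 0.0])
neg = np.array([[-2.0, -3.0], [1.0, 0.5], [0.0, 0.0]])
mask = np.array([1.0, 1.0, 0.0])   # last position is padding and contributes nothing
print(sas_loss(pos, neg, mask))
```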
  {
    "path": "code/3_NN/modules.py",
    "content": "# -*- coding: utf-8 -*-\n#/usr/bin/python2\n'''\nJune 2017 by kyubyong park. \nkbpark.linguist@gmail.com.\nhttps://www.github.com/kyubyong/transformer\n'''\n\nfrom __future__ import print_function\nimport tensorflow as tf\nimport numpy as np\n\n\ndef positional_encoding(dim, sentence_length, dtype=tf.float32):\n\n    encoded_vec = np.array([pos/np.power(10000, 2*i/dim) for pos in range(sentence_length) for i in range(dim)])\n    encoded_vec[::2] = np.sin(encoded_vec[::2])\n    encoded_vec[1::2] = np.cos(encoded_vec[1::2])\n\n    return tf.convert_to_tensor(encoded_vec.reshape([sentence_length, dim]), dtype=dtype)\n\ndef normalize(inputs, \n              epsilon = 1e-8,\n              scope=\"ln\",\n              reuse=None):\n    '''Applies layer normalization.\n    \n    Args:\n      inputs: A tensor with 2 or more dimensions, where the first dimension has\n        `batch_size`.\n      epsilon: A floating number. A very small number for preventing ZeroDivision Error.\n      scope: Optional scope for `variable_scope`.\n      reuse: Boolean, whether to reuse the weights of a previous layer\n        by the same name.\n      \n    Returns:\n      A tensor with the same shape and data dtype as `inputs`.\n    '''\n    with tf.variable_scope(scope, reuse=reuse):\n        inputs_shape = inputs.get_shape()\n        params_shape = inputs_shape[-1:]\n    \n        mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)\n        beta= tf.Variable(tf.zeros(params_shape))\n        gamma = tf.Variable(tf.ones(params_shape))\n        normalized = (inputs - mean) / ( (variance + epsilon) ** (.5) )\n        outputs = gamma * normalized + beta\n        \n    return outputs\n\ndef embedding(inputs, \n              vocab_size, \n              num_units, \n              zero_pad=True, \n              scale=True,\n              l2_reg=0.0,\n              scope=\"embedding\", \n              with_t=False,\n              trainable=True,\n              reuse=None):\n    '''Embeds a given tensor.\n\n    Args:\n      inputs: A `Tensor` with type `int32` or `int64` containing the ids\n         to be looked up in `lookup table`.\n      vocab_size: An int. Vocabulary size.\n      num_units: An int. Number of embedding hidden units.\n      zero_pad: A boolean. If True, all the values of the fist row (id 0)\n        should be constant zeros.\n      scale: A boolean. If True. the outputs is multiplied by sqrt num_units.\n      scope: Optional scope for `variable_scope`.\n      reuse: Boolean, whether to reuse the weights of a previous layer\n        by the same name.\n\n    Returns:\n      A `Tensor` with one more rank than inputs's. The last dimensionality\n        should be `num_units`.\n        \n    For example,\n    \n    ```\n    import tensorflow as tf\n    \n    inputs = tf.to_int32(tf.reshape(tf.range(2*3), (2, 3)))\n    outputs = embedding(inputs, 6, 2, zero_pad=True)\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        print sess.run(outputs)\n    >>\n    [[[ 0.          0.        
]\n      [ 0.09754146  0.67385566]\n      [ 0.37864095 -0.35689294]]\n\n     [[-1.01329422 -1.09939694]\n      [ 0.7521342   0.38203377]\n      [-0.04973143 -0.06210355]]]\n    ```\n    \n    ```\n    import tensorflow as tf\n    \n    inputs = tf.to_int32(tf.reshape(tf.range(2*3), (2, 3)))\n    outputs = embedding(inputs, 6, 2, zero_pad=False)\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        print sess.run(outputs)\n    >>\n    [[[-0.19172323 -0.39159766]\n      [-0.43212751 -0.66207761]\n      [ 1.03452027 -0.26704335]]\n\n     [[-0.11634696 -0.35983452]\n      [ 0.50208133  0.53509563]\n      [ 1.22204471 -0.96587461]]]    \n    ```    \n    '''\n    with tf.variable_scope(scope, reuse=reuse):\n        lookup_table = tf.get_variable('lookup_table',\n                                       dtype=tf.float32,\n                                       shape=[vocab_size, num_units],\n                                       initializer=tf.contrib.layers.xavier_initializer(),\n                                       regularizer=tf.contrib.layers.l2_regularizer(l2_reg),\n                                       trainable=trainable\n                                       )\n        if zero_pad:\n            lookup_table = tf.concat((tf.zeros(shape=[1, num_units]),\n                                      lookup_table[1:, :]), 0)\n        outputs = tf.nn.embedding_lookup(lookup_table, inputs)\n        \n        if scale:\n            outputs = outputs * (num_units ** 0.5) \n    if with_t: return outputs,lookup_table\n    else: return outputs\n\n\ndef multihead_attention(queries, \n                        keys, \n                        num_units=None, \n                        num_heads=8, \n                        dropout_rate=0,\n                        is_training=True,\n                        causality=False,\n                        scope=\"multihead_attention\", \n                        reuse=None,\n                        with_qk=False):\n    '''Applies multihead attention.\n    \n    Args:\n      queries: A 3d tensor with shape of [N, T_q, C_q].\n      keys: A 3d tensor with shape of [N, T_k, C_k].\n      num_units: A scalar. Attention size.\n      dropout_rate: A floating point number.\n      is_training: Boolean. Controller of mechanism for dropout.\n      causality: Boolean. If true, units that reference the future are masked. \n      num_heads: An int. 
Number of heads.\n      scope: Optional scope for `variable_scope`.\n      reuse: Boolean, whether to reuse the weights of a previous layer\n        by the same name.\n        \n    Returns\n      A 3d tensor with shape of (N, T_q, C)  \n    '''\n    with tf.variable_scope(scope, reuse=reuse):\n        # Set the fall back option for num_units\n        if num_units is None:\n            num_units = queries.get_shape().as_list[-1]\n        \n        # Linear projections\n        # Q = tf.layers.dense(queries, num_units, activation=tf.nn.relu) # (N, T_q, C)\n        # K = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)\n        # V = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)\n        Q = tf.layers.dense(queries, num_units, activation=None) # (N, T_q, C)\n        K = tf.layers.dense(keys, num_units, activation=None) # (N, T_k, C)\n        V = tf.layers.dense(keys, num_units, activation=None) # (N, T_k, C)\n        \n        # Split and concat\n        Q_ = tf.concat(tf.split(Q, num_heads, axis=2), axis=0) # (h*N, T_q, C/h) \n        K_ = tf.concat(tf.split(K, num_heads, axis=2), axis=0) # (h*N, T_k, C/h) \n        V_ = tf.concat(tf.split(V, num_heads, axis=2), axis=0) # (h*N, T_k, C/h) \n\n        # Multiplication\n        outputs = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1])) # (h*N, T_q, T_k)\n        \n        # Scale\n        outputs = outputs / (K_.get_shape().as_list()[-1] ** 0.5)\n        \n        # Key Masking\n        key_masks = tf.sign(tf.abs(tf.reduce_sum(keys, axis=-1))) # (N, T_k)\n        key_masks = tf.tile(key_masks, [num_heads, 1]) # (h*N, T_k)\n        key_masks = tf.tile(tf.expand_dims(key_masks, 1), [1, tf.shape(queries)[1], 1]) # (h*N, T_q, T_k)\n        \n        paddings = tf.ones_like(outputs)*(-2**32+1)\n        outputs = tf.where(tf.equal(key_masks, 0), paddings, outputs) # (h*N, T_q, T_k)\n  \n        # Causality = Future blinding\n        if causality:\n            diag_vals = tf.ones_like(outputs[0, :, :]) # (T_q, T_k)\n            tril = tf.linalg.LinearOperatorLowerTriangular(diag_vals).to_dense() # (T_q, T_k)\n            masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(outputs)[0], 1, 1]) # (h*N, T_q, T_k)\n   \n            paddings = tf.ones_like(masks)*(-2**32+1)\n            outputs = tf.where(tf.equal(masks, 0), paddings, outputs) # (h*N, T_q, T_k)\n  \n        # Activation\n        outputs = tf.nn.softmax(outputs) # (h*N, T_q, T_k)\n         \n        # Query Masking\n        query_masks = tf.sign(tf.abs(tf.reduce_sum(queries, axis=-1))) # (N, T_q)\n        query_masks = tf.tile(query_masks, [num_heads, 1]) # (h*N, T_q)\n        query_masks = tf.tile(tf.expand_dims(query_masks, -1), [1, 1, tf.shape(keys)[1]]) # (h*N, T_q, T_k)\n        outputs *= query_masks # broadcasting. 
(N, T_q, C)\n          \n        # Dropouts\n        outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=tf.convert_to_tensor(is_training))\n               \n        # Weighted sum\n        outputs = tf.matmul(outputs, V_) # ( h*N, T_q, C/h)\n        \n        # Restore shape\n        outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2 ) # (N, T_q, C)\n              \n        # Residual connection\n        outputs += queries\n              \n        # Normalize\n        #outputs = normalize(outputs) # (N, T_q, C)\n \n    if with_qk: return Q,K\n    else: return outputs\n\ndef feedforward(inputs, \n                num_units=[2048, 512],\n                scope=\"multihead_attention\", \n                dropout_rate=0.2,\n                is_training=True,\n                reuse=None):\n    '''Point-wise feed forward net.\n    \n    Args:\n      inputs: A 3d tensor with shape of [N, T, C].\n      num_units: A list of two integers.\n      scope: Optional scope for `variable_scope`.\n      reuse: Boolean, whether to reuse the weights of a previous layer\n        by the same name.\n        \n    Returns:\n      A 3d tensor with the same shape and dtype as inputs\n    '''\n    with tf.variable_scope(scope, reuse=reuse):\n        # Inner layer\n        params = {\"inputs\": inputs, \"filters\": num_units[0], \"kernel_size\": 1, #'padding':'same',\n                  \"activation\": tf.nn.relu, \"use_bias\": True}\n        outputs = tf.layers.conv1d(**params)\n        outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=tf.convert_to_tensor(is_training))\n        # Readout layer\n        params = {\"inputs\": outputs, \"filters\": num_units[1], \"kernel_size\": 1, #'padding':'same',\n                  \"activation\": None, \"use_bias\": True}\n        outputs = tf.layers.conv1d(**params)\n        outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=tf.convert_to_tensor(is_training))\n        \n        # Residual connection\n        outputs += inputs\n        \n        # Normalize\n        #outputs = normalize(outputs)\n    \n    return outputs\n"
  },
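The `causality` branch of `multihead_attention` blinds each query position to future keys by keeping only the lower triangle of the (T_q, T_k) score matrix and filling the rest with a large negative constant before the softmax. The same masking step in plain numpy (toy scores, illustrative only):

```python
import numpy as np

T = 4
scores = np.random.randn(T, T)            # raw attention scores (T_q, T_k)
tril = np.tril(np.ones((T, T)))           # lower-triangular mask, as in the TF code
masked = np.where(tril == 0, -2**32 + 1, scores)  # positions attending to the future get -inf-ish
weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax; upper triangle ~ 0
print(np.round(weights, 3))
```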
  {
    "path": "code/3_NN/sampler2.py",
    "content": "import numpy as np\r\nfrom multiprocessing import Process, Queue\r\nimport random\r\n\r\ndef random_neq(l, r, s, num_neg):\r\n    negs = []\r\n    for i in range(num_neg):\r\n        t = np.random.randint(l, r)\r\n        while t in s:\r\n            t = np.random.randint(l, r)\r\n        negs.append(t)\r\n            \r\n    return negs\r\n\r\n\r\ndef sample_function(user_train, usernum, itemnum, batch_size, maxlen, num_neg, \r\n                    id2user, user2idmap2, result_queue, SEED):\r\n    def sample():\r\n        \r\n#        num_neg = 2\r\n        user = np.random.randint(1, usernum + 1)\r\n        while len(user_train[user]) <= 1: user = np.random.randint(1, usernum + 1)\r\n\r\n        seq = np.zeros([maxlen], dtype=np.int32)\r\n        pos = np.zeros([maxlen], dtype=np.int32)\r\n        neg = np.zeros([maxlen, num_neg], dtype=np.int32)\r\n#        nxt = user_train[user][-1]\r\n        idx = maxlen - 1\r\n        \r\n        seq_ = user_train[user]\r\n        st = 0\r\n        if len(seq_) > (maxlen+1) :\r\n            st = np.random.randint(0, len(seq_)-maxlen-1)\r\n        seq_ = seq_[st:st+(maxlen+1)]\r\n        nxt = seq_[-1]\r\n        # nexts = [nxt]\r\n        ts = set(seq_)\r\n        \r\n        for i in reversed(seq_[:-1]):\r\n            seq[idx] = i\r\n            pos[idx] = nxt\r\n            if nxt != 0: neg[idx, :] = random_neq(1, itemnum + 1, ts, num_neg)\r\n            nxt = i\r\n            # nexts.append(i)\r\n            # nxt = random.choice(nexts)\r\n            \r\n            idx -= 1\r\n            if idx == -1: break\r\n        \r\n        user = id2user[user]\r\n        # user = user2idmap2[int(user.split('_')[-1])]\r\n        user = user2idmap2[user[2:]]\r\n        return (user, seq, pos, neg)\r\n\r\n    np.random.seed(SEED)\r\n    while True:\r\n        one_batch = []\r\n        for i in range(batch_size):\r\n            one_batch.append(sample())\r\n\r\n        result_queue.put(zip(*one_batch))\r\n\r\n\r\nclass WarpSampler(object):\r\n    def __init__(self, User, usernum, itemnum, id2user, user2idmap2,\r\n                 num_neg=20, batch_size=64, maxlen=10, n_workers=1):\r\n        self.result_queue = Queue(maxsize=n_workers * 10)\r\n        self.processors = []\r\n        for i in range(n_workers):\r\n            self.processors.append(\r\n                Process(target=sample_function, args=(User,\r\n                                                      usernum,\r\n                                                      itemnum,\r\n                                                      batch_size,\r\n                                                      maxlen,\r\n                                                      num_neg,\r\n                                                      id2user, user2idmap2,\r\n                                                      self.result_queue,\r\n                                                      np.random.randint(2e9)\r\n                                                      )))\r\n            self.processors[-1].daemon = True\r\n            self.processors[-1].start()\r\n\r\n    def next_batch(self):\r\n        return self.result_queue.get()\r\n\r\n    def close(self):\r\n        for p in self.processors:\r\n            p.terminate()\r\n            p.join()\r\n"
  },
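For every sampled user, `sample_function` right-aligns the history into a fixed window: `seq[t]` holds a clicked item and `pos[t]` the click that followed it, while `neg[t]` holds `num_neg` uniform negatives drawn outside the user's item set via `random_neq`. A worked toy example of the alignment (hypothetical three-click history, maxlen=5):

```python
import numpy as np

history = [11, 12, 13]          # user clicked 11 -> 12 -> 13
maxlen = 5
seq = np.zeros(maxlen, dtype=np.int32)
pos = np.zeros(maxlen, dtype=np.int32)

idx, nxt = maxlen - 1, history[-1]
for i in reversed(history[:-1]):
    seq[idx], pos[idx] = i, nxt   # at each position, the target is the following click
    nxt = i
    idx -= 1

print(seq)  # [ 0  0  0 11 12]
print(pos)  # [ 0  0  0 12 13]
# neg would hold num_neg random item ids not in {11, 12, 13} at each non-zero position.
```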
  {
    "path": "code/3_NN/sas_rec.py",
    "content": "#!/usr/bin/env python\r\n# -*- coding:utf-8 -*-\r\n# author:juzphy\r\n# datetime:2020/4/26 3:46 下午\r\nimport pandas as pd\r\nfrom tqdm import tqdm\r\nimport tensorflow as tf\r\nfrom sampler2 import WarpSampler\r\nfrom model2 import Model\r\nimport os\r\nfrom util import *\r\nimport numpy as np\r\nimport argparse\r\nfrom config import config\r\n\r\ndef get_data(now_phase, train_path, test_path, kind=1):\r\n    click_train = pd.DataFrame()\r\n    click_test = pd.DataFrame()\r\n    for c in range(now_phase + 1):\r\n        if kind == 1:\r\n            click_tmp = pd.read_csv(os.path.join(train_path, f'underexpose_train_click-{c}.csv'), header=None,\r\n                        names=['user_id', 'item_id', 'time'],converters={'time':np.float64})\r\n            click_test_tmp = pd.read_csv(os.path.join(test_path, f'underexpose_test_click-{c}.csv'), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n            \r\n        elif kind == 2:\r\n            click_tmp = pd.read_csv(os.path.join(train_path, 'offline' + f'_train_click-{c}.csv'), header=None,\r\n                        names=['user_id', 'item_id', 'time'],converters={'time':np.float64})\r\n            click_test_tmp = pd.read_csv(os.path.join(test_path, 'offline' + f'_test_click-{c}.csv'), header=None,\r\n                                         names=['user_id', 'item_id', 'time'])\r\n            \r\n        elif kind == 3:\r\n            click_tmp = pd.read_csv(os.path.join(train_path, 'model_1' + f'_train_click-{c}.csv'), header=None,\r\n                        names=['user_id', 'item_id', 'time'],converters={'time':np.float64})            \r\n            click_test_tmp = pd.read_csv(os.path.join(test_path, 'model_1' + f'_test_click-{c}.csv'), header=None,\r\n                                         names=['user_id', 'item_id', 'time'])\r\n        \r\n        # click_tmp['user_id2'] = click_tmp['user_id']\r\n        click_tmp['user_id2'] ='{}_'.format(c) + click_tmp['user_id'].astype(str)\r\n        click_tmp['user_id'] = '1_{}_'.format(c) + click_tmp['user_id'].astype(str)\r\n        \r\n            \r\n        # click_test_tmp['user_id2'] = click_test_tmp['user_id']\r\n        click_test_tmp['user_id2'] = '{}_'.format(c) + click_test_tmp['user_id'].astype(str)\r\n        click_test_tmp['user_id'] = '0_{}_'.format(c) + click_test_tmp['user_id'].astype(str)\r\n        click_train = click_train.append(click_tmp)\r\n        click_test = click_test.append(click_test_tmp)\r\n    \r\n    # click_train.drop_duplicates(['item_id','time', 'user_id2'], inplace=True)\r\n    \r\n    all_click = click_train.append(click_test)\r\n    num_items = all_click['item_id'].nunique()\r\n    num_users = all_click['user_id'].nunique()\r\n    num_users2 = all_click['user_id2'].nunique()\r\n    item2idmap = dict(zip(all_click['item_id'].unique(), range(1, 1 + num_items)))\r\n    user2idmap = dict(zip(all_click['user_id'].unique(), range(1, 1 + num_users)))\r\n    user2idmap2 = dict(zip(all_click['user_id2'].unique(), range(1, 1 + num_users2)))\r\n    all_click['map_user'] = all_click['user_id'].map(user2idmap)\r\n    all_click['map_item'] = all_click['item_id'].map(item2idmap)\r\n    item_deg = all_click['map_item'].value_counts().to_dict()\r\n    \r\n    use_train, use_valid, use_test = {}, {}, {}\r\n    \r\n    all_click = all_click.sort_values('time').groupby('user_id')['map_item'].apply(list).to_dict()\r\n\r\n    for reviewerID, hist in tqdm(all_click.items()):\r\n        is_train = 
reviewerID.split('_')[0]\r\n        phase = reviewerID.split('_')[1]\r\n        user = user2idmap[reviewerID]\r\n            \r\n        if is_train == '1':\r\n            # if phase == str(now_phase):\r\n            if phase in ['7', '8', '9']:\r\n                use_train[user] = hist[:-1]\r\n                use_valid[user] = [hist[-1]]\r\n            else:\r\n                use_train[user] = hist\r\n                use_valid[user] = []                \r\n        else:\r\n            use_train[user] = hist\r\n            use_valid[user] = []\r\n            #if phase in ['7', '8', '9']:\r\n            use_test[user] = hist\r\n\r\n    id2item = dict()\r\n    for tmp_key in item2idmap.keys():\r\n        id2item[item2idmap[tmp_key]] = tmp_key\r\n    id2user = dict()\r\n    for tmp_key in user2idmap.keys():\r\n        id2user[user2idmap[tmp_key]] = tmp_key\r\n    \r\n    emb = pd.read_csv('item_feat.csv')\r\n    emb['item_id'] = emb['item_id'].map(item2idmap)\r\n    emb = emb.sort_values('item_id', ascending=True).reset_index(drop=True)\r\n    emb = emb[emb.columns[2:]].values\r\n    return use_train, use_valid, num_items, num_users, id2item, id2user, \\\r\n        item_deg, emb, use_test, user2idmap2, num_users2\r\n        \r\ndef eval_model(model, sess, train_data, eval_date, item_set, item_deg, idx2user, args, valid_array_):\r\n    res = {}\r\n    answers = {}\r\n    [user, user_array, seqs_array, label_array] = valid_array_\r\n    # eval_date = generate_vail_date(train_data, eval_date, 256)\r\n    \r\n    for u, seq,label in tqdm(gen(user, user_array, seqs_array, label_array, 32)):\r\n        preds = model.predict(sess, u, seq, item_set)\r\n        arg_sort =np.argsort(preds, -1)[:, ::-1]\r\n        for i in range(len(u)):\r\n            user_idx = u[i][0]\r\n            label_item = label[i][0]\r\n            # user = idx2user[user_idx]\r\n            phase = '4'\r\n            res.setdefault(phase, {})\r\n            answers.setdefault(phase, {})\r\n            _pred_top_50 = item_set[arg_sort[i][:50]]\r\n            res[phase][user_idx] = _pred_top_50.tolist()\r\n            answers[phase][user_idx] = (label_item, item_deg[label_item])\r\n    finally_score, phase_score = evalation(res, answers, None)\r\n    return finally_score, phase_score\r\n \r\ndef evaluate_each_phase(predictions, answers, recall_num=50):\r\n    list_item_degress = []\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        list_item_degress.append(item_degree)\r\n    list_item_degress.sort()\r\n    median_item_degree = list_item_degress[len(list_item_degress) // 2]\r\n\r\n    num_cases_full = 0.0\r\n    ndcg_50_full = 0.0\r\n    ndcg_50_half = 0.0\r\n    num_cases_half = 0.0\r\n    hitrate_50_full = 0.0\r\n    hitrate_50_half = 0.0\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        rank = 0\r\n        while rank < recall_num and predictions[user_id][rank] != item_id:\r\n            rank += 1\r\n        num_cases_full += 1.0\r\n        if rank < recall_num:\r\n            ndcg_50_full += 1.0 / np.log2(rank + 2.0)\r\n            hitrate_50_full += 1.0\r\n        if item_degree <= median_item_degree:\r\n            num_cases_half += 1.0\r\n            if rank < recall_num:\r\n                ndcg_50_half += 1.0 / np.log2(rank + 2.0)\r\n                hitrate_50_half += 1.0\r\n    ndcg_50_full /= num_cases_full\r\n    hitrate_50_full /= num_cases_full\r\n    ndcg_50_half /= num_cases_half\r\n    hitrate_50_half /= num_cases_half\r\n    
return np.array([ndcg_50_full, ndcg_50_half,\r\n                     hitrate_50_full, hitrate_50_half], dtype=np.float32)\r\n\r\n\r\ndef evalation(res, answers, item_deg=None, recall_num=50):\r\n    if item_deg is not None:\r\n        _ = {}\r\n        for phase in answers.keys():\r\n            _.setdefault(phase, {})\r\n            for k, v in answers[phase].items():\r\n                _[phase][k] = (v, item_deg[v])\r\n        answers = _\r\n    finally_score = np.zeros(4, dtype=np.float32)\r\n    phase_score = {}\r\n    for phase in res.keys():\r\n        # We sum the scores from all the phases, instead of averaging them.\r\n        score = evaluate_each_phase(res[phase], answers[phase], recall_num)\r\n        print(f\"phase: {phase},  hitrate_full:{score[2]}, ndcg_full:{score[0]}, hitrate_half:{score[3]}, ndcg_half:{score[1]}\")\r\n        finally_score += score\r\n        phase_score[phase] = str(score.tolist())\r\n    print(f\"phase: all,  hitrate_full:{finally_score[2]}, ndcg_full:{finally_score[0]}, hitrate_half:{finally_score[3]}, ndcg_half:{finally_score[1]}\")\r\n    return finally_score, phase_score\r\n\r\n\r\ndef generate_vail_date(train, valid, id2user, user2idmap2):\r\n    user = []\r\n    seqs = []\r\n    labels = []\r\n    for user_idx, label_item in tqdm(valid.items(), leave=False, total=len(valid), desc=\"[EVAL] >> \"):\r\n        if len(label_item) < 1:\r\n            continue\r\n        seq = train[user_idx]\r\n        seq_len = len(seq)\r\n        if seq_len == 0:\r\n            continue\r\n        if seq_len <= args.maxlen:\r\n            seq_ = [0] * (args.maxlen - seq_len) + seq\r\n        else:\r\n            seq_ = seq[-args.maxlen:]\r\n        seqs.append(seq_)\r\n        \r\n        u = id2user[user_idx]\r\n        # u = user2idmap2[u.split('_')[-1]]\r\n        u = user2idmap2[u[2:]]\r\n        \r\n        user.append([u])\r\n        \r\n        labels.append(label_item)\r\n    user_array = np.array(user)\r\n    seqs_array = np.array(seqs)\r\n    label_array = np.array(labels)\r\n    return user, user_array, seqs_array, label_array\r\n\r\n\r\ndef gen(user, user_array, seqs_array, label_array, batch_size):\r\n    # Yield full mini-batches, then the remainder; the remainder is skipped when the\r\n    # size divides evenly, and this is safe when there are fewer samples than one batch.\r\n    n_full = len(user) // batch_size\r\n    for i in range(n_full):\r\n        yield (user_array[i*batch_size:(i+1)*batch_size], seqs_array[i*batch_size:(i+1)*batch_size], label_array[i*batch_size:(i+1)*batch_size])\r\n    if n_full * batch_size < len(user):\r\n        yield (user_array[n_full*batch_size:], seqs_array[n_full*batch_size:], label_array[n_full*batch_size:])\r\n\r\n\r\nclass Args:\r\n    lr = 0.002\r\n    maxlen = 50\r\n    hidden_units = 256\r\n    num_blocks = 1\r\n    dropout_rate = 0.5\r\n    num_heads = 2\r\n    l2_emb = 0.0\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    now_phase = 9\r\n\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument(\"--kind\", type=int, default=0)\r\n    parser.add_argument(\"--train\", type=int, default=0)\r\n    parser.add_argument(\"--test\", type=int, default=0)\r\n    parser.add_argument(\"--valid\", type=int, default=0)\r\n    \r\n    args = parser.parse_args()\r\n    \r\n    kind = int(args.kind)\r\n    if kind == 1:\r\n        read_path = config.online_item_file\r\n        save_path = config.save_path_online\r\n        model_base_path = 'ckpt'\r\n        train_path = config.train_path\r\n        test_path = config.test_path\r\n    elif kind == 2:\r\n        read_path = config.offline_item_file\r\n        save_path = config.save_path_offline\r\n        model_base_path = 'ckpt2'\r\n        train_path = config.offline_path\r\n        test_path = config.offline_path\r\n    elif 
kind == 3:\r\n        read_path = config.model1_item_file\r\n        save_path = config.save_path_model1\r\n        model_base_path = 'ckpt3'\r\n        train_path = config.model1_path\r\n        test_path = config.model1_path\r\n    \r\n    train, valid, n_items, n_users, id2item, id2user, \\\r\n        item_deg, emb, use_test, user2idmap2, num_users2 = get_data(now_phase, train_path,\r\n                                                                    test_path, kind)\r\n    emb = np.concatenate((np.zeros((1, 256)), emb), axis=0) / 25  # row 0 is padding; scale features down before initializing the embedding table\r\n    usr_emb = 0\r\n    \r\n    print('Reading data done.')\r\n    train_flag = args.train\r\n    valid_flag = args.valid\r\n    test_flag = args.test\r\n    test_flag2 = 0\r\n    \r\n    num_neg = 20\r\n    batch_size = 256\r\n    args = Args()  # replace the parsed CLI namespace with the model hyper-parameters; the flags were extracted above\r\n    num_batch = len(train) // batch_size\r\n    num_epochs = 75\r\n    item_set = np.arange(1, n_items+1)\r\n    \r\n    tf_config = tf.ConfigProto()  # named tf_config so it does not shadow the imported `config`\r\n    tf_config.gpu_options.allow_growth = True\r\n    tf_config.allow_soft_placement = True\r\n    sess = tf.Session(config=tf_config)\r\n    print(n_items)\r\n    sampler = WarpSampler(train, n_users, n_items, id2user, user2idmap2, num_neg=num_neg, \r\n                          batch_size=batch_size, maxlen=args.maxlen, n_workers=3)\r\n    \r\n    model = Model(num_users2, n_items, args, emb, num_neg, dec_step=num_batch*25,\r\n                  emb_usr=usr_emb)\r\n    \r\n    sess.run(tf.global_variables_initializer())\r\n    sess.run(tf.assign(model.item_emb_table, model.emb_item))\r\n    # sess.run(tf.assign(model.user_emb_table, model.usr_emb))\r\n    \r\n    user, user_array, seqs_array, label_array = generate_vail_date(train, valid, id2user, user2idmap2)\r\n    valid_array = [user, user_array, seqs_array, label_array]\r\n    idx = np.random.choice(len(user), 5000, replace=False)\r\n    user2, user_array2, seqs_array2, label_array2 = [], [], [], []\r\n    \r\n    for i in range(len(idx)):\r\n        user2.append(user[idx[i]])\r\n        user_array2.append(user_array[idx[i]])\r\n        seqs_array2.append(seqs_array[idx[i]])\r\n        label_array2.append(label_array[idx[i]])\r\n    valid_array2 = [user2, user_array2, seqs_array2, label_array2]\r\n    \r\n    saver = tf.train.Saver()\r\n    ckpt_path = os.path.join(model_base_path, 'model.ckpt')\r\n    \r\n    if not os.path.exists(model_base_path):\r\n        os.mkdir(model_base_path)\r\n#    saver.restore(sess, ckpt_path)\r\n    \r\n    if train_flag:\r\n        finally_score = [0]\r\n        best_score = 0\r\n        for epoch in range(1, num_epochs + 1):\r\n            loss_ = []\r\n            for step in tqdm(range(num_batch), total=num_batch, ncols=70, leave=False, unit='b'):\r\n                u, seq, pos, neg = sampler.next_batch()\r\n                loss, _ = sess.run([model.loss, model.train_op],\r\n                                   {model.u: u, model.input_seq: seq, model.pos: pos, model.neg: neg,\r\n                                    model.is_training: True})\r\n                loss_.append(loss)\r\n            print('epoch:%d, loss:%.3f' % (epoch, np.mean(loss_)))\r\n            if epoch % 25 == 0:\r\n                print(\"[EVAL] valid...\")\r\n                finally_score, phase_score = eval_model(model, sess, train, valid, \r\n                                                        item_set, item_deg, id2user, args, valid_array2)\r\n                \r\n                if finally_score[0] > best_score:\r\n                    best_score = finally_score[0]\r\n                    saver.save(sess, ckpt_path)  # keep the best-scoring checkpoint; do not clobber the prediction save_path\r\n                \r\n        ckpt_path = os.path.join(model_base_path, 'model_last.ckpt')\r\n        saver.save(sess, ckpt_path)\r\n\r\n    sampler.close()\r\n    print(\"Done!\")\r\n    \r\n    if valid_flag:\r\n        ckpt_path = os.path.join(model_base_path, 'model.ckpt')\r\n        saver.restore(sess, ckpt_path)\r\n        finally_score, phase_score = eval_model(model, sess, train, valid, \r\n                                                item_set, item_deg, id2user, args, valid_array)\r\n    if test_flag2:\r\n        ckpt_path = os.path.join(model_base_path, 'model_last.ckpt')\r\n        saver.restore(sess, ckpt_path)\r\n        evaluate2(model, [use_test, n_users, n_items], user2idmap2, \r\n                  args, sess, id2item, id2user)\r\n        from evaulation import evaluate_\r\n        evaluate_('pred_valid.csv', answer_fname='model_1/model_1_debias_track_answer.csv')\r\n\r\n    if test_flag:\r\n        # restore model\r\n        ckpt_path = os.path.join(model_base_path, 'model_last.ckpt')\r\n        saver.restore(sess, ckpt_path)\r\n        evaluate5(model, [use_test, n_users, n_items], user2idmap2, \r\n                  args, sess, id2item, id2user, save_path=save_path, read_path=read_path)\r\n"
  },
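`evaluate_each_phase` scores NDCG@50 and hitrate@50 twice: once over all answers ("full") and once over the answers whose clicked item's degree is at or below the median ("half"), which is the rare-item slice the debiasing track emphasizes. A compact, illustrative reimplementation on toy data (not the competition's official scorer):

```python
import numpy as np

def ndcg_hit_at_k(predictions, answers, k=50):
    """predictions: {user: ranked item list}; answers: {user: (item, degree)}.
    Returns [ndcg_full, ndcg_half, hit_full, hit_half]; 'half' means items whose
    degree is <= the median degree, mirroring evaluate_each_phase above."""
    degrees = sorted(d for _, d in answers.values())
    median = degrees[len(degrees) // 2]
    full, half = np.zeros(2), np.zeros(2)   # accumulators: [ndcg, hit]
    n_full = n_half = 0
    for u, (item, deg) in answers.items():
        n_full += 1
        rare = deg <= median
        n_half += rare
        if item in predictions[u][:k]:
            gain = 1.0 / np.log2(predictions[u].index(item) + 2.0)  # DCG of a single relevant item
            full += [gain, 1.0]
            if rare:
                half += [gain, 1.0]
    return [full[0]/n_full, half[0]/max(n_half, 1), full[1]/n_full, half[1]/max(n_half, 1)]

preds = {'u1': [5, 9, 7], 'u2': [3, 1, 2]}
ans = {'u1': (9, 4), 'u2': (2, 40)}          # (clicked item, item degree)
print(ndcg_hit_at_k(preds, ans, k=50))
```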
  {
    "path": "code/3_NN/util.py",
    "content": "import sys\nimport copy\nimport random\nimport numpy as np\nfrom collections import defaultdict\nimport pandas as pd\nfrom tqdm import tqdm\n\n\ndef evaluate6(model, dataset,user2idmap2, args, sess, id2item, id2user, \n              save_path='pred_valid.csv', read_path='all/offline.csv'):\n    [train, usernum, itemnum] = copy.deepcopy(dataset)\n    pred = []\n    item_idx = list(range(1, itemnum + 1))\n    id2itme_list = [id2item[i] for i in item_idx]\n    df2 = pd.read_csv(read_path)\n    item_map = {v:k for (k,v) in id2item.items()}\n     \n    for u in tqdm(train.keys()):\n\n        if len(train[u]) < 1:\n            print(u)\n            continue\n        score = []\n        seq = np.zeros([args.maxlen], dtype=np.int32)\n        idx = args.maxlen - 1\n        for i in reversed(train[u]):\n            seq[idx] = i\n            idx -= 1\n            if idx == -1: break\n        \n        u2 = id2user[u]\n        # u2 = user2idmap2[int(u2.split('_')[-1])]\n        u2 = user2idmap2[u2[2:]]\n        predictions = model.predict(sess, [u2], [seq], item_idx)\n        predictions = predictions[0]\n        idx = np.argsort(predictions)[::-1][:500]\n        # tmp_list = [id2itme_list[idx[i]] for i in range(500)]\n        # score = [predictions[idx[i]] for i in range(500)]\n        tmp_list = []\n        score = []\n        \n        tmp_df = df2[df2['user_id'] == int(id2user[u].split('_')[-1])]['item_id']\n        if len(tmp_df)>0:\n            items = set(tmp_df.values[0][1:-1].split(','))\n            tmp_list_set = set(tmp_list)\n            for tmp_item in items:\n                tmp_ = int(tmp_item)\n                if tmp_ not in tmp_list_set:\n                    tmp_idx = item_map[tmp_]\n                    tmp_list.append(tmp_)\n                    score.append(predictions[tmp_idx-1])\n                       \n        pred.append([id2user[u]] + [tmp_list] + [score])\n        \n    df = pd.DataFrame(pred)\n    df[0] = df[0].apply(lambda x: x.split('_')[-1])\n    df.columns = ['user', 'item', 'score']\n    df.to_csv(save_path, index=None)\n    return df\n\ndef evaluate5(model, dataset,user2idmap2, args, sess, id2item, id2user, \n              save_path='pred_valid.csv', read_path='all/offline.csv'):\n    [train, usernum, itemnum] = copy.deepcopy(dataset)\n    pred = []\n    item_idx = list(range(1, itemnum + 1))\n    id2itme_list = [id2item[i] for i in item_idx]\n    df2 = pd.read_csv(read_path)\n    df2 = df2.groupby('user_id')['item_id'].apply(list).reset_index()\n    \n    item_map = {v:k for (k,v) in id2item.items()}\n     \n    for u in tqdm(train.keys()):\n\n        if len(train[u]) < 1:\n            print(u)\n            continue\n        score = []\n        seq = np.zeros([args.maxlen], dtype=np.int32)\n        idx = args.maxlen - 1\n        for i in reversed(train[u]):\n            seq[idx] = i\n            idx -= 1\n            if idx == -1: break\n        \n        u2 = id2user[u]\n        # u2 = user2idmap2[int(u2.split('_')[-1])]\n        u2 = user2idmap2[u2[2:]]\n        predictions = model.predict(sess, [u2], [seq], item_idx)\n        predictions = predictions[0]\n        idx = np.argsort(predictions)[::-1][:500]\n        tmp_list = [id2itme_list[idx[i]] for i in range(500)]\n        score = [predictions[idx[i]] for i in range(500)]\n        \n        tmp_df = df2[df2['user_id'] == int(id2user[u].split('_')[-1])]['item_id']\n        if len(tmp_df)>0:\n            items = set(tmp_df.values[0]) # [1:-1].split(',')\n            tmp_list_set = set(tmp_list)\n    
        for tmp_item in items:\n                tmp_ = int(tmp_item)\n                if tmp_ not in tmp_list_set:\n                    tmp_idx = item_map[tmp_]\n                    tmp_list.append(tmp_)\n                    score.append(predictions[tmp_idx-1])\n                       \n        pred.append([id2user[u]] + [tmp_list] + [score])\n        \n    df = pd.DataFrame(pred)\n    df[0] = df[0].apply(lambda x: x.split('_')[-1])\n    df.columns = ['user', 'item', 'score']\n    df.to_csv(save_path, index=None)\n    return df\n\n\ndef evaluate4(model, dataset,user2idmap2, args, sess, id2item, id2user, user2idmap3):\n    [train, usernum, itemnum] = copy.deepcopy(dataset)\n    pred = []\n    item_idx = list(range(1, itemnum + 1))\n    id2itme_list = [id2item[i] for i in item_idx]\n    \n    for u in tqdm(train.keys()):\n\n        if len(train[u]) < 1:\n            print(u)\n            continue\n\n        seq = np.zeros([args.maxlen], dtype=np.int32)\n        idx = args.maxlen - 1\n        for i in reversed(train[u]):\n            seq[idx] = i\n            idx -= 1\n            if idx == -1: break\n        \n        u2 = id2user[u]\n        u3 = user2idmap3[int(u2.split('_')[-1])]\n        \n        u2 = user2idmap2[u2[2:]]\n        # \n        predictions = model.predict(sess, [u2], [u3], [seq], item_idx)\n        predictions = predictions[0]\n        idx = np.argsort(predictions)[::-1][:50]\n        tmp_list = [id2itme_list[idx[i]] for i in range(50)]\n        pred.append([id2user[u]] + tmp_list)\n        \n    df = pd.DataFrame(pred)\n    df[0] = df[0].apply(lambda x: x.split('_')[-1])\n    df.to_csv('pred_valid.csv', index=None, header=None)\n    return df\n\n\ndef evaluate3(model, dataset, args, sess, id2item, id2user, time_array):\n    [train, usernum, itemnum] = copy.deepcopy(dataset)\n    pred = []\n    item_idx = list(range(1, itemnum + 1))\n    id2itme_list = [id2item[i] for i in item_idx]\n    \n    for u in tqdm(train.keys()):\n\n        if len(train[u]) < 1:\n            print(u)\n            continue\n        \n        seq = np.zeros([args.maxlen], dtype=np.int32)\n        t = np.zeros([args.maxlen], dtype=np.int32)\n        idx = args.maxlen - 1\n        for i, t_ in zip(reversed(train[u]), reversed(time_array[u])):\n            seq[idx] = i\n            t[idx] = t_\n            idx -= 1\n            if idx == -1: break\n        \n        predictions = model.predict(sess, [u], [seq], item_idx, [t])\n        predictions = predictions[0]\n        idx = np.argsort(predictions)[::-1][:50]\n        tmp_list = [id2itme_list[idx[i]] for i in range(50)]\n        pred.append([id2user[u]] + tmp_list)\n        \n    df = pd.DataFrame(pred)\n    df[0] = df[0].apply(lambda x: x.split('_')[-1])\n    df.to_csv('pred_valid.csv', index=None, header=None)\n    return df\n\n\ndef evaluate2(model, dataset,user2idmap2, args, sess, id2item, id2user, \n              save_path='pred_valid.csv'):\n    [train, usernum, itemnum] = copy.deepcopy(dataset)\n    pred = []\n    item_idx = list(range(1, itemnum + 1))\n    id2itme_list = [id2item[i] for i in item_idx]\n    \n    for u in tqdm(train.keys()):\n\n        if len(train[u]) < 1:\n            print(u)\n            continue\n\n        seq = np.zeros([args.maxlen], dtype=np.int32)\n        idx = args.maxlen - 1\n        for i in reversed(train[u]):\n            seq[idx] = i\n            idx -= 1\n            if idx == -1: break\n        \n        u2 = id2user[u]\n        # u2 = user2idmap2[int(u2.split('_')[-1])]\n        u2 = user2idmap2[u2[2:]]\n   
     predictions = model.predict(sess, [u2], [seq], item_idx)\n        predictions = predictions[0]\n        idx = np.argsort(predictions)[::-1][:50]\n        tmp_list = [id2itme_list[idx[i]] for i in range(50)]\n        pred.append([id2user[u]] + tmp_list)\n        \n    df = pd.DataFrame(pred)\n    df[0] = df[0].apply(lambda x: x.split('_')[-1])\n    df.to_csv(save_path, index=None, header=None)\n    return df\n\ndef evaluate(model, dataset, args, sess, id2item, id2user):\n    [train, usernum, itemnum] = copy.deepcopy(dataset)\n    pred = []\n    item_idx = list(range(1, itemnum + 1))\n    id2itme_list = [id2item[i] for i in item_idx]\n    \n    for u in tqdm(train.keys()):\n\n        if len(train[u]) < 1:\n            print(u)\n            continue\n\n        seq = np.zeros([args.maxlen], dtype=np.int32)\n        idx = args.maxlen - 1\n        for i in reversed(train[u]):\n            seq[idx] = i\n            idx -= 1\n            if idx == -1: break\n\n        predictions = model.predict(sess, [u], [seq], item_idx)\n        predictions = predictions[0]\n        idx = np.argsort(predictions)[::-1][:50]\n        tmp_list = [id2itme_list[idx[i]] for i in range(50)]\n        pred.append([id2user[u]] + tmp_list)\n        \n    df = pd.DataFrame(pred)\n    df[0] = df[0].apply(lambda x: x.split('_')[-1])\n    df.to_csv('pred_valid.csv', index=None, header=None)\n    return df"
  },
  {
    "path": "code/3_Recall/01_Recall-Wu-model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\nimport time\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ndef get_predict(df, pred_col, top_fill, ranknum):  \r\n    top_fill = [int(t) for t in top_fill.split(',')]  \r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]  \r\n    ids = list(df['user_id'].unique())  \r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])  \r\n    fill_df.sort_values('user_id', inplace=True)  \r\n    fill_df['item_id'] = top_fill * len(ids)  \r\n    fill_df[pred_col] = scores * len(ids)  \r\n    df = df.append(fill_df)  \r\n    df.sort_values(pred_col, ascending=False, inplace=True)  \r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')  \r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)  \r\n    df = df[df['rank'] <= ranknum]  \r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',', expand=True).reset_index()  \r\n    return df  \r\n\r\n\r\n# In[3]:\r\n\r\n\r\ndef recommend(sim_item_corr, user_item_dict, user_id, times, item_dict, item_time_dict, top_k, item_num):\r\n    '''\r\n    input:item_sim_list, user_item, uid, 500, 50\r\n    # 用户历史序列中的所有商品均有关联商品,整合这些关联商品,进行相似性排序\r\n    '''\r\n    rank = {}\r\n    interacted_items = user_item_dict[user_id]\r\n    interacted_items = interacted_items[::-1]\r\n    times = times[::-1]\r\n    t0 = times[0]\r\n    for loc, i in enumerate(interacted_items):\r\n        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1]['sim'], reverse=True)[0:top_k]:\r\n            if j not in interacted_items:\r\n                rank.setdefault(j, {'sim': 0,\r\n                                        'item_cf': 0,\r\n                                        'item_cf_weighted': 0,\r\n                                        'time_diff': np.inf,\r\n                                        'loc_diff': np.inf,\r\n                                        # Some feature generated by recall\r\n                                        'time_diff_recall': np.inf,\r\n                                        'time_diff_recall_1': np.inf,\r\n                                        'loc_diff_recall': np.inf,\r\n                                        # Nodesim and Deepsim\r\n                                          'node_sim_max': -1e8,\r\n                                          'node_sim_sum':0,\r\n                                          'deep_sim_max': -1e8,\r\n                                          'deep_sim_sum':0,\r\n                                          })\r\n                t1 = times[loc]\r\n                t2 = item_time_dict[j][0]\r\n                delta_t1 = abs(t0 - t1) * 650000\r\n                delta_t2 = abs(t0 - t2) * 650000\r\n                alpha = max(0.2, 1 / (1 + item_dict[j]))\r\n                beta = max(0.5, (0.9 ** loc))\r\n                theta = max(0.5, 1 / (1 + delta_t1))\r\n                gamma = max(0.5, 1 / (1 + delta_t2))\r\n\r\n                rank[j]['sim'] += wij['sim'] * (alpha ** 2) * (beta) * (theta ** 2) * gamma\r\n                rank[j]['item_cf'] += wij['item_cf']\r\n                rank[j]['item_cf_weighted'] += wij['item_cf_weighted']\r\n                \r\n                if wij['time_diff'] < rank[j]['time_diff']:\r\n  
                  rank[j]['time_diff'] = wij['time_diff']\r\n                if wij['loc_diff'] < rank[j]['loc_diff']:\r\n                    rank[j]['loc_diff'] = wij['loc_diff']\r\n                if delta_t1 < rank[j]['time_diff_recall']:\r\n                    rank[j]['time_diff_recall'] = delta_t1\r\n                if delta_t2 < rank[j]['time_diff_recall_1']:\r\n                    rank[j]['time_diff_recall_1'] = delta_t2\r\n                if loc < rank[j]['loc_diff_recall']:\r\n                    rank[j]['loc_diff_recall'] = loc\r\n                    \r\n                if wij['node_sim_max'] > rank[j]['node_sim_max']:\r\n                    rank[j]['node_sim_max'] = wij['node_sim_max']\r\n                rank[j]['node_sim_sum'] += wij['node_sim_sum'] / wij['item_cf']\r\n                \r\n                if wij['deep_sim_max'] > rank[j]['deep_sim_max']:\r\n                    rank[j]['deep_sim_max'] = wij['deep_sim_max']\r\n                rank[j]['deep_sim_sum'] += wij['deep_sim_sum'] / wij['item_cf']\r\n                \r\n    return sorted(rank.items(), key=lambda d: d[1]['sim'], reverse=True)[:item_num]\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nnow_phase = 9\r\n\r\noffline = \"./user_data/model_1/\"\r\nheader = 'model_1'\r\ninput_path = './user_data/model_1/new_similarity/'\r\noutput_path = './user_data/model_1/new_recall/'\r\n\r\n\r\n# In[5]:\r\n\r\n\r\n# recom_item = []  \r\n\r\n# for c in range(now_phase + 1):  \r\n#     a = time.time()\r\n\r\n#     print('phase:', c)  \r\n    \r\n#     with open(input_path+'itemCF_new'+str(c)+'.pkl','rb') as f:\r\n#         item_sim_list = pickle.load(f)    \r\n\r\n#     with open(input_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n#         user_item = pickle.load(f)                  \r\n              \r\n#     with open(input_path+'item2cnt_new'+str(c)+'.pkl','rb') as f:\r\n#         item_dic = pickle.load(f) \r\n\r\n#     with open(input_path+'userTime'+str(c)+'.pkl','rb') as f:\r\n#         user_time_dict = pickle.load(f)         \r\n        \r\n#     with open(input_path+'itemTime'+str(c)+'.pkl','rb') as f:\r\n#         item_time_dict = pickle.load(f)          \r\n        \r\n#     qtime_test = pd.read_csv(offline + header + '_test_qtime-{}.csv'.format(c), header=None,\r\n#                               names=['user_id', 'item_id', 'time'])\r\n    \r\n    \r\n#     for user in tqdm(qtime_test['user_id'].unique()):\r\n#         if user in user_time_dict:\r\n#             times = user_time_dict[user]\r\n#             rank_item = recommend(item_sim_list, user_item, user, times, item_dic, item_time_dict, 500, 500)\r\n#             for j in rank_item:\r\n#                 recom_item.append([user, int(j[0])] + list(j[1].values()))      \r\n#     gc.collect()\r\nfile = open(input_path + 'recom_item.pkl', 'rb')\r\nrecom_item = pickle.load(file)\r\nfile.close()\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nfor phase in range(now_phase + 1):\r\n    a = time.time()\r\n    history_list = []\r\n    for i in range(now_phase + 1):\r\n        click_train = pd.read_csv(offline + header + '_train_click-{}.csv'.format(i), header=None,\r\n                                  names=['user_id', 'item_id', 'time'])\r\n        click_test = pd.read_csv(offline + header + '_test_click-{}.csv'.format(i), header=None,\r\n                                 names=['user_id', 'item_id', 'time'])\r\n\r\n        all_click = click_train.append(click_test)\r\n        history_list.append(all_click)\r\n\r\n    # qtime_test = pd.read_csv(offline + 'offline_test_qtime-{}.csv'.format(phase), 
header=None,\r\n    #                          names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(offline + header+ '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    print(click_test['user_id'].nunique())\r\n\r\n    print('phase:', phase)\r\n    time_diff = max(history_list[now_phase]['time']) - min(history_list[0]['time'])\r\n    for i in range(phase + 1, now_phase + 1):\r\n        history_list[i]['time'] = history_list[i]['time'] - time_diff\r\n\r\n    whole_click = pd.DataFrame()\r\n    for i in range(now_phase + 1):\r\n        whole_click = whole_click.append(history_list[i])\r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n\r\n# In[7]:\r\n\r\n\r\ndef phase_predict(df, pred_col, top_fill, topk=50):\r\n    \"\"\"recom_df, 'sim', top50_click, \"click_valid\"\r\n    \"\"\"\r\n    top_fill = [int(t) for t in top_fill.split(',')]\r\n    top_fill = top_fill[:topk]\r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]\r\n    ids = list(df['user_id'].unique())\r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])\r\n    fill_df.sort_values('user_id', inplace=True)\r\n    fill_df['item_id'] = top_fill * len(ids)\r\n    fill_df[pred_col] = scores * len(ids)\r\n    df = df.append(fill_df)\r\n    df.sort_values(pred_col, ascending=False, inplace=True)\r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')\r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)\r\n    df.sort_values(\"rank\", inplace=True)\r\n    df = df[df[\"rank\"] <= topk]\r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',',\r\n                                                                                                   expand=True).reset_index()\r\n    return df\r\n\r\n\r\n# In[8]:\r\n\r\n\r\n# find most popular items\r\ntop50_click = whole_click['item_id'].value_counts().index[:500].values\r\ntop50_click = ','.join([str(i) for i in top50_click])\r\n\r\nrecom_df = pd.DataFrame(recom_item, columns=['user_id', 'item_id', 'sim'] + ['feature_' + str(x) for x in range(len(recom_item[0]) - 3)])\r\nresult = phase_predict(recom_df, 'sim', top50_click, 50)\r\nresult['user_id'] = result['user_id'].astype(int)\r\nresult.to_csv('Recall_0531.csv', index=False, header=None)        \r\n\r\n\r\n# In[9]:\r\n\r\n\r\nimport datetime\r\n\r\n\r\n# In[10]:\r\n\r\n\r\n# the higher scores, the better performance\r\ndef evaluate_each_phase(predictions, answers, rank_num):\r\n    list_item_degress = []\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        list_item_degress.append(item_degree)\r\n    list_item_degress.sort()\r\n    median_item_degree = list_item_degress[len(list_item_degress) // 2]\r\n\r\n    num_cases_full = 0.0\r\n    ndcg_50_full = 0.0\r\n    ndcg_50_half = 0.0\r\n    num_cases_half = 0.0\r\n    hitrate_50_full = 0.0\r\n    hitrate_50_half = 0.0\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        rank = 0\r\n        while rank < rank_num and predictions[user_id][rank] != item_id:\r\n            rank += 1\r\n        num_cases_full += 1.0\r\n        if rank < rank_num:\r\n            ndcg_50_full += 1.0 / np.log2(rank + 2.0)\r\n            hitrate_50_full 
+= 1.0\r\n        if item_degree <= median_item_degree:\r\n            num_cases_half += 1.0\r\n            if rank < rank_num:\r\n                ndcg_50_half += 1.0 / np.log2(rank + 2.0)\r\n                hitrate_50_half += 1.0\r\n    ndcg_50_full /= num_cases_full\r\n    hitrate_50_full /= num_cases_full\r\n    ndcg_50_half /= num_cases_half\r\n    hitrate_50_half /= num_cases_half\r\n    \r\n    print([ndcg_50_full, ndcg_50_half,\r\n                     hitrate_50_full, hitrate_50_half])\r\n    \r\n    return np.array([ndcg_50_full, ndcg_50_half,\r\n                     hitrate_50_full, hitrate_50_half], dtype=np.float32)\r\n\r\n# submit_fname is the path to the file submitted by the participants.\r\n# debias_track_answer.csv is the standard answer, which is not released.\r\ndef evaluate(stdout, submit_fname,\r\n             answer_fname='debias_track_answer.csv', rank_num=50, current_time=None):\r\n    schedule_in_unix_time = [\r\n        0,  # ........ 1970-01-01 08:00:00 (T=0)\r\n        1586534399,  # 2020-04-10 23:59:59 (T=1)\r\n        1587139199,  # 2020-04-17 23:59:59 (T=2)\r\n        1587743999,  # 2020-04-24 23:59:59 (T=3)\r\n        1588348799,  # 2020-05-01 23:59:59 (T=4)\r\n        1588953599,  # 2020-05-08 23:59:59 (T=5)\r\n        1589558399,  # 2020-05-15 23:59:59 (T=6)\r\n        1590163199,  # 2020-05-22 23:59:59 (T=7)\r\n        #1589558399,\r\n        1590767999,  # 2020-05-29 23:59:59 (T=8)\r\n        1591372799  # .2020-06-05 23:59:59 (T=9)\r\n    ]\r\n    assert len(schedule_in_unix_time) == 10\r\n    for i in range(1, len(schedule_in_unix_time) - 1):\r\n        # 604800 == one week\r\n        assert schedule_in_unix_time[i] + 604800 == schedule_in_unix_time[i + 1]\r\n\r\n    if current_time is None:\r\n        current_time = int(time.time())\r\n    print('current_time:', current_time)\r\n    print('date_time:', datetime.datetime.fromtimestamp(current_time))\r\n    current_phase = 0\r\n    while (current_phase < 9) and (\r\n            current_time > schedule_in_unix_time[current_phase + 1]):\r\n        current_phase += 1\r\n    print('current_phase:', current_phase)\r\n\r\n    try:\r\n        answers = [{} for _ in range(10)]\r\n        with open(answer_fname, 'r') as fin:\r\n            for line in fin:\r\n                line = [int(x) for x in line.split(',')]\r\n                phase_id, user_id, item_id, item_degree = line\r\n                assert user_id % 11 == phase_id\r\n                # exactly one test case for each user_id\r\n                answers[phase_id][user_id] = (item_id, item_degree)\r\n    except Exception as _:\r\n        print('server-side error: answer file incorrect\\n')\r\n        return -1\r\n\r\n    try:\r\n        predictions = {}\r\n        with open(submit_fname, 'r') as fin:\r\n            for line in fin:\r\n                line = line.strip()\r\n                if line == '':\r\n                    continue\r\n                line = line.split(',')\r\n                user_id = int(line[0])\r\n                if user_id in predictions:\r\n                    print('submitted duplicate user_ids\\n')\r\n                    return -1\r\n                item_ids = [int(i) for i in line[1:]]\r\n                if len(item_ids) != rank_num:\r\n                    print('each row needs to have 50 items\\n')\r\n                    return -1\r\n                if len(set(item_ids)) != rank_num:\r\n                    print('each row needs to have 50 DISTINCT items\\n')\r\n                    return -1\r\n                
predictions[user_id] = item_ids\r\n    except Exception as _:\r\n        print('submission not in correct format \\n')\r\n        return -1\r\n\r\n    scores = np.zeros(4, dtype=np.float32)\r\n\r\n    # The final winning teams will be decided based on phase T=7,8,9 only.\r\n    # We thus fix the scores to 1.0 for phase 0,1,2,...,6 at the final stage.\r\n    #if current_phase >= 7:  # if at the final stage, i.e., T=7,8,9\r\n    #    scores += 7.0  # then fix the scores to 1.0 for phase 0,1,2,...,6\r\n    #phase_beg = (7 if (current_phase >= 7) else 0)\r\n    phase_beg = 0\r\n    phase_end = current_phase + 1\r\n    for phase_id in range(phase_beg, phase_end):\r\n        for user_id in answers[phase_id]:\r\n            if user_id not in predictions:\r\n                print('user_id %d of phase %d not in submission' % (user_id, phase_id))\r\n                return -1\r\n        try:\r\n            # We sum the scores from all the phases, instead of averaging them.\r\n            scores += evaluate_each_phase(predictions, answers[phase_id], rank_num)\r\n        except Exception as _:\r\n            print('error occurred during evaluation')\r\n            return -1\r\n\r\n    return [float(scores[0]),float(scores[0]),float(scores[1]),float(scores[2]),float(scores[3])]\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nrecom_df[['user_id','item_id']].to_csv(output_path +'user_item_index.csv', index=False)\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nrecom_df.to_csv(output_path + 'recall_0531.csv', index=False)\r\n\r\n\r\n# In[13]:\r\n\r\n\r\noutput_path + 'recall_0531.csv'\r\n\r\n\r\n# In[14]:\r\n\r\n\r\nfrom sys import stdout\r\nprint(evaluate(stdout,'Recall_0531.csv',\r\n             answer_fname='./user_data/model_1/model_1_debias_track_answer.csv', rank_num=50))\r\n\r\n\r\n#         current_time: 1590673576\r\n#         date_time: 2020-05-28 21:46:16\r\n#         current_phase: 6\r\n#         [0.07291776530294389, 0.04257302451332752, 0.16795865633074936, 0.10839160839160839]\r\n#         [0.07522970326234413, 0.047286878349803496, 0.1778875849289685, 0.12471655328798185]\r\n#         [0.08768431272730617, 0.05220432316374826, 0.2040429564118762, 0.13366960907944514]\r\n#         [0.08137267931092253, 0.04650284552993235, 0.18584070796460178, 0.10888610763454318]\r\n#         [0.086082070609559, 0.06099578564127202, 0.20061919504643963, 0.14116251482799524]\r\n#         [0.08282023366562385, 0.05404211657982558, 0.18724400234055003, 0.1210710128055879]\r\n#         [0.08625658967639374, 0.05129722585118765, 0.19410745233968804, 0.12543153049482164]\r\n#         [0.5723633170127869, 0.5723633170127869, 0.3549021780490875, 1.3177005052566528, 0.8633289337158203]\r\n\r\n#         current_time: 1590730998\r\n#         date_time: 2020-05-29 13:43:18\r\n#         current_phase: 6\r\n#         [0.07336197799145278, 0.04333070177814886, 0.17118863049095606, 0.11188811188811189]\r\n#         [0.07551020515190006, 0.047111743016730066, 0.17974058060531192, 0.12698412698412698]\r\n#         [0.0877367887009624, 0.052890596296164785, 0.2040429564118762, 0.13619167717528374]\r\n#         user_id 3 of phase 3 not in submission\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/3_Recall/01_Recall-Wu-offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[7]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\nimport time\r\nimport gc\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndef get_predict(df, pred_col, top_fill, ranknum):  \r\n    top_fill = [int(t) for t in top_fill.split(',')]  \r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]  \r\n    ids = list(df['user_id'].unique())  \r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])  \r\n    fill_df.sort_values('user_id', inplace=True)  \r\n    fill_df['item_id'] = top_fill * len(ids)  \r\n    fill_df[pred_col] = scores * len(ids)  \r\n    df = df.append(fill_df)  \r\n    df.sort_values(pred_col, ascending=False, inplace=True)  \r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')  \r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)  \r\n    df = df[df['rank'] <= ranknum]  \r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',', expand=True).reset_index()  \r\n    return df  \r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndef recommend(sim_item_corr, user_item_dict, user_id, times, item_dict, item_time_dict, top_k, item_num):\r\n    '''\r\n    input:item_sim_list, user_item, uid, 500, 50\r\n    # 用户历史序列中的所有商品均有关联商品,整合这些关联商品,进行相似性排序\r\n    '''\r\n    rank = {}\r\n    interacted_items = user_item_dict[user_id]\r\n    interacted_items = interacted_items[::-1]\r\n    times = times[::-1]\r\n    t0 = times[0]\r\n    for loc, i in enumerate(interacted_items):\r\n        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1]['sim'], reverse=True)[0:top_k]:\r\n            if j not in interacted_items:\r\n                rank.setdefault(j, {'sim': 0,\r\n                                        'item_cf': 0,\r\n                                        'item_cf_weighted': 0,\r\n                                        'time_diff': np.inf,\r\n                                        'loc_diff': np.inf,\r\n                                        # Some feature generated by recall\r\n                                        'time_diff_recall': np.inf,\r\n                                        'time_diff_recall_1': np.inf,\r\n                                        'loc_diff_recall': np.inf,\r\n                                        # Nodesim and Deepsim\r\n                                          'node_sim_max': -1e8,\r\n                                          'node_sim_sum':0,\r\n                                          'deep_sim_max': -1e8,\r\n                                          'deep_sim_sum':0,\r\n                                          })\r\n                t1 = times[loc]\r\n                t2 = item_time_dict[j][0]\r\n                delta_t1 = abs(t0 - t1) * 650000\r\n                delta_t2 = abs(t0 - t2) * 650000\r\n                alpha = max(0.2, 1 / (1 + item_dict[j]))\r\n                beta = max(0.5, (0.9 ** loc))\r\n                theta = max(0.5, 1 / (1 + delta_t1))\r\n                gamma = max(0.5, 1 / (1 + delta_t2))\r\n\r\n                rank[j]['sim'] += wij['sim'] * (alpha ** 2) * (beta) * (theta ** 2) * gamma\r\n                rank[j]['item_cf'] += wij['item_cf']\r\n                rank[j]['item_cf_weighted'] += wij['item_cf_weighted']\r\n                \r\n                if wij['time_diff'] < rank[j]['time_diff']:\r\n  
                  rank[j]['time_diff'] = wij['time_diff']\r\n                if wij['loc_diff'] < rank[j]['loc_diff']:\r\n                    rank[j]['loc_diff'] = wij['loc_diff']\r\n                if delta_t1 < rank[j]['time_diff_recall']:\r\n                    rank[j]['time_diff_recall'] = delta_t1\r\n                if delta_t2 < rank[j]['time_diff_recall_1']:\r\n                    rank[j]['time_diff_recall_1'] = delta_t2\r\n                if loc < rank[j]['loc_diff_recall']:\r\n                    rank[j]['loc_diff_recall'] = loc\r\n                    \r\n                if wij['node_sim_max'] > rank[j]['node_sim_max']:\r\n                    rank[j]['node_sim_max'] = wij['node_sim_max']\r\n                rank[j]['node_sim_sum'] += wij['node_sim_sum'] / wij['item_cf']\r\n                \r\n                if wij['deep_sim_max'] > rank[j]['deep_sim_max']:\r\n                    rank[j]['deep_sim_max'] = wij['deep_sim_max']\r\n                rank[j]['deep_sim_sum'] += wij['deep_sim_sum'] / wij['item_cf']\r\n                \r\n    return sorted(rank.items(), key=lambda d: d[1]['sim'], reverse=True)[:item_num]\r\n\r\n\r\n# In[10]:\r\n\r\n\r\nnow_phase = 9\r\n\r\noffline = \"./user_data/offline/\"\r\nheader = 'offline'\r\ninput_path = './user_data/offline/new_similarity/'\r\noutput_path = './user_data/offline/new_recall/'\r\n\r\n\r\n# In[11]:\r\n\r\n\r\n# recom_item = []  \r\n\r\n# for c in range(now_phase + 1):  \r\n#     a = time.time()\r\n\r\n#     print('phase:', c)  \r\n    \r\n#     with open(input_path+'itemCF_new'+str(c)+'.pkl','rb') as f:\r\n#         item_sim_list = pickle.load(f)    \r\n\r\n#     with open(input_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n#         user_item = pickle.load(f)                  \r\n              \r\n#     with open(input_path+'item2cnt_new'+str(c)+'.pkl','rb') as f:\r\n#         item_dic = pickle.load(f) \r\n\r\n#     with open(input_path+'userTime'+str(c)+'.pkl','rb') as f:\r\n#         user_time_dict = pickle.load(f)         \r\n        \r\n#     with open(input_path+'itemTime'+str(c)+'.pkl','rb') as f:\r\n#         item_time_dict = pickle.load(f)          \r\n        \r\n#     qtime_test = pd.read_csv(offline + header + '_test_qtime-{}.csv'.format(c), header=None,\r\n#                               names=['user_id', 'item_id', 'time'])\r\n    \r\n    \r\n#     for user in tqdm(qtime_test['user_id'].unique()):\r\n#         if user in user_time_dict:\r\n#             times = user_time_dict[user]\r\n#             rank_item = recommend(item_sim_list, user_item, user, times, item_dic, item_time_dict, 500, 500)\r\n#             for j in rank_item:\r\n#                 recom_item.append([user, int(j[0])] + list(j[1].values()))      \r\n#     gc.collect()\r\nfile = open(input_path + 'recom_item.pkl', 'rb')\r\nrecom_item = pickle.load(file)\r\nfile.close()\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nfor phase in range(now_phase + 1):\r\n    a = time.time()\r\n    history_list = []\r\n    for i in range(now_phase + 1):\r\n        click_train = pd.read_csv(offline + header + '_train_click-{}.csv'.format(i), header=None,\r\n                                  names=['user_id', 'item_id', 'time'])\r\n        click_test = pd.read_csv(offline + header + '_test_click-{}.csv'.format(i), header=None,\r\n                                 names=['user_id', 'item_id', 'time'])\r\n\r\n        all_click = click_train.append(click_test)\r\n        history_list.append(all_click)\r\n\r\n    # qtime_test = pd.read_csv(offline + 'offline_test_qtime-{}.csv'.format(phase), 
header=None,\r\n    #                          names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(offline + header+ '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    print(click_test['user_id'].nunique())\r\n\r\n    print('phase:', phase)\r\n    time_diff = max(history_list[now_phase]['time']) - min(history_list[0]['time'])\r\n    for i in range(phase + 1, now_phase + 1):\r\n        history_list[i]['time'] = history_list[i]['time'] - time_diff\r\n\r\n    whole_click = pd.DataFrame()\r\n    for i in range(now_phase + 1):\r\n        whole_click = whole_click.append(history_list[i])\r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndef phase_predict(df, pred_col, top_fill, topk=50):\r\n    \"\"\"recom_df, 'sim', top50_click, \"click_valid\"\r\n    \"\"\"\r\n    top_fill = [int(t) for t in top_fill.split(',')]\r\n    top_fill = top_fill[:topk]\r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]\r\n    ids = list(df['user_id'].unique())\r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])\r\n    fill_df.sort_values('user_id', inplace=True)\r\n    fill_df['item_id'] = top_fill * len(ids)\r\n    fill_df[pred_col] = scores * len(ids)\r\n    df = df.append(fill_df)\r\n    df.sort_values(pred_col, ascending=False, inplace=True)\r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')\r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)\r\n    df.sort_values(\"rank\", inplace=True)\r\n    df = df[df[\"rank\"] <= topk]\r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',',\r\n                                                                                                   expand=True).reset_index()\r\n    return df\r\n\r\n\r\n# In[14]:\r\n\r\n\r\n# find most popular items\r\ntop50_click = whole_click['item_id'].value_counts().index[:500].values\r\ntop50_click = ','.join([str(i) for i in top50_click])\r\n\r\nrecom_df = pd.DataFrame(recom_item, columns=['user_id', 'item_id', 'sim'] + ['feature_' + str(x) for x in range(len(recom_item[0]) - 3)])\r\nresult = phase_predict(recom_df, 'sim', top50_click, 50)\r\nresult['user_id'] = result['user_id'].astype(int)\r\nresult.to_csv('Recall_0531.csv', index=False, header=None)        \r\n\r\n\r\n# In[15]:\r\n\r\n\r\nimport datetime\r\n\r\n\r\n# In[16]:\r\n\r\n\r\n# the higher scores, the better performance\r\ndef evaluate_each_phase(predictions, answers, rank_num):\r\n    list_item_degress = []\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        list_item_degress.append(item_degree)\r\n    list_item_degress.sort()\r\n    median_item_degree = list_item_degress[len(list_item_degress) // 2]\r\n\r\n    num_cases_full = 0.0\r\n    ndcg_50_full = 0.0\r\n    ndcg_50_half = 0.0\r\n    num_cases_half = 0.0\r\n    hitrate_50_full = 0.0\r\n    hitrate_50_half = 0.0\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        rank = 0\r\n        while rank < rank_num and predictions[user_id][rank] != item_id:\r\n            rank += 1\r\n        num_cases_full += 1.0\r\n        if rank < rank_num:\r\n            ndcg_50_full += 1.0 / np.log2(rank + 2.0)\r\n            
hitrate_50_full += 1.0\r\n        if item_degree <= median_item_degree:\r\n            num_cases_half += 1.0\r\n            if rank < rank_num:\r\n                ndcg_50_half += 1.0 / np.log2(rank + 2.0)\r\n                hitrate_50_half += 1.0\r\n    ndcg_50_full /= num_cases_full\r\n    hitrate_50_full /= num_cases_full\r\n    ndcg_50_half /= num_cases_half\r\n    hitrate_50_half /= num_cases_half\r\n    \r\n    print([ndcg_50_full, ndcg_50_half,\r\n                     hitrate_50_full, hitrate_50_half])\r\n    \r\n    return np.array([ndcg_50_full, ndcg_50_half,\r\n                     hitrate_50_full, hitrate_50_half], dtype=np.float32)\r\n\r\n# submit_fname is the path to the file submitted by the participants.\r\n# debias_track_answer.csv is the standard answer, which is not released.\r\ndef evaluate(stdout, submit_fname,\r\n             answer_fname='debias_track_answer.csv', rank_num=50, current_time=None):\r\n    schedule_in_unix_time = [\r\n        0,  # ........ 1970-01-01 08:00:00 (T=0)\r\n        1586534399,  # 2020-04-10 23:59:59 (T=1)\r\n        1587139199,  # 2020-04-17 23:59:59 (T=2)\r\n        1587743999,  # 2020-04-24 23:59:59 (T=3)\r\n        1588348799,  # 2020-05-01 23:59:59 (T=4)\r\n        1588953599,  # 2020-05-08 23:59:59 (T=5)\r\n        1589558399,  # 2020-05-15 23:59:59 (T=6)\r\n        1590163199,  # 2020-05-22 23:59:59 (T=7)\r\n        #1589558399,\r\n        1590767999,  # 2020-05-29 23:59:59 (T=8)\r\n        1591372799  # .2020-06-05 23:59:59 (T=9)\r\n    ]\r\n    assert len(schedule_in_unix_time) == 10\r\n    for i in range(1, len(schedule_in_unix_time) - 1):\r\n        # 604800 == one week\r\n        assert schedule_in_unix_time[i] + 604800 == schedule_in_unix_time[i + 1]\r\n\r\n    if current_time is None:\r\n        current_time = int(time.time())\r\n    print('current_time:', current_time)\r\n    print('date_time:', datetime.datetime.fromtimestamp(current_time))\r\n    current_phase = 0\r\n    while (current_phase < 9) and (\r\n            current_time > schedule_in_unix_time[current_phase + 1]):\r\n        current_phase += 1\r\n    print('current_phase:', current_phase)\r\n\r\n    try:\r\n        answers = [{} for _ in range(10)]\r\n        with open(answer_fname, 'r') as fin:\r\n            for line in fin:\r\n                line = [int(x) for x in line.split(',')]\r\n                phase_id, user_id, item_id, item_degree = line\r\n                assert user_id % 11 == phase_id\r\n                # exactly one test case for each user_id\r\n                answers[phase_id][user_id] = (item_id, item_degree)\r\n    except Exception as _:\r\n        print('server-side error: answer file incorrect\\n')\r\n        return -1\r\n\r\n    try:\r\n        predictions = {}\r\n        with open(submit_fname, 'r') as fin:\r\n            for line in fin:\r\n                line = line.strip()\r\n                if line == '':\r\n                    continue\r\n                line = line.split(',')\r\n                user_id = int(line[0])\r\n                if user_id in predictions:\r\n                    print('submitted duplicate user_ids\\n')\r\n                    return -1\r\n                item_ids = [int(i) for i in line[1:]]\r\n                if len(item_ids) != rank_num:\r\n                    print('each row needs to have 50 items\\n')\r\n                    return -1\r\n                if len(set(item_ids)) != rank_num:\r\n                    print('each row needs to have 50 DISTINCT items\\n')\r\n                    return -1\r\n                
predictions[user_id] = item_ids\r\n    except Exception as _:\r\n        print('submission not in correct format \\n')\r\n        return -1\r\n\r\n    scores = np.zeros(4, dtype=np.float32)\r\n\r\n    # The final winning teams will be decided based on phase T=7,8,9 only.\r\n    # We thus fix the scores to 1.0 for phase 0,1,2,...,6 at the final stage.\r\n    #if current_phase >= 7:  # if at the final stage, i.e., T=7,8,9\r\n    #    scores += 7.0  # then fix the scores to 1.0 for phase 0,1,2,...,6\r\n    #phase_beg = (7 if (current_phase >= 7) else 0)\r\n    phase_beg = 0\r\n    phase_end = current_phase + 1\r\n    for phase_id in range(phase_beg, phase_end):\r\n        for user_id in answers[phase_id]:\r\n            if user_id not in predictions:\r\n                print('user_id %d of phase %d not in submission' % (user_id, phase_id))\r\n                return -1\r\n        try:\r\n            # We sum the scores from all the phases, instead of averaging them.\r\n            scores += evaluate_each_phase(predictions, answers[phase_id], rank_num)\r\n        except Exception as _:\r\n            print('error occurred during evaluation')\r\n            return -1\r\n\r\n    return [float(scores[0]),float(scores[0]),float(scores[1]),float(scores[2]),float(scores[3])]\r\n\r\n\r\n# In[17]:\r\n\r\n\r\nrecom_df[['user_id','item_id']].to_csv(output_path +'user_item_index.csv', index=False)\r\n\r\n\r\n# In[18]:\r\n\r\n\r\nrecom_df.to_csv(output_path + 'recall_0531.csv', index=False)\r\n\r\n\r\n# In[19]:\r\n\r\n\r\noutput_path + 'recall_0531.csv'\r\n\r\n\r\n# In[20]:\r\n\r\n\r\nfrom sys import stdout\r\nprint(evaluate(stdout,'Recall_0531.csv',\r\n             answer_fname=offline + header + '_debias_track_answer.csv', rank_num=50))\r\n\r\n\r\n#         current_time: 1590673576\r\n#         date_time: 2020-05-28 21:46:16\r\n#         current_phase: 6\r\n#         [0.07291776530294389, 0.04257302451332752, 0.16795865633074936, 0.10839160839160839]\r\n#         [0.07522970326234413, 0.047286878349803496, 0.1778875849289685, 0.12471655328798185]\r\n#         [0.08768431272730617, 0.05220432316374826, 0.2040429564118762, 0.13366960907944514]\r\n#         [0.08137267931092253, 0.04650284552993235, 0.18584070796460178, 0.10888610763454318]\r\n#         [0.086082070609559, 0.06099578564127202, 0.20061919504643963, 0.14116251482799524]\r\n#         [0.08282023366562385, 0.05404211657982558, 0.18724400234055003, 0.1210710128055879]\r\n#         [0.08625658967639374, 0.05129722585118765, 0.19410745233968804, 0.12543153049482164]\r\n#         [0.5723633170127869, 0.5723633170127869, 0.3549021780490875, 1.3177005052566528, 0.8633289337158203]\r\n\r\n#         current_time: 1590730998\r\n#         date_time: 2020-05-29 13:43:18\r\n#         current_phase: 6\r\n#         [0.07336197799145278, 0.04333070177814886, 0.17118863049095606, 0.11188811188811189]\r\n#         [0.07551020515190006, 0.047111743016730066, 0.17974058060531192, 0.12698412698412698]\r\n#         [0.0877367887009624, 0.052890596296164785, 0.2040429564118762, 0.13619167717528374]\r\n#         user_id 3 of phase 3 not in submission\r\n"
  },
  {
    "path": "code/3_Recall/01_Recall-Wu-online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\nimport time\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ndef get_predict(df, pred_col, top_fill, ranknum):  \r\n    top_fill = [int(t) for t in top_fill.split(',')]  \r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]  \r\n    ids = list(df['user_id'].unique())  \r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])  \r\n    fill_df.sort_values('user_id', inplace=True)  \r\n    fill_df['item_id'] = top_fill * len(ids)  \r\n    fill_df[pred_col] = scores * len(ids)  \r\n    df = df.append(fill_df)  \r\n    df.sort_values(pred_col, ascending=False, inplace=True)  \r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')  \r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)  \r\n    df = df[df['rank'] <= ranknum]  \r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',', expand=True).reset_index()  \r\n    return df  \r\n\r\n\r\n# In[3]:\r\n\r\n\r\ndef recommend(sim_item_corr, user_item_dict, user_id, times, item_dict, item_time_dict, top_k, item_num):\r\n    '''\r\n    input:item_sim_list, user_item, uid, 500, 50\r\n    # 用户历史序列中的所有商品均有关联商品,整合这些关联商品,进行相似性排序\r\n    '''\r\n    rank = {}\r\n    interacted_items = user_item_dict[user_id]\r\n    interacted_items = interacted_items[::-1]\r\n    times = times[::-1]\r\n    t0 = times[0]\r\n    for loc, i in enumerate(interacted_items):\r\n        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1]['sim'], reverse=True)[0:top_k]:\r\n            if j not in interacted_items:\r\n                rank.setdefault(j, {'sim': 0,\r\n                                        'item_cf': 0,\r\n                                        'item_cf_weighted': 0,\r\n                                        'time_diff': np.inf,\r\n                                        'loc_diff': np.inf,\r\n                                        # Some feature generated by recall\r\n                                        'time_diff_recall': np.inf,\r\n                                        'time_diff_recall_1': np.inf,\r\n                                        'loc_diff_recall': np.inf,\r\n                                        # Nodesim and Deepsim\r\n                                          'node_sim_max': -1e8,\r\n                                          'node_sim_sum':0,\r\n                                          'deep_sim_max': -1e8,\r\n                                          'deep_sim_sum':0,\r\n                                          })\r\n                t1 = times[loc]\r\n                t2 = item_time_dict[j][0]\r\n                delta_t1 = abs(t0 - t1) * 650000\r\n                delta_t2 = abs(t0 - t2) * 650000\r\n                alpha = max(0.2, 1 / (1 + item_dict[j]))\r\n                beta = max(0.5, (0.9 ** loc))\r\n                theta = max(0.5, 1 / (1 + delta_t1))\r\n                gamma = max(0.5, 1 / (1 + delta_t2))\r\n\r\n                rank[j]['sim'] += wij['sim'] * (alpha ** 2) * (beta) * (theta ** 2) * gamma\r\n                rank[j]['item_cf'] += wij['item_cf']\r\n                rank[j]['item_cf_weighted'] += wij['item_cf_weighted']\r\n                \r\n                if wij['time_diff'] < rank[j]['time_diff']:\r\n  
                  rank[j]['time_diff'] = wij['time_diff']\r\n                if wij['loc_diff'] < rank[j]['loc_diff']:\r\n                    rank[j]['loc_diff'] = wij['loc_diff']\r\n                if delta_t1 < rank[j]['time_diff_recall']:\r\n                    rank[j]['time_diff_recall'] = delta_t1\r\n                if delta_t2 < rank[j]['time_diff_recall_1']:\r\n                    rank[j]['time_diff_recall_1'] = delta_t2\r\n                if loc < rank[j]['loc_diff_recall']:\r\n                    rank[j]['loc_diff_recall'] = loc\r\n                    \r\n                if wij['node_sim_max'] > rank[j]['node_sim_max']:\r\n                    rank[j]['node_sim_max'] = wij['node_sim_max']\r\n                rank[j]['node_sim_sum'] += wij['node_sim_sum'] / wij['item_cf']\r\n                \r\n                if wij['deep_sim_max'] > rank[j]['deep_sim_max']:\r\n                    rank[j]['deep_sim_max'] = wij['deep_sim_max']\r\n                rank[j]['deep_sim_sum'] += wij['deep_sim_sum'] / wij['item_cf']\r\n                \r\n    return sorted(rank.items(), key=lambda d: d[1]['sim'], reverse=True)[:item_num]\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nnow_phase = 9\r\n\r\noffline = \"./user_data/dataset/\"\r\nheader = 'underexpose'\r\ninput_path = './user_data/dataset/new_similarity/'\r\noutput_path = './user_data/dataset/new_recall/'\r\n\r\n\r\n# In[5]:\r\n\r\n\r\n# recom_item = []  \r\n\r\n# for c in range(now_phase + 1):  \r\n#     a = time.time()\r\n\r\n#     print('phase:', c)  \r\n    \r\n#     with open(input_path+'itemCF_new'+str(c)+'.pkl','rb') as f:\r\n#         item_sim_list = pickle.load(f)    \r\n\r\n#     with open(input_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n#         user_item = pickle.load(f)                  \r\n              \r\n#     with open(input_path+'item2cnt_new'+str(c)+'.pkl','rb') as f:\r\n#         item_dic = pickle.load(f) \r\n\r\n#     with open(input_path+'userTime'+str(c)+'.pkl','rb') as f:\r\n#         user_time_dict = pickle.load(f)         \r\n        \r\n#     with open(input_path+'itemTime'+str(c)+'.pkl','rb') as f:\r\n#         item_time_dict = pickle.load(f)          \r\n        \r\n#     qtime_test = pd.read_csv(offline + header + '_test_qtime-{}.csv'.format(c), header=None,\r\n#                               names=['user_id', 'item_id', 'time'])\r\n    \r\n    \r\n#     for user in tqdm(qtime_test['user_id'].unique()):\r\n#         if user in user_time_dict:\r\n#             times = user_time_dict[user]\r\n#             rank_item = recommend(item_sim_list, user_item, user, times, item_dic, item_time_dict, 500, 500)\r\n#             for j in rank_item:\r\n#                 recom_item.append([user, int(j[0])] + list(j[1].values()))      \r\n#     gc.collect()\r\nfile = open(input_path + 'recom_item.pkl', 'rb')\r\nrecom_item = pickle.load(file)\r\nfile.close()\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nfor phase in range(now_phase + 1):\r\n    a = time.time()\r\n    history_list = []\r\n    for i in range(now_phase + 1):\r\n        click_train = pd.read_csv(offline + header + '_train_click-{}.csv'.format(i), header=None,\r\n                                  names=['user_id', 'item_id', 'time'])\r\n        click_test = pd.read_csv(offline + header + '_test_click-{}.csv'.format(i), header=None,\r\n                                 names=['user_id', 'item_id', 'time'])\r\n\r\n        all_click = click_train.append(click_test)\r\n        history_list.append(all_click)\r\n\r\n    # qtime_test = pd.read_csv(offline + 'offline_test_qtime-{}.csv'.format(phase), 
header=None,\r\n    #                          names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(offline + header+ '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    print(click_test['user_id'].nunique())\r\n\r\n    print('phase:', phase)\r\n    time_diff = max(history_list[now_phase]['time']) - min(history_list[0]['time'])\r\n    for i in range(phase + 1, now_phase + 1):\r\n        history_list[i]['time'] = history_list[i]['time'] - time_diff\r\n\r\n    whole_click = pd.DataFrame()\r\n    for i in range(now_phase + 1):\r\n        whole_click = whole_click.append(history_list[i])\r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n\r\n# In[7]:\r\n\r\n\r\ndef phase_predict(df, pred_col, top_fill, topk=50):\r\n    \"\"\"recom_df, 'sim', top50_click, \"click_valid\"\r\n    \"\"\"\r\n    top_fill = [int(t) for t in top_fill.split(',')]\r\n    top_fill = top_fill[:topk]\r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]\r\n    ids = list(df['user_id'].unique())\r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])\r\n    fill_df.sort_values('user_id', inplace=True)\r\n    fill_df['item_id'] = top_fill * len(ids)\r\n    fill_df[pred_col] = scores * len(ids)\r\n    df = df.append(fill_df)\r\n    df.sort_values(pred_col, ascending=False, inplace=True)\r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')\r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)\r\n    df.sort_values(\"rank\", inplace=True)\r\n    df = df[df[\"rank\"] <= topk]\r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',',\r\n                                                                                                   expand=True).reset_index()\r\n    return df\r\n\r\n\r\n# In[8]:\r\n\r\n\r\n# find most popular items\r\ntop50_click = whole_click['item_id'].value_counts().index[:500].values\r\ntop50_click = ','.join([str(i) for i in top50_click])\r\n\r\nrecom_df = pd.DataFrame(recom_item, columns=['user_id', 'item_id', 'sim'] + ['feature_' + str(x) for x in range(len(recom_item[0]) - 3)])\r\nresult = phase_predict(recom_df, 'sim', top50_click, 50)\r\nresult['user_id'] = result['user_id'].astype(int)\r\nresult.to_csv('Recall_0531.csv', index=False, header=None)        \r\n\r\n\r\n# In[9]:\r\n\r\n\r\nimport datetime\r\n\r\n\r\n# In[10]:\r\n\r\n\r\n# the higher scores, the better performance\r\ndef evaluate_each_phase(predictions, answers, rank_num):\r\n    list_item_degress = []\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        list_item_degress.append(item_degree)\r\n    list_item_degress.sort()\r\n    median_item_degree = list_item_degress[len(list_item_degress) // 2]\r\n\r\n    num_cases_full = 0.0\r\n    ndcg_50_full = 0.0\r\n    ndcg_50_half = 0.0\r\n    num_cases_half = 0.0\r\n    hitrate_50_full = 0.0\r\n    hitrate_50_half = 0.0\r\n    for user_id in answers:\r\n        item_id, item_degree = answers[user_id]\r\n        rank = 0\r\n        while rank < rank_num and predictions[user_id][rank] != item_id:\r\n            rank += 1\r\n        num_cases_full += 1.0\r\n        if rank < rank_num:\r\n            ndcg_50_full += 1.0 / np.log2(rank + 2.0)\r\n            hitrate_50_full 
+= 1.0\r\n        if item_degree <= median_item_degree:\r\n            num_cases_half += 1.0\r\n            if rank < rank_num:\r\n                ndcg_50_half += 1.0 / np.log2(rank + 2.0)\r\n                hitrate_50_half += 1.0\r\n    ndcg_50_full /= num_cases_full\r\n    hitrate_50_full /= num_cases_full\r\n    ndcg_50_half /= num_cases_half\r\n    hitrate_50_half /= num_cases_half\r\n    \r\n    print([ndcg_50_full, ndcg_50_half,\r\n                     hitrate_50_full, hitrate_50_half])\r\n    \r\n    return np.array([ndcg_50_full, ndcg_50_half,\r\n                     hitrate_50_full, hitrate_50_half], dtype=np.float32)\r\n\r\n# submit_fname is the path to the file submitted by the participants.\r\n# debias_track_answer.csv is the standard answer, which is not released.\r\ndef evaluate(stdout, submit_fname,\r\n             answer_fname='debias_track_answer.csv', rank_num=50, current_time=None):\r\n    schedule_in_unix_time = [\r\n        0,  # ........ 1970-01-01 08:00:00 (T=0)\r\n        1586534399,  # 2020-04-10 23:59:59 (T=1)\r\n        1587139199,  # 2020-04-17 23:59:59 (T=2)\r\n        1587743999,  # 2020-04-24 23:59:59 (T=3)\r\n        1588348799,  # 2020-05-01 23:59:59 (T=4)\r\n        1588953599,  # 2020-05-08 23:59:59 (T=5)\r\n        1589558399,  # 2020-05-15 23:59:59 (T=6)\r\n        1590163199,  # 2020-05-22 23:59:59 (T=7)\r\n        #1589558399,\r\n        1590767999,  # 2020-05-29 23:59:59 (T=8)\r\n        1591372799  # .2020-06-05 23:59:59 (T=9)\r\n    ]\r\n    assert len(schedule_in_unix_time) == 10\r\n    for i in range(1, len(schedule_in_unix_time) - 1):\r\n        # 604800 == one week\r\n        assert schedule_in_unix_time[i] + 604800 == schedule_in_unix_time[i + 1]\r\n\r\n    if current_time is None:\r\n        current_time = int(time.time())\r\n    print('current_time:', current_time)\r\n    print('date_time:', datetime.datetime.fromtimestamp(current_time))\r\n    current_phase = 0\r\n    while (current_phase < 9) and (\r\n            current_time > schedule_in_unix_time[current_phase + 1]):\r\n        current_phase += 1\r\n    print('current_phase:', current_phase)\r\n\r\n    try:\r\n        answers = [{} for _ in range(10)]\r\n        with open(answer_fname, 'r') as fin:\r\n            for line in fin:\r\n                line = [int(x) for x in line.split(',')]\r\n                phase_id, user_id, item_id, item_degree = line\r\n                assert user_id % 11 == phase_id\r\n                # exactly one test case for each user_id\r\n                answers[phase_id][user_id] = (item_id, item_degree)\r\n    except Exception as _:\r\n        print('server-side error: answer file incorrect\\n')\r\n        return -1\r\n\r\n    try:\r\n        predictions = {}\r\n        with open(submit_fname, 'r') as fin:\r\n            for line in fin:\r\n                line = line.strip()\r\n                if line == '':\r\n                    continue\r\n                line = line.split(',')\r\n                user_id = int(line[0])\r\n                if user_id in predictions:\r\n                    print('submitted duplicate user_ids\\n')\r\n                    return -1\r\n                item_ids = [int(i) for i in line[1:]]\r\n                if len(item_ids) != rank_num:\r\n                    print('each row needs to have 50 items\\n')\r\n                    return -1\r\n                if len(set(item_ids)) != rank_num:\r\n                    print('each row needs to have 50 DISTINCT items\\n')\r\n                    return -1\r\n                
predictions[user_id] = item_ids\r\n    except Exception as _:\r\n        print('submission not in correct format \\n')\r\n        return -1\r\n\r\n    scores = np.zeros(4, dtype=np.float32)\r\n\r\n    # The final winning teams will be decided based on phase T=7,8,9 only.\r\n    # We thus fix the scores to 1.0 for phase 0,1,2,...,6 at the final stage.\r\n    #if current_phase >= 7:  # if at the final stage, i.e., T=7,8,9\r\n    #    scores += 7.0  # then fix the scores to 1.0 for phase 0,1,2,...,6\r\n    #phase_beg = (7 if (current_phase >= 7) else 0)\r\n    phase_beg = 0\r\n    phase_end = current_phase + 1\r\n    for phase_id in range(phase_beg, phase_end):\r\n        for user_id in answers[phase_id]:\r\n            if user_id not in predictions:\r\n                print('user_id %d of phase %d not in submission' % (user_id, phase_id))\r\n                return -1\r\n        try:\r\n            # We sum the scores from all the phases, instead of averaging them.\r\n            scores += evaluate_each_phase(predictions, answers[phase_id], rank_num)\r\n        except Exception as _:\r\n            print('error occurred during evaluation')\r\n            return -1\r\n\r\n    return [float(scores[0]),float(scores[0]),float(scores[1]),float(scores[2]),float(scores[3])]\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nrecom_df[['user_id','item_id']].to_csv(output_path + 'user_item_index.csv', index=False)\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nrecom_df.to_csv(output_path + 'recall_0531.csv', index=False)\r\n\r\n\r\n# In[13]:\r\n\r\n\r\noutput_path + 'recall_0531.csv'\r\n\r\n\r\n# In[14]:\r\n\r\n\r\nfrom sys import stdout\r\nprint(evaluate(stdout,'Recall_0531.csv',\r\n             answer_fname=offline + header + '_debias_track_answer.csv', rank_num=50))\r\n\r\n\r\n#         current_time: 1590673576\r\n#         date_time: 2020-05-28 21:46:16\r\n#         current_phase: 6\r\n#         [0.07291776530294389, 0.04257302451332752, 0.16795865633074936, 0.10839160839160839]\r\n#         [0.07522970326234413, 0.047286878349803496, 0.1778875849289685, 0.12471655328798185]\r\n#         [0.08768431272730617, 0.05220432316374826, 0.2040429564118762, 0.13366960907944514]\r\n#         [0.08137267931092253, 0.04650284552993235, 0.18584070796460178, 0.10888610763454318]\r\n#         [0.086082070609559, 0.06099578564127202, 0.20061919504643963, 0.14116251482799524]\r\n#         [0.08282023366562385, 0.05404211657982558, 0.18724400234055003, 0.1210710128055879]\r\n#         [0.08625658967639374, 0.05129722585118765, 0.19410745233968804, 0.12543153049482164]\r\n#         [0.5723633170127869, 0.5723633170127869, 0.3549021780490875, 1.3177005052566528, 0.8633289337158203]\r\n\r\n#         current_time: 1590730998\r\n#         date_time: 2020-05-29 13:43:18\r\n#         current_phase: 6\r\n#         [0.07336197799145278, 0.04333070177814886, 0.17118863049095606, 0.11188811188811189]\r\n#         [0.07551020515190006, 0.047111743016730066, 0.17974058060531192, 0.12698412698412698]\r\n#         [0.0877367887009624, 0.052890596296164785, 0.2040429564118762, 0.13619167717528374]\r\n#         user_id 3 of phase 3 not in submission\r\n"
  },
  {
    "path": "code/4_RankFeature/01_sim_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[7]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[8]:\r\n\r\n\r\n\r\n\r\n\r\ndef ReComputeSim(sim_cor,candidate_item_list,interacted_items,item_weight_dict,flag=False):\r\n    \r\n    sim_list = []\r\n    for j in candidate_item_list:\r\n        sim_tmp = 0\r\n        for loc, i in enumerate(interacted_items):  \r\n        #Just for RA gernerated by offline\r\n            if i not in sim_cor or j not in sim_cor[i]:\r\n                continue\r\n            if i in item_weight_dict:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * item_weight_dict[i] if flag else sim_cor[i][j] * (0.7**loc) * item_weight_dict[i]\r\n            else:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * 0.5 if flag else sim_cor[i][j] * (0.7**loc) * 0.5\r\n        \r\n        sim_list.append(sim_tmp)\r\n            \r\n    return sim_list\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nfile_name = 'recall_0531'\r\n\r\noffline = pd.read_csv('./user_data/model_1/new_recall/' + file_name + '.csv')\r\n\r\nnow_phase = 9\r\n\r\n\r\ntrain_path = './user_data/model_1/'  \r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nout_path = './user_data/model_1/new_similarity/'\r\n\r\nrecom_item = []  \r\n\r\nwhole_click = pd.DataFrame()  \r\n\r\n\r\nuser_id_list = []\r\nitem_id_list = []\r\n\r\nitem_sim_list = []\r\nra_sim_list = []\r\naa_sim_list = []\r\ncn_sim_list = []\r\ntxt_sim_list = []\r\n\r\nhdi_sim_list = []\r\nhpi_sim_list = []\r\nlhn1_sim_list = []\r\n\r\n    \r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path +  header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path +  header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n    click_train['datetime'] = pd.to_datetime(click_train['datetime'])\r\n    click_test['datetime'] = pd.to_datetime(click_test['datetime'])\r\n    click_query['datetime'] = pd.to_datetime(click_query['datetime'])\r\n\r\n\r\n\r\n    click_train['timestamp'] = click_train['datetime'].dt.day + ( click_train['datetime'].dt.hour + \r\n                          (click_train['datetime'].dt.minute + click_train['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_test['timestamp'] = click_test['datetime'].dt.day + ( click_test['datetime'].dt.hour + \r\n                          (click_test['datetime'].dt.minute + click_test['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_query['timestamp'] = click_query['datetime'].dt.day + ( click_query['datetime'].dt.hour + \r\n                          (click_query['datetime'].dt.minute + click_query['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n        \r\n\r\n    with open(out_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n        user_item_tmp = pickle.load(f)         \r\n        \r\n    with open(out_path+'CN_P'+str(c)+'_new.pkl','rb') as f:\r\n         CN_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = 
min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(CN_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        cn_sim_list += sim_list_tmp    \r\n        \r\n        item_id_list += candidate_item_list\r\n        user_id_list += [row['user_id'] for x in candidate_item_list]        \r\n        \r\n    CN_sim_list_new = []        \r\n\r\n    \r\n        \r\n    with open(out_path+'HDI_P'+str(c)+'_new.pkl','rb') as f:\r\n         HDI_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(HDI_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        hdi_sim_list += sim_list_tmp\r\n         \r\n    \r\n    HDI_sim_list_new = []   \r\n\r\n    \r\n    with open(out_path+'HPI_P'+str(c)+'_new.pkl','rb') as f:\r\n         HPI_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(HPI_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        hpi_sim_list += sim_list_tmp\r\n         \r\n    \r\n    HPI_sim_list_new = []      \r\n    \r\n    \r\n    with open(out_path+'LHN1_P'+str(c)+'_new.pkl','rb') as f:\r\n         LHN1_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) 
/ (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(LHN1_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        lhn1_sim_list += sim_list_tmp\r\n\r\n    LHN1_sim_list_new = []\r\n\r\n\r\n# In[10]:\r\n\r\n\r\noffline.shape\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nlen(lhn1_sim_list)\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nsim_df = pd.DataFrame()\r\nsim_df['user_id'] = user_id_list\r\nsim_df['item_id'] = item_id_list\r\nsim_df['cn_sim'] = cn_sim_list\r\nsim_df['hpi_sim'] = hpi_sim_list\r\nsim_df['hdi_sim'] = hdi_sim_list\r\nsim_df['lhn1_sim'] = lhn1_sim_list\r\n\r\n\r\n# In[13]:\r\n\r\n\r\nsim_df.shape\r\n\r\n\r\n# In[14]:\r\n\r\n\r\noffline = offline.merge(sim_df,on=['user_id','item_id'])\r\n\r\n\r\n# In[17]:\r\n\r\n\r\noffline.to_csv('./user_data/model_1/new_recall/'+ file_name + '_addsim.csv',index=False)\r\n"
  },
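All five 01_sim_feature_* scripts re-score recall candidates with the same ReComputeSim routine: each historically clicked item contributes its similarity to the candidate, decayed by how far back the click sits and scaled by a recency weight. A minimal standalone sketch of that weighting, with hypothetical toy values (sim_cor, interacted, weights) standing in for the pickled similarity matrices and per-user time weights:

```python
# Toy check of ReComputeSim's weighting; all values are hypothetical.
sim_cor = {101: {7: 0.9, 8: 0.2}, 102: {7: 0.4}}  # {item_i: {item_j: sim}}
interacted = [102, 101]                            # most recent click first
weights = {101: 0.8, 102: 0.3}                     # per-item time weights

def score(candidate):
    total = 0.0
    for loc, i in enumerate(interacted):
        if i in sim_cor and candidate in sim_cor[i]:
            # positional decay 0.7**loc, missing weights default to 0.5
            total += sim_cor[i][candidate] * (0.7 ** loc) * weights.get(i, 0.5)
    return total

print(score(7))  # 0.4*1.0*0.3 + 0.9*0.7*0.8 = 0.624
```

The 0.7**loc decay means a click ten positions back keeps under 3% of its raw similarity, so the score is dominated by the most recent clicks.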
  {
    "path": "code/4_RankFeature/01_sim_feature_model1_RA_AA.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[ ]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\ndef ReComputeSim(sim_cor,candidate_item_list,interacted_items,item_weight_dict,flag=False):\r\n    \r\n    sim_list = []\r\n    for j in candidate_item_list:\r\n        sim_tmp = 0\r\n        for loc, i in enumerate(interacted_items):  \r\n        #Just for RA gernerated by offline\r\n            if i not in sim_cor or j not in sim_cor[i]:\r\n                continue\r\n            if i in item_weight_dict:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * item_weight_dict[i] if flag else sim_cor[i][j] * (0.7**loc) * item_weight_dict[i]\r\n            else:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * 0.5 if flag else sim_cor[i][j] * (0.7**loc) * 0.5\r\n        \r\n        sim_list.append(sim_tmp)\r\n            \r\n    return sim_list\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim'\r\n\r\noffline = pd.read_csv('./user_data/model_1/new_recall/' + file_name + '.csv')\r\n\r\nnow_phase = 9\r\n\r\n\r\ntrain_path = './user_data/model_1/'  \r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nout_path = './user_data/model_1/new_similarity/'\r\n\r\nrecom_item = []  \r\n\r\nwhole_click = pd.DataFrame()  \r\n\r\n\r\nuser_id_list = []\r\nitem_id_list = []\r\n\r\n\r\nra_sim_list = []\r\naa_sim_list = []\r\n\r\n\r\n    \r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path +  header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path +  header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n    click_train['datetime'] = pd.to_datetime(click_train['datetime'])\r\n    click_test['datetime'] = pd.to_datetime(click_test['datetime'])\r\n    click_query['datetime'] = pd.to_datetime(click_query['datetime'])\r\n\r\n\r\n\r\n    click_train['timestamp'] = click_train['datetime'].dt.day + ( click_train['datetime'].dt.hour + \r\n                          (click_train['datetime'].dt.minute + click_train['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_test['timestamp'] = click_test['datetime'].dt.day + ( click_test['datetime'].dt.hour + \r\n                          (click_test['datetime'].dt.minute + click_test['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_query['timestamp'] = click_query['datetime'].dt.day + ( click_query['datetime'].dt.hour + \r\n                          (click_query['datetime'].dt.minute + click_query['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n        \r\n\r\n    with open(out_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n        user_item_tmp = pickle.load(f)         \r\n        \r\n    with open(out_path+'RA_P'+str(c)+'_new.pkl','rb') as f:\r\n         RA_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = 
all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(RA_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        ra_sim_list += sim_list_tmp\r\n\r\n        item_id_list += candidate_item_list\r\n        user_id_list += [row['user_id'] for x in candidate_item_list]\r\n\r\n    RA_sim_list_new = []\r\n\r\n    with open(out_path+'AA_P'+str(c)+'_new.pkl','rb') as f:\r\n        AA_sim_list_new = pickle.load(f)\r\n\r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n\r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(AA_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        aa_sim_list += sim_list_tmp\r\n\r\n    AA_sim_list_new = []\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\noffline.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nsim_df = pd.DataFrame()\r\nsim_df['user_id'] = user_id_list\r\nsim_df['item_id'] = item_id_list\r\nsim_df['ra_sim'] = ra_sim_list\r\nsim_df['aa_sim'] = aa_sim_list\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nsim_df.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\noffline = offline.merge(sim_df,on=['user_id','item_id'])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\noffline.to_csv('./user_data/model_1/new_recall/'+ file_name + '_addAA_RA.csv',index=False)\r\n"
  },
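This script only consumes the RA_P*/AA_P* pickles produced upstream by the RA_Wu_* scripts in 2_Similarity. For reference, a sketch of the textbook Resource Allocation and Adamic-Adar indices on an undirected item graph, assuming the standard link-prediction definitions (the helper and toy graph below are hypothetical, not taken from the upstream code):

```python
import math
from collections import defaultdict

# RA(i,j) = sum over common neighbours z of 1/deg(z);
# AA(i,j) uses 1/log(deg(z)) instead.
def ra_aa(neighbors, i, j):
    common = neighbors[i] & neighbors[j]
    ra = sum(1.0 / len(neighbors[z]) for z in common)
    aa = sum(1.0 / math.log(len(neighbors[z])) for z in common if len(neighbors[z]) > 1)
    return ra, aa

g = defaultdict(set)
for a, b in [(1, 2), (1, 3), (2, 3), (3, 4), (2, 4)]:  # toy co-click edges
    g[a].add(b)
    g[b].add(a)

print(ra_aa(g, 1, 4))  # common neighbours {2, 3}: RA = 2/3, AA = 2/log(3)
```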
  {
    "path": "code/4_RankFeature/01_sim_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[2]:\r\n\r\n\r\n\r\n\r\n\r\ndef ReComputeSim(sim_cor,candidate_item_list,interacted_items,item_weight_dict,flag=False):\r\n    \r\n    sim_list = []\r\n    for j in candidate_item_list:\r\n        sim_tmp = 0\r\n        for loc, i in enumerate(interacted_items):  \r\n        #Just for RA gernerated by offline\r\n            if i not in sim_cor or j not in sim_cor[i]:\r\n                continue\r\n            if i in item_weight_dict:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * item_weight_dict[i] if flag else sim_cor[i][j] * (0.7**loc) * item_weight_dict[i]\r\n            else:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * 0.5 if flag else sim_cor[i][j] * (0.7**loc) * 0.5\r\n        \r\n        sim_list.append(sim_tmp)\r\n            \r\n    return sim_list\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531'\r\n\r\noffline = pd.read_csv('./user_data/offline/new_recall/' + file_name + '.csv')\r\n\r\nnow_phase = 9\r\n\r\n\r\ntrain_path = './user_data/offline/'  \r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nout_path = './user_data/offline/new_similarity/'\r\n\r\nrecom_item = []  \r\n\r\nwhole_click = pd.DataFrame()  \r\n\r\n\r\nuser_id_list = []\r\nitem_id_list = []\r\n\r\nitem_sim_list = []\r\nra_sim_list = []\r\naa_sim_list = []\r\ncn_sim_list = []\r\ntxt_sim_list = []\r\n\r\nhdi_sim_list = []\r\nhpi_sim_list = []\r\nlhn1_sim_list = []\r\n\r\n    \r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path +  header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path +  header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n    click_train['datetime'] = pd.to_datetime(click_train['datetime'])\r\n    click_test['datetime'] = pd.to_datetime(click_test['datetime'])\r\n    click_query['datetime'] = pd.to_datetime(click_query['datetime'])\r\n\r\n\r\n\r\n    click_train['timestamp'] = click_train['datetime'].dt.day + ( click_train['datetime'].dt.hour + \r\n                          (click_train['datetime'].dt.minute + click_train['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_test['timestamp'] = click_test['datetime'].dt.day + ( click_test['datetime'].dt.hour + \r\n                          (click_test['datetime'].dt.minute + click_test['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_query['timestamp'] = click_query['datetime'].dt.day + ( click_query['datetime'].dt.hour + \r\n                          (click_query['datetime'].dt.minute + click_query['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n        \r\n\r\n    with open(out_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n        user_item_tmp = pickle.load(f)         \r\n        \r\n    with open(out_path+'CN_P'+str(c)+'_new.pkl','rb') as f:\r\n         CN_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = 
min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(CN_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        cn_sim_list += sim_list_tmp    \r\n        \r\n        item_id_list += candidate_item_list\r\n        user_id_list += [row['user_id'] for x in candidate_item_list]        \r\n        \r\n    CN_sim_list_new = []        \r\n\r\n    \r\n        \r\n    with open(out_path+'HDI_P'+str(c)+'_new.pkl','rb') as f:\r\n         HDI_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(HDI_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        hdi_sim_list += sim_list_tmp\r\n         \r\n    \r\n    HDI_sim_list_new = []   \r\n\r\n    \r\n    with open(out_path+'HPI_P'+str(c)+'_new.pkl','rb') as f:\r\n         HPI_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(HPI_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        hpi_sim_list += sim_list_tmp\r\n         \r\n    \r\n    HPI_sim_list_new = []      \r\n    \r\n    \r\n    with open(out_path+'LHN1_P'+str(c)+'_new.pkl','rb') as f:\r\n         LHN1_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) 
/ (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(LHN1_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        lhn1_sim_list += sim_list_tmp\r\n\r\n    LHN1_sim_list_new = []\r\n\r\n\r\n# In[4]:\r\n\r\n\r\noffline.shape\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nlen(lhn1_sim_list)\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nsim_df = pd.DataFrame()\r\nsim_df['user_id'] = user_id_list\r\nsim_df['item_id'] = item_id_list\r\nsim_df['cn_sim'] = cn_sim_list\r\nsim_df['hpi_sim'] = hpi_sim_list\r\nsim_df['hdi_sim'] = hdi_sim_list\r\nsim_df['lhn1_sim'] = lhn1_sim_list\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nsim_df.shape\r\n\r\n\r\n# In[8]:\r\n\r\n\r\noffline = offline.merge(sim_df,on=['user_id','item_id'])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\noffline.to_csv('./user_data/offline/new_recall/'+ file_name + '_addsim.csv',index=False)\r\n"
  },
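Likewise, the CN_P*/HDI_P*/HPI_P*/LHN1_P* pickles loaded here come from 2_Similarity. Assuming the standard definitions behind the names the README lists, the four indices are all the common-neighbour count under different degree normalizations; a hypothetical reference helper:

```python
# common = |N(i) & N(j)|, ki/kj = degrees of items i and j.
def second_order(common, ki, kj):
    cn = common                    # Common Neighbours
    hdi = common / max(ki, kj)     # Hub Depressed Index
    hpi = common / min(ki, kj)     # Hub Promoted Index
    lhn1 = common / (ki * kj)      # Leicht-Holme-Newman index
    return cn, hdi, hpi, lhn1

print(second_order(2, 3, 4))  # (2, 0.5, 0.667, 0.167)
```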
  {
    "path": "code/4_RankFeature/01_sim_feature_offline_RA_AA.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[2]:\r\n\r\n\r\n\r\n\r\n\r\ndef ReComputeSim(sim_cor,candidate_item_list,interacted_items,item_weight_dict,flag=False):\r\n    \r\n    sim_list = []\r\n    for j in candidate_item_list:\r\n        sim_tmp = 0\r\n        for loc, i in enumerate(interacted_items):  \r\n        #Just for RA gernerated by offline\r\n            if i not in sim_cor or j not in sim_cor[i]:\r\n                continue\r\n            if i in item_weight_dict:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * item_weight_dict[i] if flag else sim_cor[i][j] * (0.7**loc) * item_weight_dict[i]\r\n            else:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * 0.5 if flag else sim_cor[i][j] * (0.7**loc) * 0.5\r\n        \r\n        sim_list.append(sim_tmp)\r\n            \r\n    return sim_list\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim'\r\n\r\noffline = pd.read_csv('./user_data/offline/new_recall/' + file_name + '.csv')\r\n\r\nnow_phase = 9\r\n\r\n\r\ntrain_path = './user_data/offline/'  \r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nout_path = './user_data/offline/new_similarity/'\r\n\r\nrecom_item = []  \r\n\r\nwhole_click = pd.DataFrame()  \r\n\r\n\r\nuser_id_list = []\r\nitem_id_list = []\r\n\r\n\r\nra_sim_list = []\r\naa_sim_list = []\r\n\r\n\r\n    \r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path +  header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path +  header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n    click_train['datetime'] = pd.to_datetime(click_train['datetime'])\r\n    click_test['datetime'] = pd.to_datetime(click_test['datetime'])\r\n    click_query['datetime'] = pd.to_datetime(click_query['datetime'])\r\n\r\n\r\n\r\n    click_train['timestamp'] = click_train['datetime'].dt.day + ( click_train['datetime'].dt.hour + \r\n                          (click_train['datetime'].dt.minute + click_train['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_test['timestamp'] = click_test['datetime'].dt.day + ( click_test['datetime'].dt.hour + \r\n                          (click_test['datetime'].dt.minute + click_test['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_query['timestamp'] = click_query['datetime'].dt.day + ( click_query['datetime'].dt.hour + \r\n                          (click_query['datetime'].dt.minute + click_query['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n        \r\n\r\n    with open(out_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n        user_item_tmp = pickle.load(f)         \r\n        \r\n    with open(out_path+'RA_P'+str(c)+'_new.pkl','rb') as f:\r\n         RA_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = 
all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(RA_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        ra_sim_list += sim_list_tmp\r\n\r\n        item_id_list += candidate_item_list\r\n        user_id_list += [row['user_id'] for x in candidate_item_list]\r\n\r\n    RA_sim_list_new = []\r\n\r\n    with open(out_path+'AA_P'+str(c)+'_new.pkl','rb') as f:\r\n        AA_sim_list_new = pickle.load(f)\r\n\r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n\r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(AA_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        aa_sim_list += sim_list_tmp\r\n\r\n    AA_sim_list_new = []\r\n\r\n\r\n# In[4]:\r\n\r\n\r\noffline.shape\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nsim_df = pd.DataFrame()\r\nsim_df['user_id'] = user_id_list\r\nsim_df['item_id'] = item_id_list\r\nsim_df['ra_sim'] = ra_sim_list\r\nsim_df['aa_sim'] = aa_sim_list\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nsim_df.shape\r\n\r\n\r\n# In[7]:\r\n\r\n\r\noffline = offline.merge(sim_df,on=['user_id','item_id'])\r\n\r\n\r\n# In[8]:\r\n\r\n\r\noffline.to_csv('./user_data/offline/new_recall/'+ file_name + '_addAA_RA.csv',index=False)\r\n"
  },
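Each of these scripts first collapses a datetime to "day + fraction of day" so that the recency weight 1 - (t_max - t + 0.01) / (t_max - t_min + 0.01) can be computed on a single number. A quick check of the conversion on a toy date:

```python
import pandas as pd

# 18:30:00 -> 18.5 hours -> 0.7708 of a day; day 6 gives 6.7708...
dt = pd.to_datetime(pd.Series(['2020-04-06 18:30:00']))
ts = dt.dt.day + (dt.dt.hour + (dt.dt.minute + dt.dt.second / 60) / 60) / 24
print(ts.iloc[0])
```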
  {
    "path": "code/4_RankFeature/01_sim_feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[2]:\r\n\r\n\r\n\r\n\r\n\r\ndef ReComputeSim(sim_cor,candidate_item_list,interacted_items,item_weight_dict,flag=False):\r\n    \r\n    sim_list = []\r\n    for j in candidate_item_list:\r\n        sim_tmp = 0\r\n        for loc, i in enumerate(interacted_items):  \r\n        #Just for RA gernerated by offline\r\n            if i not in sim_cor or j not in sim_cor[i]:\r\n                continue\r\n            if i in item_weight_dict:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * item_weight_dict[i] if flag else sim_cor[i][j] * (0.7**loc) * item_weight_dict[i]\r\n            else:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * 0.5 if flag else sim_cor[i][j] * (0.7**loc) * 0.5\r\n        \r\n        sim_list.append(sim_tmp)\r\n            \r\n    return sim_list\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531'\r\n\r\noffline = pd.read_csv('./user_data/dataset/new_recall/' + file_name + '.csv')\r\n\r\nnow_phase = 9\r\n\r\n\r\ntrain_path = './user_data/dataset/'  \r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nout_path = './user_data/dataset/new_similarity/'\r\n\r\nrecom_item = []  \r\n\r\nwhole_click = pd.DataFrame()  \r\n\r\n\r\nuser_id_list = []\r\nitem_id_list = []\r\n\r\nitem_sim_list = []\r\nra_sim_list = []\r\naa_sim_list = []\r\ncn_sim_list = []\r\ntxt_sim_list = []\r\n\r\nhdi_sim_list = []\r\nhpi_sim_list = []\r\nlhn1_sim_list = []\r\n\r\n    \r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path +  header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path +  header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n    click_train['datetime'] = pd.to_datetime(click_train['datetime'])\r\n    click_test['datetime'] = pd.to_datetime(click_test['datetime'])\r\n    click_query['datetime'] = pd.to_datetime(click_query['datetime'])\r\n\r\n\r\n\r\n    click_train['timestamp'] = click_train['datetime'].dt.day + ( click_train['datetime'].dt.hour + \r\n                          (click_train['datetime'].dt.minute + click_train['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_test['timestamp'] = click_test['datetime'].dt.day + ( click_test['datetime'].dt.hour + \r\n                          (click_test['datetime'].dt.minute + click_test['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_query['timestamp'] = click_query['datetime'].dt.day + ( click_query['datetime'].dt.hour + \r\n                          (click_query['datetime'].dt.minute + click_query['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n        \r\n\r\n    with open(out_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n        user_item_tmp = pickle.load(f)         \r\n        \r\n    with open(out_path+'CN_P'+str(c)+'_new.pkl','rb') as f:\r\n         CN_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = 
min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(CN_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        cn_sim_list += sim_list_tmp    \r\n        \r\n        item_id_list += candidate_item_list\r\n        user_id_list += [row['user_id'] for x in candidate_item_list]        \r\n        \r\n    CN_sim_list_new = []        \r\n\r\n    \r\n        \r\n    with open(out_path+'HDI_P'+str(c)+'_new.pkl','rb') as f:\r\n         HDI_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(HDI_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        hdi_sim_list += sim_list_tmp\r\n         \r\n    \r\n    HDI_sim_list_new = []   \r\n\r\n    \r\n    with open(out_path+'HPI_P'+str(c)+'_new.pkl','rb') as f:\r\n         HPI_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n        \r\n        sim_list_tmp = ReComputeSim(HPI_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        hpi_sim_list += sim_list_tmp\r\n         \r\n    \r\n    HPI_sim_list_new = []      \r\n    \r\n    \r\n    with open(out_path+'LHN1_P'+str(c)+'_new.pkl','rb') as f:\r\n         LHN1_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) 
/ (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(LHN1_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        lhn1_sim_list += sim_list_tmp\r\n\r\n    LHN1_sim_list_new = []\r\n\r\n\r\n# In[4]:\r\n\r\n\r\noffline.shape\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nlen(lhn1_sim_list)\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nsim_df = pd.DataFrame()\r\nsim_df['user_id'] = user_id_list\r\nsim_df['item_id'] = item_id_list\r\nsim_df['cn_sim'] = cn_sim_list\r\nsim_df['hpi_sim'] = hpi_sim_list\r\nsim_df['hdi_sim'] = hdi_sim_list\r\nsim_df['lhn1_sim'] = lhn1_sim_list\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nsim_df.shape\r\n\r\n\r\n# In[8]:\r\n\r\n\r\noffline = offline.merge(sim_df,on=['user_id','item_id'])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\noffline.to_csv('./user_data/dataset/new_recall/'+ file_name + '_addsim.csv',index=False)\r\n\r\n\r\n# In[10]:\r\n\r\n\r\noffline.shape\r\n"
  },
  {
    "path": "code/4_RankFeature/01_sim_feature_online_RA_AA.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[2]:\r\n\r\n\r\n\r\n\r\n\r\ndef ReComputeSim(sim_cor,candidate_item_list,interacted_items,item_weight_dict,flag=False):\r\n    \r\n    sim_list = []\r\n    for j in candidate_item_list:\r\n        sim_tmp = 0\r\n        for loc, i in enumerate(interacted_items):  \r\n        #Just for RA gernerated by offline\r\n            if i not in sim_cor or j not in sim_cor[i]:\r\n                continue\r\n            if i in item_weight_dict:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * item_weight_dict[i] if flag else sim_cor[i][j] * (0.7**loc) * item_weight_dict[i]\r\n            else:\r\n                sim_tmp += sim_cor[i][j][0] * (0.7**loc) * 0.5 if flag else sim_cor[i][j] * (0.7**loc) * 0.5\r\n        \r\n        sim_list.append(sim_tmp)\r\n            \r\n    return sim_list\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim'\r\n\r\noffline = pd.read_csv('./user_data/dataset/new_recall/' + file_name + '.csv')\r\n\r\nnow_phase = 9\r\n\r\n\r\ntrain_path = './user_data/dataset/'  \r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nout_path = './user_data/dataset/new_similarity/'\r\n\r\nrecom_item = []  \r\n\r\nwhole_click = pd.DataFrame()  \r\n\r\n\r\nuser_id_list = []\r\nitem_id_list = []\r\n\r\n\r\nra_sim_list = []\r\naa_sim_list = []\r\n\r\n\r\n    \r\nfor c in range(now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path +  header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path +  header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n    click_train['datetime'] = pd.to_datetime(click_train['datetime'])\r\n    click_test['datetime'] = pd.to_datetime(click_test['datetime'])\r\n    click_query['datetime'] = pd.to_datetime(click_query['datetime'])\r\n\r\n\r\n\r\n    click_train['timestamp'] = click_train['datetime'].dt.day + ( click_train['datetime'].dt.hour + \r\n                          (click_train['datetime'].dt.minute + click_train['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_test['timestamp'] = click_test['datetime'].dt.day + ( click_test['datetime'].dt.hour + \r\n                          (click_test['datetime'].dt.minute + click_test['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n    click_query['timestamp'] = click_query['datetime'].dt.day + ( click_query['datetime'].dt.hour + \r\n                          (click_query['datetime'].dt.minute + click_query['datetime'].dt.second/60)/float(60) )/float(24)\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n        \r\n\r\n    with open(out_path+'user2item_new'+str(c)+'.pkl','rb') as f:\r\n        user_item_tmp = pickle.load(f)         \r\n        \r\n    with open(out_path+'RA_P'+str(c)+'_new.pkl','rb') as f:\r\n         RA_sim_list_new = pickle.load(f)  \r\n    \r\n    \r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n        \r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = 
all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(RA_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        ra_sim_list += sim_list_tmp\r\n\r\n        item_id_list += candidate_item_list\r\n        user_id_list += [row['user_id'] for x in candidate_item_list]\r\n\r\n    RA_sim_list_new = []\r\n\r\n    with open(out_path+'AA_P'+str(c)+'_new.pkl','rb') as f:\r\n        AA_sim_list_new = pickle.load(f)\r\n\r\n    for i, row in click_query.iterrows():\r\n        offline_tmp = offline[offline['user_id']==row['user_id']]\r\n        candidate_item_list = list(offline_tmp['item_id'])\r\n\r\n        time_min = min(all_click['timestamp'])\r\n        time_max = row['timestamp']\r\n\r\n        df_tmp = all_click[all_click['user_id']==row['user_id']]\r\n        df_tmp = df_tmp.reset_index(drop=True)\r\n        df_tmp['weight'] = 1 - (time_max-df_tmp['timestamp']+0.01) / (time_max-time_min+0.01)\r\n        item_weight_dict = dict(zip(df_tmp['item_id'], df_tmp['weight']))\r\n\r\n        interacted_items = user_item_tmp[row['user_id']]\r\n        interacted_items = interacted_items[::-1]\r\n\r\n        sim_list_tmp = ReComputeSim(AA_sim_list_new,candidate_item_list,interacted_items,item_weight_dict)\r\n        aa_sim_list += sim_list_tmp\r\n\r\n    AA_sim_list_new = []\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\noffline.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nsim_df = pd.DataFrame()\r\nsim_df['user_id'] = user_id_list\r\nsim_df['item_id'] = item_id_list\r\nsim_df['ra_sim'] = ra_sim_list\r\nsim_df['aa_sim'] = aa_sim_list\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nsim_df.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\noffline = offline.merge(sim_df,on=['user_id','item_id'])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\noffline.to_csv('./user_data/dataset/new_recall/'+ file_name + '_addAA_RA.csv',index=False)\r\n"
  },
  {
    "path": "code/4_RankFeature/02_itemtime_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ndef extractItemCount(df, df_qTime, df_click, intervals, col_name):\r\n    \r\n    \r\n    \r\n    df_click = getTimeInterval(df_click,intervals)\r\n    \r\n    if 'time_interval' not in df.columns:\r\n        df_qTime = getTimeInterval(df_qTime,intervals)\r\n        df = df.merge(df_qTime[['user_id','time_interval']])\r\n    \r\n    df_click_sta = df_click[['user_id','item_id','time_interval']].groupby(by=['item_id','time_interval'],as_index=False).count()\r\n    df_click_sta.columns = ['item_id','time_interval',col_name]\r\n    \r\n    df = df.merge(df_click_sta,on=['item_id','time_interval'],how='left')\r\n    \r\n    return df\r\n    \r\n\r\n\r\n# In[3]:\r\n\r\n\r\ndef getTimeInterval(df,intervals):\r\n    df['hour_minute'] = (df['datetime'].dt.hour + df['datetime'].dt.minute/60)/24\r\n\r\n    time_interval_list = np.linspace(0,1,intervals)\r\n\r\n    df['time_interval'] = df['hour_minute'].apply(lambda x: np.where(x<time_interval_list)[0][0]-1 )\r\n    df['time_interval'] = (df['datetime'].dt.day - min(df['datetime'].dt.day))*intervals + df['time_interval']\r\n    return df\r\n\r\n\r\n# In[4]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\n\r\nnow_phase = 9\r\nfile_name = 'recall_0531_addsim_addAA_RA'\r\n\r\ndf = pd.read_csv('./user_data/model_1/new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nwhole_qTime = pd.DataFrame() \r\n\r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c))  \r\n    whole_qTime = whole_qTime.append(click_query)  \r\n    \r\nwhole_qTime = whole_qTime.reset_index(drop=True)\r\nwhole_qTime['datetime'] = pd.to_datetime(whole_qTime['datetime'])\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n    \r\n    \r\nwhole_click =  whole_click.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\nwhole_click = whole_click.sort_values('time')\r\nwhole_click = whole_click.reset_index(drop=True)\r\nwhole_click['datetime'] = pd.to_datetime(whole_click['datetime'])\r\n\r\n\r\n# In[7]:\r\n\r\n\r\ndf = extractItemCount(df,whole_qTime,whole_click,2,'item_count_12h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,4,'item_count_6h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,6,'item_count_4h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,12,'item_count_2h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,24,'item_count_1h')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndf.to_csv('./user_data/model_1/new_recall/' + file_name + '_additemtime.csv',index=False)\r\n\r\n"
  },
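getTimeInterval assigns each click to an intra-day bin and then offsets the bin index by the day. Note that np.linspace(0, 1, intervals) has intervals points and therefore intervals - 1 bins, so intervals=4 ('item_count_6h') actually yields three 8-hour bins per day; a toy trace of the bucket arithmetic:

```python
import numpy as np

intervals = 4
edges = np.linspace(0, 1, intervals)       # [0, 1/3, 2/3, 1]
for x in [0.1, 0.5, 0.9]:                  # fraction of the day
    bucket = np.where(x < edges)[0][0] - 1
    print(x, '->', bucket)                 # 0.1 -> 0, 0.5 -> 1, 0.9 -> 2
```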
  {
    "path": "code/4_RankFeature/02_itemtime_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ndef extractItemCount(df, df_qTime, df_click, intervals, col_name):\r\n    \r\n    \r\n    \r\n    df_click = getTimeInterval(df_click,intervals)\r\n    \r\n    if 'time_interval' not in df.columns:\r\n        df_qTime = getTimeInterval(df_qTime,intervals)\r\n        df = df.merge(df_qTime[['user_id','time_interval']])\r\n    \r\n    df_click_sta = df_click[['user_id','item_id','time_interval']].groupby(by=['item_id','time_interval'],as_index=False).count()\r\n    df_click_sta.columns = ['item_id','time_interval',col_name]\r\n    \r\n    df = df.merge(df_click_sta,on=['item_id','time_interval'],how='left')\r\n    \r\n    return df\r\n    \r\n\r\n\r\n# In[3]:\r\n\r\n\r\ndef getTimeInterval(df,intervals):\r\n    df['hour_minute'] = (df['datetime'].dt.hour + df['datetime'].dt.minute/60)/24\r\n\r\n    time_interval_list = np.linspace(0,1,intervals)\r\n\r\n    df['time_interval'] = df['hour_minute'].apply(lambda x: np.where(x<time_interval_list)[0][0]-1 )\r\n    df['time_interval'] = (df['datetime'].dt.day - min(df['datetime'].dt.day))*intervals + df['time_interval']\r\n    return df\r\n\r\n\r\n# In[4]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\n\r\nnow_phase = 9\r\nfile_name = 'recall_0531_addsim_addAA_RA'\r\n\r\ndf = pd.read_csv('./user_data/offline/new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nwhole_qTime = pd.DataFrame() \r\n\r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c))  \r\n    whole_qTime = whole_qTime.append(click_query)  \r\n    \r\nwhole_qTime = whole_qTime.reset_index(drop=True)\r\nwhole_qTime['datetime'] = pd.to_datetime(whole_qTime['datetime'])\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n    \r\n    \r\nwhole_click =  whole_click.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\nwhole_click = whole_click.sort_values('time')\r\nwhole_click = whole_click.reset_index(drop=True)\r\nwhole_click['datetime'] = pd.to_datetime(whole_click['datetime'])\r\n\r\n\r\n# In[7]:\r\n\r\n\r\ndf = extractItemCount(df,whole_qTime,whole_click,2,'item_count_12h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,4,'item_count_6h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,6,'item_count_4h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,12,'item_count_2h')\r\ndf = extractItemCount(df,whole_qTime,whole_click,24,'item_count_1h')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndf.to_csv('./user_data/offline/new_recall/' + file_name + '_additemtime.csv',index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/02_itemtime_feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ndef extractItemCount(df, df_qTime, df_click, intervals, col_name):\r\n    \r\n    \r\n    \r\n    df_click = getTimeInterval(df_click,intervals)\r\n    \r\n    if 'time_interval' not in df.columns:\r\n        df_qTime = getTimeInterval(df_qTime,intervals)\r\n        df = df.merge(df_qTime[['user_id','time_interval']])\r\n    \r\n    df_click_sta = df_click[['user_id','item_id','time_interval']].groupby(by=['item_id','time_interval'],as_index=False).count()\r\n    df_click_sta.columns = ['item_id','time_interval',col_name]\r\n    \r\n    df = df.merge(df_click_sta,on=['item_id','time_interval'],how='left')\r\n    \r\n    return df\r\n    \r\n\r\n\r\n# In[3]:\r\n\r\n\r\ndef getTimeInterval(df,intervals):\r\n    df['hour_minute'] = (df['datetime'].dt.hour + df['datetime'].dt.minute/60)/24\r\n\r\n    time_interval_list = np.linspace(0,1,intervals)\r\n\r\n    df['time_interval'] = df['hour_minute'].apply(lambda x: np.where(x<time_interval_list)[0][0]-1 )\r\n    df['time_interval'] = (df['datetime'].dt.day - min(df['datetime'].dt.day))*intervals + df['time_interval']\r\n    return df\r\n\r\n\r\n# In[4]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\n\r\nnow_phase = 9\r\nfile_name = 'recall_0531_addsim_addAA_RA'\r\n\r\ndf = pd.read_csv('./user_data/dataset/new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[6]:\r\n\r\n\r\ndf.head()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nwhole_qTime = pd.DataFrame() \r\n\r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c))  \r\n    whole_qTime = whole_qTime.append(click_query)  \r\n    \r\nwhole_qTime = whole_qTime.reset_index(drop=True)\r\nwhole_qTime['datetime'] = pd.to_datetime(whole_qTime['datetime'])\r\n\r\n\r\n# In[8]:\r\n\r\n\r\nwhole_qTime.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n    \r\n    \r\nwhole_click =  whole_click.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\nwhole_click = whole_click.sort_values('time')\r\nwhole_click = whole_click.reset_index(drop=True)\r\nwhole_click['datetime'] = pd.to_datetime(whole_click['datetime'])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\nwhole_qTime.shape\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = extractItemCount(df,whole_qTime,whole_click,2,'item_count_12h')\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf.head()\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf = extractItemCount(df,whole_qTime,whole_click,4,'item_count_6h')\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[16]:\r\n\r\n\r\ndf.head()\r\n\r\n\r\n# In[17]:\r\n\r\n\r\ndf = extractItemCount(df,whole_qTime,whole_click,6,'item_count_4h')\r\n\r\n\r\n# In[18]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[19]:\r\n\r\n\r\ndf.head()\r\n\r\n\r\n# 
In[20]:\r\n\r\n\r\ndf = extractItemCount(df,whole_qTime,whole_click,12,'item_count_2h')\r\n\r\n\r\n# In[21]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[22]:\r\n\r\n\r\ndf.head()\r\n\r\n\r\n# In[23]:\r\n\r\n\r\ndf = extractItemCount(df,whole_qTime,whole_click,24,'item_count_1h')\r\n\r\n\r\n# In[24]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[25]:\r\n\r\n\r\ndf\r\n\r\n\r\n# In[27]:\r\n\r\n\r\ndf.to_csv('./user_data/dataset/new_recall/' + file_name + '_additemtime.csv',index=False)\r\n"
  },
  {
    "path": "code/4_RankFeature/03_count_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[2]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\n\r\nfrom tqdm import tqdm\r\n\r\n\r\n# In[3]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[1]:\r\n\r\n\r\nitem_count_dict = {}\r\nuser_phase_dict = {}\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n    df_item_count = whole_click.groupby(['item_id'],as_index=False)['user_id'].agg({'count':'count'}) \r\n    \r\n    for i, row in df_item_count.iterrows():\r\n        item_count_dict.setdefault(row['item_id'],list(np.zeros(now_phase+1)))\r\n        item_count_dict[row['item_id']][c] = row['count']   \r\n        \r\n    \r\n    for i, row in click_query.iterrows():\r\n        user_phase_dict[row['user_id']] = c\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nitem_count_df = pd.DataFrame({'item_id': list(item_count_dict.keys()), 'count': list(item_count_dict.values())})\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nitem_count_list = []\r\n\r\nfor i,row in item_count_df.iterrows():\r\n    for j in range(now_phase + 1):\r\n        item_count_list.append([row['item_id'],j,row['count'][j]])\r\n\r\nitem_count_df = pd.DataFrame(item_count_list, columns=['item_id', 'phrase', 'count'])\r\n\r\n\r\n# In[6]:\r\n\r\n\r\n# 与上下阶段的差\r\n\r\nitem_count_df_list = []\r\nfor i,x in tqdm(item_count_df.groupby('item_id')):\r\n    x['diff_from_last'] = x['count'].diff(1)\r\n    x['diff_from_next'] = x['count'].diff(-1)\r\n    item_count_df_list.append(x)\r\n\r\nitem_count_df = pd.concat(item_count_df_list)\r\n\r\n\r\n# 最大值与当前phase的差\r\n\r\nitem_count_df['max'] = item_count_df.groupby('item_id')['count'].transform('max')\r\nitem_count_df['diff_from_max'] = item_count_df['max'] - item_count_df['count']\r\nitem_count_df = item_count_df.drop(columns = 'max')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime'\r\n\r\ndf = pd.read_csv('./user_data/model_1/new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndf['phrase'] = df['user_id'].apply(lambda x:user_phase_dict[x])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n             right = item_count_df,\r\n             how = 'left',\r\n             on = ['item_id','phrase'])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf.to_csv('./user_data/model_1/new_recall/' + file_name + '_addcount.csv',index=False)\r\n\r\n"
  },
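The popularity deltas built above (difference to the previous/next phase and to the per-item peak) can be reproduced on a toy frame; groupby(...)['count'].diff is equivalent to the per-group loop the script uses:

```python
import pandas as pd

# 'count' is the item's cumulative click count per phase ('phrase').
df = pd.DataFrame({'item_id': [1, 1, 1], 'phrase': [0, 1, 2],
                   'count': [3.0, 7.0, 5.0]})
df['diff_from_last'] = df.groupby('item_id')['count'].diff(1)
df['diff_from_next'] = df.groupby('item_id')['count'].diff(-1)
df['diff_from_max'] = df.groupby('item_id')['count'].transform('max') - df['count']
print(df)
```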
  {
    "path": "code/4_RankFeature/03_count_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\n\r\nfrom tqdm import tqdm\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nitem_count_dict = {}\r\nuser_phase_dict = {}\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n    df_item_count = whole_click.groupby(['item_id'],as_index=False)['user_id'].agg({'count':'count'}) \r\n    \r\n    for i, row in df_item_count.iterrows():\r\n        item_count_dict.setdefault(row['item_id'],list(np.zeros(now_phase+1)))\r\n        item_count_dict[row['item_id']][c] = row['count']   \r\n        \r\n    \r\n    for i, row in click_query.iterrows():\r\n        user_phase_dict[row['user_id']] = c\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nitem_count_df = pd.DataFrame({'item_id': list(item_count_dict.keys()), 'count': list(item_count_dict.values())})\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nitem_count_list = []\r\n\r\nfor i,row in item_count_df.iterrows():\r\n    for j in range(now_phase + 1):\r\n        item_count_list.append([row['item_id'],j,row['count'][j]])\r\n\r\nitem_count_df = pd.DataFrame(item_count_list, columns=['item_id', 'phrase', 'count'])\r\n\r\n\r\n# In[6]:\r\n\r\n\r\n# 与上下阶段的差\r\n\r\nitem_count_df_list = []\r\nfor i,x in tqdm(item_count_df.groupby('item_id')):\r\n    x['diff_from_last'] = x['count'].diff(1)\r\n    x['diff_from_next'] = x['count'].diff(-1)\r\n    item_count_df_list.append(x)\r\n\r\nitem_count_df = pd.concat(item_count_df_list)\r\n\r\n\r\n# 最大值与当前phase的差\r\n\r\nitem_count_df['max'] = item_count_df.groupby('item_id')['count'].transform('max')\r\nitem_count_df['diff_from_max'] = item_count_df['max'] - item_count_df['count']\r\nitem_count_df = item_count_df.drop(columns = 'max')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime'\r\n\r\ndf = pd.read_csv('./user_data/offline/new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndf['phrase'] = df['user_id'].apply(lambda x:user_phase_dict[x])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n             right = item_count_df,\r\n             how = 'left',\r\n             on = ['item_id','phrase'])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf.to_csv('./user_data/offline/new_recall/' + file_name + '_addcount.csv',index=False)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
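A vectorized alternative (a sketch only, modulo column order) for flattening {item_id: [count per phase]} into long format, instead of the nested iterrows loop; DataFrame.explode exists from pandas 0.25, which the README pins:

```python
import pandas as pd

item_count_dict = {10: [0.0, 2.0, 5.0], 20: [1.0, 1.0, 4.0]}  # toy input
item_count_df = (pd.DataFrame({'item_id': list(item_count_dict),
                               'count': list(item_count_dict.values())})
                 .explode('count'))
item_count_df['phrase'] = item_count_df.groupby('item_id').cumcount()
print(item_count_df.reset_index(drop=True))
```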
  {
    "path": "code/4_RankFeature/03_count_feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\n\r\nfrom tqdm import tqdm\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nitem_count_dict = {}\r\nuser_phase_dict = {}\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n    df_item_count = whole_click.groupby(['item_id'],as_index=False)['user_id'].agg({'count':'count'}) \r\n    \r\n    for i, row in df_item_count.iterrows():\r\n        item_count_dict.setdefault(row['item_id'],list(np.zeros(now_phase+1)))\r\n        item_count_dict[row['item_id']][c] = row['count']   \r\n        \r\n    \r\n    for i, row in click_query.iterrows():\r\n        user_phase_dict[row['user_id']] = c\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nitem_count_df = pd.DataFrame({'item_id': list(item_count_dict.keys()), 'count': list(item_count_dict.values())})\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nitem_count_list = []\r\n\r\nfor i,row in item_count_df.iterrows():\r\n    for j in range(now_phase + 1):\r\n        item_count_list.append([row['item_id'],j,row['count'][j]])\r\n\r\nitem_count_df = pd.DataFrame(item_count_list, columns=['item_id', 'phrase', 'count'])\r\n\r\n\r\n# In[6]:\r\n\r\n\r\n# 与上下阶段的差\r\n\r\nitem_count_df_list = []\r\nfor i,x in tqdm(item_count_df.groupby('item_id')):\r\n    x['diff_from_last'] = x['count'].diff(1)\r\n    x['diff_from_next'] = x['count'].diff(-1)\r\n    item_count_df_list.append(x)\r\n\r\nitem_count_df = pd.concat(item_count_df_list)\r\n\r\n\r\n# 最大值与当前phase的差\r\n\r\nitem_count_df['max'] = item_count_df.groupby('item_id')['count'].transform('max')\r\nitem_count_df['diff_from_max'] = item_count_df['max'] - item_count_df['count']\r\nitem_count_df = item_count_df.drop(columns = 'max')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime'\r\n\r\ndf = pd.read_csv('./user_data/dataset/new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndf['phrase'] = df['user_id'].apply(lambda x:user_phase_dict[x])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n             right = item_count_df,\r\n             how = 'left',\r\n             on = ['item_id','phrase'])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf.to_csv('./user_data/dataset/new_recall/' + file_name + '_addcount.csv',index=False)\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf.head()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/04_NN_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nnn = pd.read_csv(train_path + 'nn/nn_' + header + '.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nresult = pd.DataFrame()\r\nresult['item'] = nn['item']\r\nresult['item'] = result['item'].apply(lambda x: x[1:])\r\nresult['item'] = result['item'].apply(lambda x: x[:-1])\r\nresult['item'] = result['item'].apply(lambda x: x.split(','))\r\nresult['item'] = result['item'].apply(lambda x: [int(y) for y in x]) \r\nresult['user_id'] = nn['user']\r\n\r\n\r\n\r\nresult['score'] = nn['score']\r\nresult['score'] = result['score'].apply(lambda x: x[1:])\r\nresult['score'] = result['score'].apply(lambda x: x[:-1])\r\nresult['score'] = result['score'].apply(lambda x: x.split(','))\r\nresult['score'] = result['score'].apply(lambda x: [float(y) for y in x]) \r\n\r\nresult['score'] = result['score'].apply(lambda x: [1/(1+np.exp(-y)) for y in x])\r\n\r\nrecom_item = []\r\n\r\nfor i,row in tqdm(result.iterrows()):\r\n    tmp_list = row['item']\r\n    score_list = row['score']\r\n    for j in range(len(score_list)):\r\n        recom_item.append([ row['user_id'],tmp_list[j],score_list[j]])\r\n\r\nrecom_df = pd.DataFrame(recom_item, columns=['user_id', 'item_id', 'nn']) \r\n\r\n\r\n# In[5]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount'\r\nrecall = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nrecall = pd.merge(left=recall,\r\n                 right=recom_df,\r\n                 how='left',\r\n                 on=['user_id','item_id'])\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nrecall.to_csv(train_path + 'new_recall/' + file_name + '_addnn.csv',index=False)\r\n\r\n\r\n# In[8]:\r\n\r\n\r\nrecall.describe()\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/04_NN_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nnn = pd.read_csv(train_path + 'nn/nn_' + header + '.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nresult = pd.DataFrame()\r\nresult['item'] = nn['item']\r\nresult['item'] = result['item'].apply(lambda x: x[1:])\r\nresult['item'] = result['item'].apply(lambda x: x[:-1])\r\nresult['item'] = result['item'].apply(lambda x: x.split(','))\r\nresult['item'] = result['item'].apply(lambda x: [int(y) for y in x]) \r\nresult['user_id'] = nn['user']\r\n\r\n\r\n\r\nresult['score'] = nn['score']\r\nresult['score'] = result['score'].apply(lambda x: x[1:])\r\nresult['score'] = result['score'].apply(lambda x: x[:-1])\r\nresult['score'] = result['score'].apply(lambda x: x.split(','))\r\nresult['score'] = result['score'].apply(lambda x: [float(y) for y in x]) \r\n\r\nresult['score'] = result['score'].apply(lambda x: [1/(1+np.exp(-y)) for y in x])\r\n\r\nrecom_item = []\r\n\r\nfor i,row in tqdm(result.iterrows()):\r\n    tmp_list = row['item']\r\n    score_list = row['score']\r\n    for j in range(len(score_list)):\r\n        recom_item.append([ row['user_id'],tmp_list[j],score_list[j]])\r\n\r\nrecom_df = pd.DataFrame(recom_item, columns=['user_id', 'item_id', 'nn']) \r\n\r\n\r\n# In[5]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount'\r\nrecall = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nrecall = pd.merge(left=recall,\r\n                 right=recom_df,\r\n                 how='left',\r\n                 on=['user_id','item_id'])\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nrecall.to_csv(train_path + 'new_recall/' + file_name + '_addnn.csv',index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/04_NN_feature_online.csv.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nimport json\r\nfrom sys import stdout\r\nimport pickle\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nnn = pd.read_csv(train_path + 'nn/nn_' + header + '.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nresult = pd.DataFrame()\r\nresult['item'] = nn['item']\r\nresult['item'] = result['item'].apply(lambda x: x[1:])\r\nresult['item'] = result['item'].apply(lambda x: x[:-1])\r\nresult['item'] = result['item'].apply(lambda x: x.split(','))\r\nresult['item'] = result['item'].apply(lambda x: [int(y) for y in x]) \r\nresult['user_id'] = nn['user']\r\n\r\n\r\n\r\nresult['score'] = nn['score']\r\nresult['score'] = result['score'].apply(lambda x: x[1:])\r\nresult['score'] = result['score'].apply(lambda x: x[:-1])\r\nresult['score'] = result['score'].apply(lambda x: x.split(','))\r\nresult['score'] = result['score'].apply(lambda x: [float(y) for y in x]) \r\n\r\nresult['score'] = result['score'].apply(lambda x: [1/(1+np.exp(-y)) for y in x])\r\n\r\nrecom_item = []\r\n\r\nfor i,row in tqdm(result.iterrows()):\r\n    tmp_list = row['item']\r\n    score_list = row['score']\r\n    for j in range(len(score_list)):\r\n        recom_item.append([ row['user_id'],tmp_list[j],score_list[j]])\r\n\r\nrecom_df = pd.DataFrame(recom_item, columns=['user_id', 'item_id', 'nn']) \r\n\r\n\r\n# In[5]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount'\r\nrecall = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nrecall.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nrecall = pd.merge(left=recall,\r\n                 right=recom_df,\r\n                 how='left',\r\n                 on=['user_id','item_id'])\r\n\r\n\r\n# In[8]:\r\n\r\n\r\nrecall.to_csv(train_path + 'new_recall/' + file_name + '_addnn.csv',index=False)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/05_txt_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[2]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\nfrom tqdm import tqdm\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\n\r\n\r\n# In[3]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in range(now_phase + 1):\r\n    file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n    user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[6]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/w2v_img_vec.txt')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\ntxt_feature = []\r\nimg_feature = []\r\n\r\n\r\n# In[8]:\r\n\r\n\r\nfor phase in range(0, now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        history_click = current_data[eachrow[0]][-15:]\r\n        item = eachrow[1]\r\n        txt_sim_list = []\r\n        img_sim_list = []\r\n        for related_item in history_click:\r\n            index = '_'.join(sorted([str(item), str(related_item)]))\r\n            \r\n            # calculate txt similarity\r\n            if index in txt_similarity:\r\n                txt_sim = txt_similarity[index]\r\n            else:\r\n                try:\r\n                    txt_sim = int(txt_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n                except:\r\n                    txt_sim = np.nan\r\n            txt_similarity[index] = txt_sim\r\n            txt_sim_list.append(txt_sim)\r\n                \r\n            # calculate img similarity\r\n            \r\n            if index in img_similarity:\r\n                img_sim = img_similarity[index]\r\n            else:\r\n                try:\r\n                    img_sim = int(img_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n                except:\r\n                    img_sim = np.nan\r\n            img_similarity[index] = img_sim\r\n            img_sim_list.append(img_sim)\r\n            \r\n        txt_feature.append([eachrow[0], item,\r\n                            np.nanmax(txt_sim_list),\r\n                            np.nanmean(txt_sim_list),\r\n                            np.nanstd(txt_sim_list),\r\n                            np.nansum(txt_sim_list),\r\n                            np.sum(np.isnan(txt_sim_list))])\r\n        img_feature.append([eachrow[0], item,\r\n                            np.nanmax(img_sim_list),\r\n                            np.nanmean(img_sim_list),\r\n                            np.nanstd(img_sim_list),\r\n                            np.nansum(img_sim_list),\r\n                            np.sum(np.isnan(img_sim_list))])\r\n    gc.collect()\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ntxt_feature = pd.DataFrame(txt_feature, columns=['user_id','item_id'] + ['txt_feature_' + str(x) for x in range(5)])\r\nimg_feature = pd.DataFrame(img_feature, columns=['user_id','item_id'] + ['img_feature_' + str(x) for x in range(5)])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right 
= txt_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = img_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_addtxt.csv', index=False)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ntxt_df = pd.DataFrame(txt_similarity.items())\r\ntxt_df.columns = ['item_pair','txt_sim']\r\ntxt_df = txt_df[~pd.isna(txt_df['txt_sim'])]\r\ntxt_df['txt_sim'] = txt_df['txt_sim'].astype(np.float16)\r\ntxt_df.to_csv('txt_similarity.csv', index=False)\r\ntxt_df = []\r\ntxt_similarity = []\r\ngc.collect()\r\n\r\nimg_df = pd.DataFrame(img_similarity.items())\r\nimg_df.columns = ['item_pair','img_sim']\r\nimg_df = img_df[~pd.isna(img_df['img_sim'])]\r\nimg_df['img_sim'] = img_df['img_sim'].astype(np.float16)\r\nimg_df.to_csv('img_similarity.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/05_txt_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[2]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\nfrom tqdm import tqdm\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\n\r\n\r\n# In[3]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in range(now_phase + 1):\r\n    file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n    user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[7]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/w2v_img_vec.txt')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntxt_df = pd.read_csv('txt_similarity.csv')\r\ntxt_similarity = dict(zip(txt_df['item_pair'],txt_df['txt_sim']))\r\ntxt_df = []\r\nimage_df = pd.read_csv('img_similarity.csv')\r\nimg_similarity = dict(zip(image_df['item_pair'],image_df['img_sim']))\r\nimage_df = []\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ntxt_feature = []\r\nimg_feature = []\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\nfor phase in range(0, now_phase+1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        history_click = current_data[eachrow[0]][-15:]\r\n        item = eachrow[1]\r\n        txt_sim_list = []\r\n        img_sim_list = []\r\n        for related_item in history_click:\r\n            index = '_'.join(sorted([str(item), str(related_item)]))\r\n            \r\n            # calculate txt similarity\r\n            if index in txt_similarity:\r\n                txt_sim = txt_similarity[index]\r\n            else:\r\n                try:\r\n                    txt_sim = txt_model.similarity(str(item), str(related_item))\r\n                except:\r\n                    txt_sim = np.nan\r\n            txt_similarity[index] = txt_sim\r\n            txt_sim_list.append(txt_sim)\r\n                \r\n            # calculate img similarity\r\n            \r\n            if index in img_similarity:\r\n                img_sim = img_similarity[index]\r\n            else:\r\n                try:\r\n                    img_sim = img_model.similarity(str(item), str(related_item))\r\n                except:\r\n                    img_sim = np.nan\r\n            img_similarity[index] = img_sim\r\n            img_sim_list.append(img_sim)\r\n            \r\n        txt_feature.append([eachrow[0], item,\r\n                            np.nanmax(txt_sim_list),\r\n                            np.nanmean(txt_sim_list),\r\n                            np.nanstd(txt_sim_list),\r\n                            np.nansum(txt_sim_list),\r\n                            np.sum(np.isnan(txt_sim_list))])\r\n        img_feature.append([eachrow[0], item,\r\n                            np.nanmax(img_sim_list),\r\n                            np.nanmean(img_sim_list),\r\n                            np.nanstd(img_sim_list),\r\n                            np.nansum(img_sim_list),\r\n                            np.sum(np.isnan(img_sim_list))])\r\n    gc.collect()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ntxt_feature = pd.DataFrame(txt_feature, columns=['user_id','item_id'] + ['txt_feature_' + str(x) for x 
in range(5)])\r\nimg_feature = pd.DataFrame(img_feature, columns=['user_id','item_id'] + ['img_feature_' + str(x) for x in range(5)])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = txt_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = img_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_addtxt.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/05_txt_feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\nfrom tqdm import tqdm\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# # 线上只需提交对7-9阶段的预测\r\n\r\n# In[4]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in range(now_phase + 1):\r\n    file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n    user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[5]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/w2v_img_vec.txt')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\ntxt_feature = []\r\nimg_feature = []\r\nfor phase in range(7,now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        history_click = current_data[eachrow[0]][-15:]\r\n        item = eachrow[1]\r\n        txt_sim_list = []\r\n        img_sim_list = []\r\n        for related_item in history_click:\r\n            index = '_'.join(sorted([str(item), str(related_item)]))\r\n            \r\n            # calculate txt similarity\r\n            if index in txt_similarity:\r\n                txt_sim = txt_similarity[index]\r\n            else:\r\n                try:\r\n                    txt_sim = txt_model.similarity(str(item), str(related_item))\r\n                except:\r\n                    txt_sim = np.nan\r\n            txt_similarity[index] = txt_sim\r\n            txt_sim_list.append(txt_sim)\r\n                \r\n            # calculate img similarity\r\n            \r\n            if index in img_similarity:\r\n                img_sim = img_similarity[index]\r\n            else:\r\n                try:\r\n                    img_sim = img_model.similarity(str(item), str(related_item))\r\n                except:\r\n                    img_sim = np.nan\r\n            img_similarity[index] = img_sim\r\n            img_sim_list.append(img_sim)\r\n            \r\n        txt_feature.append([eachrow[0], item,\r\n                            np.nanmax(txt_sim_list),\r\n                            np.nanmean(txt_sim_list),\r\n                            np.nanstd(txt_sim_list),\r\n                            np.nansum(txt_sim_list),\r\n                            np.sum(np.isnan(txt_sim_list))])\r\n        img_feature.append([eachrow[0], item,\r\n                            np.nanmax(img_sim_list),\r\n                            np.nanmean(img_sim_list),\r\n                            np.nanstd(img_sim_list),\r\n                            np.nansum(img_sim_list),\r\n                            np.sum(np.isnan(img_sim_list))])\r\n    gc.collect()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ntxt_feature = pd.DataFrame(txt_feature, columns=['user_id','item_id'] + ['txt_feature_' + str(x) for x in range(5)])\r\nimg_feature = pd.DataFrame(img_feature, columns=['user_id','item_id'] + ['img_feature_' + str(x) for x in range(5)])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              
right = txt_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = img_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_addtxt.csv', index=False)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/06_interactive_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\nfrom tqdm import tqdm\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nclick_trn = []\r\nclick_tst = []\r\n#qtime_tst = []\r\nfor p in tqdm(range(0, now_phase+1)):\r\n    tmp = pd.read_csv(train_path + header + f'_train_click-{p}.csv', header=None, names=['user_id', 'item_id', 'time'])\r\n    tmp['phrase'] = p\r\n    click_trn.append(tmp)\r\n    tmp = pd.read_csv(test_path + header + f'_test_click-{p}.csv', header=None, names=['user_id', 'item_id', 'time'])\r\n    tmp['phrase'] = p\r\n    click_tst.append(tmp)\r\n    #tmp = pd.read_csv(test_path + header + f'_test_qtime-{p}.csv', header=None, names=['user_id', 'item_id', 'query_time'])\r\n    #tmp['phrase'] = p\r\n    #qtime_tst.append(tmp)\r\n    \r\nclick_trn = pd.concat(click_trn, axis=0, ignore_index=True)\r\nclick_tst = pd.concat(click_tst, axis=0, ignore_index=True)\r\n#qtime_tst = pd.concat(qtime_tst, axis=0, ignore_index=True)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nclick_df = pd.concat([click_trn, click_tst], axis=0, ignore_index=True)\r\nclick_df['item_count'] = click_df.groupby(['item_id', 'phrase'])['user_id'].transform('count')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nclick_df.shape\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nclick_df.head()\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ncount_map = click_df[['item_id', 'phrase', 'item_count']].drop_duplicates()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndf.columns\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndef gen_add_1(df):\r\n    group_df = df.groupby(['user_id', 'phrase', 'item_count'])['time'].agg([['user_item_count_cnt', 'count'],\r\n                                                                            ['user_item_count_max_time', 'max'],\r\n                                                                            ['user_item_count_min_time', 'min']]).reset_index()\r\n    group_df['sum'] = group_df.groupby(['user_id', 'phrase'])['user_item_count_cnt'].transform('sum')\r\n    group_df['user_item_count_ratio'] = group_df['user_item_count_cnt'] / group_df['sum']\r\n    group_df['user_item_count_timedelta'] = group_df['user_item_count_max_time'] - group_df['user_item_count_min_time']\r\n    del group_df['sum']\r\n    return group_df\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = df.merge(count_map, on=['item_id', 'phrase'], how='left')\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ntrain_add1 = gen_add_1(click_df)\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ntrain_add1.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf = df.merge(train_add1, on=['user_id', 'phrase', 'item_count'], how='left')\r\n\r\n\r\n# In[16]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[18]:\r\n\r\n\r\n\r\n\r\n\r\n# In[17]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_interactive.csv', index=False)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ 
]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/06_interactive_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\nfrom tqdm import tqdm\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nclick_trn = []\r\nclick_tst = []\r\n#qtime_tst = []\r\nfor p in tqdm(range(0, now_phase+1)):\r\n    tmp = pd.read_csv(train_path + header + f'_train_click-{p}.csv', header=None, names=['user_id', 'item_id', 'time'])\r\n    tmp['phrase'] = p\r\n    click_trn.append(tmp)\r\n    tmp = pd.read_csv(test_path + header + f'_test_click-{p}.csv', header=None, names=['user_id', 'item_id', 'time'])\r\n    tmp['phrase'] = p\r\n    click_tst.append(tmp)\r\n    #tmp = pd.read_csv(test_path + header + f'_test_qtime-{p}.csv', header=None, names=['user_id', 'item_id', 'query_time'])\r\n    #tmp['phrase'] = p\r\n    #qtime_tst.append(tmp)\r\n    \r\nclick_trn = pd.concat(click_trn, axis=0, ignore_index=True)\r\nclick_tst = pd.concat(click_tst, axis=0, ignore_index=True)\r\n#qtime_tst = pd.concat(qtime_tst, axis=0, ignore_index=True)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nclick_df = pd.concat([click_trn, click_tst], axis=0, ignore_index=True)\r\nclick_df['item_count'] = click_df.groupby(['item_id', 'phrase'])['user_id'].transform('count')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nclick_df.shape\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nclick_df.head()\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ncount_map = click_df[['item_id', 'phrase', 'item_count']].drop_duplicates()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndef gen_add_1(df):\r\n    group_df = df.groupby(['user_id', 'phrase', 'item_count'])['time'].agg([['user_item_count_cnt', 'count'],\r\n                                                                            ['user_item_count_max_time', 'max'],\r\n                                                                            ['user_item_count_min_time', 'min']]).reset_index()\r\n    group_df['sum'] = group_df.groupby(['user_id', 'phrase'])['user_item_count_cnt'].transform('sum')\r\n    group_df['user_item_count_ratio'] = group_df['user_item_count_cnt'] / group_df['sum']\r\n    group_df['user_item_count_timedelta'] = group_df['user_item_count_max_time'] - group_df['user_item_count_min_time']\r\n    del group_df['sum']\r\n    return group_df\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf = df.merge(count_map, on=['item_id', 'phrase'], how='left')\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ntrain_add1 = gen_add_1(click_df)\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ntrain_add1.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf = df.merge(train_add1, on=['user_id', 'phrase', 'item_count'], how='left')\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[25]:\r\n\r\n\r\ndf[df['phrase'] == 7]['img_feature_1']\r\n\r\n\r\n# In[16]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_interactive.csv', index=False)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ 
]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/06_interactive_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pickle\r\nfrom tqdm import tqdm\r\nfrom gensim.models import KeyedVectors\r\nimport gc\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nclick_trn = []\r\nclick_tst = []\r\n#qtime_tst = []\r\nfor p in tqdm(range(0, now_phase+1)):\r\n    tmp = pd.read_csv(train_path + header + f'_train_click-{p}.csv', header=None, names=['user_id', 'item_id', 'time'])\r\n    tmp['phrase'] = p\r\n    click_trn.append(tmp)\r\n    tmp = pd.read_csv(test_path + header + f'_test_click-{p}.csv', header=None, names=['user_id', 'item_id', 'time'])\r\n    tmp['phrase'] = p\r\n    click_tst.append(tmp)\r\n    #tmp = pd.read_csv(test_path + header + f'_test_qtime-{p}.csv', header=None, names=['user_id', 'item_id', 'query_time'])\r\n    #tmp['phrase'] = p\r\n    #qtime_tst.append(tmp)\r\n    \r\nclick_trn = pd.concat(click_trn, axis=0, ignore_index=True)\r\nclick_tst = pd.concat(click_tst, axis=0, ignore_index=True)\r\n#qtime_tst = pd.concat(qtime_tst, axis=0, ignore_index=True)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nclick_df = pd.concat([click_trn, click_tst], axis=0, ignore_index=True)\r\nclick_df['item_count'] = click_df.groupby(['item_id', 'phrase'])['user_id'].transform('count')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nclick_df.shape\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nclick_df.head()\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ncount_map = click_df[['item_id', 'phrase', 'item_count']].drop_duplicates()\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndf.columns\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndef gen_add_1(df):\r\n    group_df = df.groupby(['user_id', 'phrase', 'item_count'])['time'].agg([['user_item_count_cnt', 'count'],\r\n                                                                            ['user_item_count_max_time', 'max'],\r\n                                                                            ['user_item_count_min_time', 'min']]).reset_index()\r\n    group_df['sum'] = group_df.groupby(['user_id', 'phrase'])['user_item_count_cnt'].transform('sum')\r\n    group_df['user_item_count_ratio'] = group_df['user_item_count_cnt'] / group_df['sum']\r\n    group_df['user_item_count_timedelta'] = group_df['user_item_count_max_time'] - group_df['user_item_count_min_time']\r\n    del group_df['sum']\r\n    return group_df\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = df.merge(count_map, on=['item_id', 'phrase'], how='left')\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ntrain_add1 = gen_add_1(click_df)\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ntrain_add1.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf = df.merge(train_add1, on=['user_id', 'phrase', 'item_count'], how='left')\r\n\r\n\r\n# In[16]:\r\n\r\n\r\ndf.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[17]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_interactive.csv', index=False)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ 
]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/07_count_detail_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    all_click['phrase'] = c\r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n# In[4]:\r\n\r\n\r\nwhole_click['item_count'] = whole_click.groupby(['item_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nwhole_click['item_max_time_in_phrase'] = whole_click.groupby(['item_id','phrase'])['time'].transform('max')\r\n\r\n\r\n# 比如当前阶段count是否是峰值，当前阶段的count和最小值的差，当前阶段count在当前阶段所有商品的rank，当前阶段count在当前商品即使阶段的tank\r\n\r\n# In[6]:\r\n\r\n\r\n## 当前阶段是否是峰值\r\nclimax = whole_click.sort_values('item_count').reset_index(drop=True).drop_duplicates(['item_id'], keep='last')\r\nclimax['is_climix'] = 1\r\nwhole_click = pd.merge(left = whole_click,\r\n         right = climax[['item_id', 'phrase','is_climix']],\r\n         how = 'left',\r\n         on = ['item_id','phrase'])\r\nwhole_click.loc[pd.isna(whole_click['is_climix']), 'is_climix'] = 0\r\n\r\n\r\n# In[7]:\r\n\r\n\r\n# 当前阶段是否是波谷\r\nvalley = whole_click.sort_values('item_count').reset_index(drop=True).drop_duplicates(['item_id'], keep='first')\r\nvalley['is_lowest_point'] = 1\r\nwhole_click = pd.merge(left = whole_click,\r\n         right = valley[['item_id', 'phrase','is_lowest_point']],\r\n         how = 'left',\r\n         on = ['item_id','phrase'])\r\nwhole_click.loc[pd.isna(whole_click['is_lowest_point']), 'is_lowest_point'] = 0\r\n\r\n\r\n# In[8]:\r\n\r\n\r\n# 当前阶段count与最小值和均值的差\r\nwhole_click['item_count_min'] = whole_click.groupby('item_id')['item_count'].transform('min')\r\nwhole_click['item_count_mean'] = whole_click.groupby('item_id')['item_count'].transform('mean')\r\nwhole_click['item_diff_from_min'] = whole_click['item_count'] - whole_click['item_count_min']\r\nwhole_click['item_diff_from_mean'] = whole_click['item_count'] - whole_click['item_count_mean']\r\n\r\n\r\n# In[9]:\r\n\r\n\r\n# 当前阶段count在当前阶段所有商品的rank\r\nwhole_click['item_count_rankin_phrase'] = whole_click.groupby('phrase')['item_count'].rank(method='dense')\r\n\r\n\r\n# In[10]:\r\n\r\n\r\n# 当前阶段count在商品历史阶段\r\nwhole_click['item_count_rankin_history'] = whole_click.groupby('item_id')['item_count'].rank(method='dense')\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n        right = whole_click[['item_id', 'phrase'\r\n            ,'item_max_time_in_phrase',\r\n                            'is_climix',\r\n                            'is_lowest_point',\r\n                            'item_diff_from_min',\r\n                            'item_diff_from_mean',\r\n                            'item_count_rankin_phrase',\r\n                            'item_count_rankin_history'\r\n                            ]].drop_duplicates(['item_id','phrase']),\r\n        
how = 'left',\r\n        on = ['item_id','phrase'])\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf.loc[pd.isna(df['item_count']), 'item_never_in_phrase'] = 1\r\ndf.loc[~pd.isna(df['item_count']), 'item_never_in_phrase'] = 0\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_countdetail.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/07_count_detail_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    all_click['phrase'] = c\r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n# In[4]:\r\n\r\n\r\nwhole_click['item_count'] = whole_click.groupby(['item_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nwhole_click['item_max_time_in_phrase'] = whole_click.groupby(['item_id','phrase'])['time'].transform('max')\r\n\r\n\r\n# 比如当前阶段count是否是峰值，当前阶段的count和最小值的差，当前阶段count在当前阶段所有商品的rank，当前阶段count在当前商品即使阶段的tank\r\n\r\n# In[6]:\r\n\r\n\r\n## 当前阶段是否是峰值\r\nclimax = whole_click.sort_values('item_count').reset_index(drop=True).drop_duplicates(['item_id'], keep='last')\r\nclimax['is_climix'] = 1\r\nwhole_click = pd.merge(left = whole_click,\r\n         right = climax[['item_id', 'phrase','is_climix']],\r\n         how = 'left',\r\n         on = ['item_id','phrase'])\r\nwhole_click.loc[pd.isna(whole_click['is_climix']), 'is_climix'] = 0\r\n\r\n\r\n# In[7]:\r\n\r\n\r\n# 当前阶段是否是波谷\r\nvalley = whole_click.sort_values('item_count').reset_index(drop=True).drop_duplicates(['item_id'], keep='first')\r\nvalley['is_lowest_point'] = 1\r\nwhole_click = pd.merge(left = whole_click,\r\n         right = valley[['item_id', 'phrase','is_lowest_point']],\r\n         how = 'left',\r\n         on = ['item_id','phrase'])\r\nwhole_click.loc[pd.isna(whole_click['is_lowest_point']), 'is_lowest_point'] = 0\r\n\r\n\r\n# In[8]:\r\n\r\n\r\n# 当前阶段count与最小值和均值的差\r\nwhole_click['item_count_min'] = whole_click.groupby('item_id')['item_count'].transform('min')\r\nwhole_click['item_count_mean'] = whole_click.groupby('item_id')['item_count'].transform('mean')\r\nwhole_click['item_diff_from_min'] = whole_click['item_count'] - whole_click['item_count_min']\r\nwhole_click['item_diff_from_mean'] = whole_click['item_count'] - whole_click['item_count_mean']\r\n\r\n\r\n# In[9]:\r\n\r\n\r\n# 当前阶段count在当前阶段所有商品的rank\r\nwhole_click['item_count_rankin_phrase'] = whole_click.groupby('phrase')['item_count'].rank(method='dense')\r\n\r\n\r\n# In[10]:\r\n\r\n\r\n# 当前阶段count在商品历史阶段\r\nwhole_click['item_count_rankin_history'] = whole_click.groupby('item_id')['item_count'].rank(method='dense')\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n        right = whole_click[['item_id', 'phrase'\r\n            ,'item_max_time_in_phrase',\r\n                            'is_climix',\r\n                            'is_lowest_point',\r\n                            'item_diff_from_min',\r\n                            'item_diff_from_mean',\r\n                            'item_count_rankin_phrase',\r\n                            'item_count_rankin_history'\r\n                            ]].drop_duplicates(['item_id','phrase']),\r\n        
how = 'left',\r\n        on = ['item_id','phrase'])\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf.loc[pd.isna(df['item_count']), 'item_never_in_phrase'] = 1\r\ndf.loc[~pd.isna(df['item_count']), 'item_never_in_phrase'] = 0\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_countdetail.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/07_count_detail_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    all_click['phrase'] = c\r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n# In[4]:\r\n\r\n\r\nwhole_click['item_count'] = whole_click.groupby(['item_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nwhole_click['item_max_time_in_phrase'] = whole_click.groupby(['item_id','phrase'])['time'].transform('max')\r\n\r\n\r\n# 比如当前阶段count是否是峰值，当前阶段的count和最小值的差，当前阶段count在当前阶段所有商品的rank，当前阶段count在当前商品即使阶段的tank\r\n\r\n# In[6]:\r\n\r\n\r\n## 当前阶段是否是峰值\r\nclimax = whole_click.sort_values('item_count').reset_index(drop=True).drop_duplicates(['item_id'], keep='last')\r\nclimax['is_climix'] = 1\r\nwhole_click = pd.merge(left = whole_click,\r\n         right = climax[['item_id', 'phrase','is_climix']],\r\n         how = 'left',\r\n         on = ['item_id','phrase'])\r\nwhole_click.loc[pd.isna(whole_click['is_climix']), 'is_climix'] = 0\r\n\r\n\r\n# In[7]:\r\n\r\n\r\n# 当前阶段是否是波谷\r\nvalley = whole_click.sort_values('item_count').reset_index(drop=True).drop_duplicates(['item_id'], keep='first')\r\nvalley['is_lowest_point'] = 1\r\nwhole_click = pd.merge(left = whole_click,\r\n         right = valley[['item_id', 'phrase','is_lowest_point']],\r\n         how = 'left',\r\n         on = ['item_id','phrase'])\r\nwhole_click.loc[pd.isna(whole_click['is_lowest_point']), 'is_lowest_point'] = 0\r\n\r\n\r\n# In[8]:\r\n\r\n\r\n# 当前阶段count与最小值和均值的差\r\nwhole_click['item_count_min'] = whole_click.groupby('item_id')['item_count'].transform('min')\r\nwhole_click['item_count_mean'] = whole_click.groupby('item_id')['item_count'].transform('mean')\r\nwhole_click['item_diff_from_min'] = whole_click['item_count'] - whole_click['item_count_min']\r\nwhole_click['item_diff_from_mean'] = whole_click['item_count'] - whole_click['item_count_mean']\r\n\r\n\r\n# In[9]:\r\n\r\n\r\n# 当前阶段count在当前阶段所有商品的rank\r\nwhole_click['item_count_rankin_phrase'] = whole_click.groupby('phrase')['item_count'].rank(method='dense')\r\n\r\n\r\n# In[10]:\r\n\r\n\r\n# 当前阶段count在商品历史阶段\r\nwhole_click['item_count_rankin_history'] = whole_click.groupby('item_id')['item_count'].rank(method='dense')\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n        right = whole_click[['item_id', 'phrase'\r\n            ,'item_max_time_in_phrase',\r\n                            'is_climix',\r\n                            'is_lowest_point',\r\n                            'item_diff_from_min',\r\n                            'item_diff_from_mean',\r\n                            'item_count_rankin_phrase',\r\n                            'item_count_rankin_history'\r\n                            ]].drop_duplicates(['item_id','phrase']),\r\n    
    how = 'left',\r\n        on = ['item_id','phrase'])\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf.loc[pd.isna(df['item_count']), 'item_never_in_phrase'] = 1\r\ndf.loc[~pd.isna(df['item_count']), 'item_never_in_phrase'] = 0\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_countdetail.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/08_user_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    all_click['phrase'] = c\r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n# In[4]:\r\n\r\n\r\nwhole_click['item_count'] = whole_click.groupby(['item_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nwhole_click['user_mean_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('mean')\r\n\r\nwhole_click['user_max_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('max')\r\n\r\nwhole_click['user_min_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('min')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nwhole_click['user_count'] = whole_click.groupby(['user_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntemp = whole_click[['user_id','phrase','user_count']].sort_values('user_count').reset_index(drop=True).drop_duplicates(['user_id'], keep='last')\r\n\r\ntemp['is_user_count_climax'] = 1\r\nwhole_click = pd.merge(left=whole_click,\r\n         right = temp[['user_id','phrase','is_user_count_climax']],\r\n         how = 'left',\r\n         on = ['user_id','phrase'])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ntemp = whole_click[['user_id','phrase','user_count']].sort_values('user_count').reset_index(drop=True).drop_duplicates(['user_id'], keep='first')\r\n\r\ntemp['is_user_count_lowerpoint'] = 1\r\nwhole_click = pd.merge(left=whole_click,\r\n         right = temp[['user_id','phrase','is_user_count_lowerpoint']],\r\n         how = 'left',\r\n         on = ['user_id','phrase'])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n        right = whole_click[['user_id', 'phrase'\r\n            ,'user_mean_count','user_max_count','user_min_count','is_user_count_climax','is_user_count_lowerpoint'\r\n                            ]].drop_duplicates(['user_id','phrase']),\r\n        how = 'left',\r\n        on = ['user_id','phrase'])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf.loc[pd.isna(df['is_user_count_climax']), 'is_user_count_climax'] = 0\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.loc[pd.isna(df['is_user_count_lowerpoint']), 'is_user_count_lowerpoint'] = 0\r\n\r\n\r\n# In[13]:\r\n\r\n\r\nfor i in ['user_mean_count','user_max_count','user_min_count']:\r\n    df[i] = df['item_count'].fillna(0) / df[i]\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf['is_user_count_climax'] = df['is_user_count_climax'] * df['is_climix'].fillna(0)\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf['is_user_count_lowerpoint'] = df['is_user_count_lowerpoint'] * df['is_user_count_lowerpoint'].fillna(0)\r\n\r\n\r\n# In[16]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_userfeature.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/08_user_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    all_click['phrase'] = c\r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n# In[4]:\r\n\r\n\r\nwhole_click['item_count'] = whole_click.groupby(['item_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nwhole_click['user_mean_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('mean')\r\n\r\nwhole_click['user_max_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('max')\r\n\r\nwhole_click['user_min_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('min')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nwhole_click['user_count'] = whole_click.groupby(['user_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntemp = whole_click[['user_id','phrase','user_count']].sort_values('user_count').reset_index(drop=True).drop_duplicates(['user_id'], keep='last')\r\n\r\ntemp['is_user_count_climax'] = 1\r\nwhole_click = pd.merge(left=whole_click,\r\n         right = temp[['user_id','phrase','is_user_count_climax']],\r\n         how = 'left',\r\n         on = ['user_id','phrase'])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ntemp = whole_click[['user_id','phrase','user_count']].sort_values('user_count').reset_index(drop=True).drop_duplicates(['user_id'], keep='first')\r\n\r\ntemp['is_user_count_lowerpoint'] = 1\r\nwhole_click = pd.merge(left=whole_click,\r\n         right = temp[['user_id','phrase','is_user_count_lowerpoint']],\r\n         how = 'left',\r\n         on = ['user_id','phrase'])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n        right = whole_click[['user_id', 'phrase'\r\n            ,'user_mean_count','user_max_count','user_min_count','is_user_count_climax','is_user_count_lowerpoint'\r\n                            ]].drop_duplicates(['user_id','phrase']),\r\n        how = 'left',\r\n        on = ['user_id','phrase'])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf.loc[pd.isna(df['is_user_count_climax']), 'is_user_count_climax'] = 0\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.loc[pd.isna(df['is_user_count_lowerpoint']), 'is_user_count_lowerpoint'] = 0\r\n\r\n\r\n# In[13]:\r\n\r\n\r\nfor i in ['user_mean_count','user_max_count','user_min_count']:\r\n    df[i] = df['item_count'].fillna(0) / df[i]\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf['is_user_count_climax'] = df['is_user_count_climax'] * df['is_climix'].fillna(0)\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf['is_user_count_lowerpoint'] = df['is_user_count_lowerpoint'] * df['is_user_count_lowerpoint'].fillna(0)\r\n\r\n\r\n# In[16]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_userfeature.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/08_user_feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nwhole_click = pd.DataFrame() \r\nfor c in range(now_phase + 1):  \r\n    #print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n    all_click = click_train.append(click_test)  \r\n    all_click['phrase'] = c\r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n# In[4]:\r\n\r\n\r\nwhole_click['item_count'] = whole_click.groupby(['item_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nwhole_click['user_mean_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('mean')\r\n\r\nwhole_click['user_max_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('max')\r\n\r\nwhole_click['user_min_count'] = whole_click.groupby(['user_id','phrase'])['item_count'].transform('min')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nwhole_click['user_count'] = whole_click.groupby(['user_id','phrase'])['time'].transform('count')\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntemp = whole_click[['user_id','phrase','user_count']].sort_values('user_count').reset_index(drop=True).drop_duplicates(['user_id'], keep='last')\r\n\r\ntemp['is_user_count_climax'] = 1\r\nwhole_click = pd.merge(left=whole_click,\r\n         right = temp[['user_id','phrase','is_user_count_climax']],\r\n         how = 'left',\r\n         on = ['user_id','phrase'])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ntemp = whole_click[['user_id','phrase','user_count']].sort_values('user_count').reset_index(drop=True).drop_duplicates(['user_id'], keep='first')\r\n\r\ntemp['is_user_count_lowerpoint'] = 1\r\nwhole_click = pd.merge(left=whole_click,\r\n         right = temp[['user_id','phrase','is_user_count_lowerpoint']],\r\n         how = 'left',\r\n         on = ['user_id','phrase'])\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n        right = whole_click[['user_id', 'phrase'\r\n            ,'user_mean_count','user_max_count','user_min_count','is_user_count_climax','is_user_count_lowerpoint'\r\n                            ]].drop_duplicates(['user_id','phrase']),\r\n        how = 'left',\r\n        on = ['user_id','phrase'])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf.loc[pd.isna(df['is_user_count_climax']), 'is_user_count_climax'] = 0\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.loc[pd.isna(df['is_user_count_lowerpoint']), 'is_user_count_lowerpoint'] = 0\r\n\r\n\r\n# In[13]:\r\n\r\n\r\nfor i in ['user_mean_count','user_max_count','user_min_count']:\r\n    df[i] = df['item_count'].fillna(0) / df[i]\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf['is_user_count_climax'] = df['is_user_count_climax'] * df['is_climix'].fillna(0)\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf['is_user_count_lowerpoint'] = df['is_user_count_lowerpoint'] * df['is_user_count_lowerpoint'].fillna(0)\r\n\r\n\r\n# In[16]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_userfeature.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/09_partial_sim_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nsim_partial = pd.read_csv(train_path + 'new_recall/recall_partial.csv')\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nsim_partial = sim_partial[['user_id','item_id','sim','feature_0','feature_1','feature_2','feature_3']]\r\n\r\n\r\n# In[10]:\r\n\r\n\r\nsim_partial.to_csv(train_path + 'new_recall/recall_partial.csv', index=False)\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[14]:\r\n\r\n\r\nsim_partial.columns = ['user_id','item_id','sim_partial','feature_0_partial','feature_1_partial','feature_2_partial','feature_3_partial']\r\n\r\n\r\n# In[15]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n             right = sim_partial,\r\n             how = 'left',\r\n             on = ['user_id','item_id'])\r\n\r\n\r\n# In[19]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_partialsim.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/09_partial_sim_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nsim_partial = pd.read_csv(train_path + 'new_recall/recall_partial.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nsim_partial = sim_partial[['user_id','item_id','sim','feature_0','feature_1','feature_2','feature_3']]\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nsim_partial.to_csv(train_path + 'new_recall/recall_partial.csv', index=False)\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nsim_partial.columns = ['user_id','item_id','sim_partial','feature_0_partial','feature_1_partial','feature_2_partial','feature_3_partial']\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n             right = sim_partial,\r\n             how = 'left',\r\n             on = ['user_id','item_id'])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_partialsim.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/09_partial_sim_feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nsim_partial = pd.read_csv(train_path + 'new_recall/recall_partial.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nsim_partial = sim_partial[['user_id','item_id','sim','feature_0','feature_1','feature_2','feature_3']]\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nsim_partial.to_csv(train_path + 'new_recall/recall_partial.csv', index=False)\r\n\r\n\r\n# In[6]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nsim_partial.columns = ['user_id','item_id','sim_partial','feature_0_partial','feature_1_partial','feature_2_partial','feature_3_partial']\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n             right = sim_partial,\r\n             how = 'left',\r\n             on = ['user_id','item_id'])\r\n\r\n\r\n# In[9]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_partialsim.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/10_emergency_feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport pickle\r\nfrom gensim.models import KeyedVectors\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in tqdm(range(now_phase + 1)):\r\n\r\n    click_train = pd.read_csv(train_path + header + '_train_click-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(train_path + header +0 '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    all_click = click_train.append(click_test)\r\n    \r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.reset_index(drop=True)\r\n\r\n    user_item_ = all_click.groupby('user_id')['item_id'].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_['user_id'], user_item_['item_id']))\r\n    user_item_list.append(user_item_dict)\r\n\r\n\r\n# In[4]:\r\n\r\n\r\n# user_item_list = []\r\n# for phase in range(now_phase + 1):\r\n#     file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n#     user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_img_vec.txt')\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/node2vec_' + header + '.bin',binary=True)\r\ndeepwalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/deepwalk_' + header + '.bin',binary=True)\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\ndeep_similarity = {}\r\nnode_similarity = {}\r\n\r\nemergency_feature = []\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nfor phase in range(0, now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        related_item = current_data[eachrow[0]][-1]\r\n        item = eachrow[1]\r\n\r\n        index = '_'.join(sorted([str(item), str(related_item)]))\r\n\r\n        # calculate txt similarity\r\n        if index in txt_similarity:\r\n            txt_sim = txt_similarity[index]\r\n        else:\r\n            try:\r\n                txt_sim = int(txt_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                txt_sim = np.nan\r\n        txt_similarity[index] = txt_sim\r\n\r\n        # calculate img similarity\r\n        if index in img_similarity:\r\n            img_sim = img_similarity[index]\r\n        else:\r\n            try:\r\n                img_sim = int(img_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                img_sim = np.nan\r\n        img_similarity[index] = img_sim\r\n    
        \r\n        # calculate node similarity\r\n        if index in node_similarity:\r\n            node_sim = node_similarity[index]\r\n        else:\r\n            try:\r\n                node_sim = int(nodewalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                node_sim = np.nan\r\n        node_similarity[index] = node_sim\r\n        \r\n        # calculate deep similarity\r\n        if index in deep_similarity:\r\n            deep_sim = deep_similarity[index]\r\n        else:\r\n            try:\r\n                deep_sim = int(deepwalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                deep_sim = np.nan\r\n        deep_similarity[index] = deep_sim\r\n        \r\n        emergency_feature.append([eachrow[0], eachrow[1], txt_sim, img_sim, node_sim, deep_sim])\r\n        \r\n    gc.collect()\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nemergency_feature = pd.DataFrame(emergency_feature, columns=['user_id','item_id'] + ['emergency_feature_' + str(x) for x in range(4)])\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = emergency_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_emergency.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/10_emergency_feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport pickle\r\nfrom gensim.models import KeyedVectors\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in tqdm(range(now_phase + 1)):\r\n\r\n    click_train = pd.read_csv(train_path + header + '_train_click-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(train_path + header + '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    all_click = click_train.append(click_test)\r\n    \r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.reset_index(drop=True)\r\n\r\n    user_item_ = all_click.groupby('user_id')['item_id'].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_['user_id'], user_item_['item_id']))\r\n    user_item_list.append(user_item_dict)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\n# user_item_list = []\r\n# for phase in range(now_phase + 1):\r\n#     file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n#     user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[6]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_img_vec.txt')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/node2vec_' + header + '.bin',binary=True)\r\ndeepwalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/deepwalk_' + header + '.bin',binary=True)\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\ndeep_similarity = {}\r\nnode_similarity = {}\r\n\r\nemergency_feature = []\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nfor phase in range(0, now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        related_item = current_data[eachrow[0]][-1]\r\n        item = eachrow[1]\r\n\r\n        index = '_'.join(sorted([str(item), str(related_item)]))\r\n\r\n        # calculate txt similarity\r\n        if index in txt_similarity:\r\n            txt_sim = txt_similarity[index]\r\n        else:\r\n            try:\r\n                txt_sim = int(txt_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                txt_sim = np.nan\r\n        txt_similarity[index] = txt_sim\r\n\r\n        # calculate img similarity\r\n        if index in img_similarity:\r\n            img_sim = img_similarity[index]\r\n        else:\r\n            try:\r\n                img_sim = int(img_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                img_sim = np.nan\r\n        img_similarity[index] = img_sim\r\n       
     \r\n        # calculate node similarity\r\n        if index in node_similarity:\r\n            node_sim = node_similarity[index]\r\n        else:\r\n            try:\r\n                node_sim = int(nodewalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                node_sim = np.nan\r\n        node_similarity[index] = node_sim\r\n        \r\n        # calculate deep similarity\r\n        if index in deep_similarity:\r\n            deep_sim = deep_similarity[index]\r\n        else:\r\n            try:\r\n                deep_sim = int(deepwalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                deep_sim = np.nan\r\n        deep_similarity[index] = deep_sim\r\n        \r\n        emergency_feature.append([eachrow[0], eachrow[1], txt_sim, img_sim, node_sim, deep_sim])\r\n        \r\n    gc.collect()\r\n\r\n\r\n# In[10]:\r\n\r\n\r\nemergency_feature = pd.DataFrame(emergency_feature, columns=['user_id','item_id'] + ['emergency_feature_' + str(x) for x in range(4)])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = emergency_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_emergency.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/10_emergency_feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport pickle\r\nfrom gensim.models import KeyedVectors\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in tqdm(range(now_phase + 1)):\r\n\r\n    click_train = pd.read_csv(train_path + header + '_train_click-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(train_path + header + '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    all_click = click_train.append(click_test)\r\n    \r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.reset_index(drop=True)\r\n\r\n    user_item_ = all_click.groupby('user_id')['item_id'].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_['user_id'], user_item_['item_id']))\r\n    user_item_list.append(user_item_dict)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\n# user_item_list = []\r\n# for phase in range(now_phase + 1):\r\n#     file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n#     user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[6]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_img_vec.txt')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/node2vec_' + header + '.bin',binary=True)\r\ndeepwalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/deepwalk_' + header + '.bin',binary=True)\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\ndeep_similarity = {}\r\nnode_similarity = {}\r\n\r\nemergency_feature = []\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nfor phase in range(0, now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        related_item = current_data[eachrow[0]][-1]\r\n        item = eachrow[1]\r\n\r\n        index = '_'.join(sorted([str(item), str(related_item)]))\r\n\r\n        # calculate txt similarity\r\n        if index in txt_similarity:\r\n            txt_sim = txt_similarity[index]\r\n        else:\r\n            try:\r\n                txt_sim = int(txt_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                txt_sim = np.nan\r\n        txt_similarity[index] = txt_sim\r\n\r\n        # calculate img similarity\r\n        if index in img_similarity:\r\n            img_sim = img_similarity[index]\r\n        else:\r\n            try:\r\n                img_sim = int(img_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                img_sim = np.nan\r\n        img_similarity[index] = img_sim\r\n   
         \r\n        # calculate node similarity\r\n        if index in node_similarity:\r\n            node_sim = node_similarity[index]\r\n        else:\r\n            try:\r\n                node_sim = int(nodewalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                node_sim = np.nan\r\n        node_similarity[index] = node_sim\r\n        \r\n        # calculate deep similarity\r\n        if index in deep_similarity:\r\n            deep_sim = deep_similarity[index]\r\n        else:\r\n            try:\r\n                deep_sim = int(deepwalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                deep_sim = np.nan\r\n        deep_similarity[index] = deep_sim\r\n        \r\n        emergency_feature.append([eachrow[0], eachrow[1], txt_sim, img_sim, node_sim, deep_sim])\r\n        \r\n    gc.collect()\r\n\r\n\r\n# In[10]:\r\n\r\n\r\nemergency_feature = pd.DataFrame(emergency_feature, columns=['user_id','item_id'] + ['emergency_feature_' + str(x) for x in range(4)])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = emergency_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_emergency.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/10_紧急feature_model1.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport pickle\r\nfrom gensim.models import KeyedVectors\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/model_1/'\r\ntest_path = './user_data/model_1/'\r\nheader = 'model_1'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in tqdm(range(now_phase + 1)):\r\n\r\n    click_train = pd.read_csv(train_path + header + '_train_click-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(train_path + header +0 '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    all_click = click_train.append(click_test)\r\n    \r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.reset_index(drop=True)\r\n\r\n    user_item_ = all_click.groupby('user_id')['item_id'].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_['user_id'], user_item_['item_id']))\r\n    user_item_list.append(user_item_dict)\r\n\r\n\r\n# In[4]:\r\n\r\n\r\n# user_item_list = []\r\n# for phase in range(now_phase + 1):\r\n#     file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n#     user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_img_vec.txt')\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/node2vec_' + header + '.bin',binary=True)\r\ndeepwalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/deepwalk_' + header + '.bin',binary=True)\r\n\r\n\r\n# In[10]:\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\ndeep_similarity = {}\r\nnode_similarity = {}\r\n\r\nemergency_feature = []\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nfor phase in range(0, now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        related_item = current_data[eachrow[0]][-1]\r\n        item = eachrow[1]\r\n\r\n        index = '_'.join(sorted([str(item), str(related_item)]))\r\n\r\n        # calculate txt similarity\r\n        if index in txt_similarity:\r\n            txt_sim = txt_similarity[index]\r\n        else:\r\n            try:\r\n                txt_sim = int(txt_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                txt_sim = np.nan\r\n        txt_similarity[index] = txt_sim\r\n\r\n        # calculate img similarity\r\n        if index in img_similarity:\r\n            img_sim = img_similarity[index]\r\n        else:\r\n            try:\r\n                img_sim = int(img_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                img_sim = np.nan\r\n        img_similarity[index] = img_sim\r\n    
        \r\n        # calculate node similarity\r\n        if index in node_similarity:\r\n            node_sim = node_similarity[index]\r\n        else:\r\n            try:\r\n                node_sim = int(nodewalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                node_sim = np.nan\r\n        node_similarity[index] = node_sim\r\n        \r\n        # calculate deep similarity\r\n        if index in deep_similarity:\r\n            deep_sim = deep_similarity[index]\r\n        else:\r\n            try:\r\n                deep_sim = int(deepwalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                deep_sim = np.nan\r\n        deep_similarity[index] = deep_sim\r\n        \r\n        emergency_feature.append([eachrow[0], eachrow[1], txt_sim, img_sim, node_sim, deep_sim])\r\n        \r\n    gc.collect()\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nemergency_feature = pd.DataFrame(emergency_feature, columns=['user_id','item_id'] + ['emergency_feature_' + str(x) for x in range(4)])\r\n\r\n\r\n# In[13]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = emergency_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[14]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_emergency.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/10_紧急feature_offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport pickle\r\nfrom gensim.models import KeyedVectors\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/offline/'\r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in tqdm(range(now_phase + 1)):\r\n\r\n    click_train = pd.read_csv(train_path + header + '_train_click-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(train_path + header + '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    all_click = click_train.append(click_test)\r\n    \r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.reset_index(drop=True)\r\n\r\n    user_item_ = all_click.groupby('user_id')['item_id'].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_['user_id'], user_item_['item_id']))\r\n    user_item_list.append(user_item_dict)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\n# user_item_list = []\r\n# for phase in range(now_phase + 1):\r\n#     file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n#     user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[6]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_img_vec.txt')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/node2vec_' + header + '.bin',binary=True)\r\ndeepwalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/deepwalk_' + header + '.bin',binary=True)\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\ndeep_similarity = {}\r\nnode_similarity = {}\r\n\r\nemergency_feature = []\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nfor phase in range(0, now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        related_item = current_data[eachrow[0]][-1]\r\n        item = eachrow[1]\r\n\r\n        index = '_'.join(sorted([str(item), str(related_item)]))\r\n\r\n        # calculate txt similarity\r\n        if index in txt_similarity:\r\n            txt_sim = txt_similarity[index]\r\n        else:\r\n            try:\r\n                txt_sim = int(txt_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                txt_sim = np.nan\r\n        txt_similarity[index] = txt_sim\r\n\r\n        # calculate img similarity\r\n        if index in img_similarity:\r\n            img_sim = img_similarity[index]\r\n        else:\r\n            try:\r\n                img_sim = int(img_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                img_sim = np.nan\r\n        img_similarity[index] = img_sim\r\n       
     \r\n        # calculate node similarity\r\n        if index in node_similarity:\r\n            node_sim = node_similarity[index]\r\n        else:\r\n            try:\r\n                node_sim = int(nodewalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                node_sim = np.nan\r\n        node_similarity[index] = node_sim\r\n        \r\n        # calculate deep similarity\r\n        if index in deep_similarity:\r\n            deep_sim = deep_similarity[index]\r\n        else:\r\n            try:\r\n                deep_sim = int(deepwalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                deep_sim = np.nan\r\n        deep_similarity[index] = deep_sim\r\n        \r\n        emergency_feature.append([eachrow[0], eachrow[1], txt_sim, img_sim, node_sim, deep_sim])\r\n        \r\n    gc.collect()\r\n\r\n\r\n# In[10]:\r\n\r\n\r\nemergency_feature = pd.DataFrame(emergency_feature, columns=['user_id','item_id'] + ['emergency_feature_' + str(x) for x in range(4)])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = emergency_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_emergency.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/4_RankFeature/10_紧急feature_online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport pickle\r\nfrom gensim.models import KeyedVectors\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport gc\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ntrain_path = './user_data/dataset/'\r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\nnow_phase = 9\r\n\r\n\r\n# In[3]:\r\n\r\n\r\nfile_name = 'recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim'\r\ndf = pd.read_csv(train_path + 'new_recall/' + file_name + '.csv')\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nuser_item_list = []\r\nfor phase in tqdm(range(now_phase + 1)):\r\n\r\n    click_train = pd.read_csv(train_path + header + '_train_click-{}.csv'.format(phase), header=None,\r\n                              names=['user_id', 'item_id', 'time'])\r\n    click_test = pd.read_csv(train_path + header + '_test_click-{}.csv'.format(phase), header=None,\r\n                             names=['user_id', 'item_id', 'time'])\r\n    all_click = click_train.append(click_test)\r\n    \r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')\r\n    all_click = all_click.sort_values('time')\r\n    all_click = all_click.reset_index(drop=True)\r\n\r\n    user_item_ = all_click.groupby('user_id')['item_id'].agg(list).reset_index()\r\n    user_item_dict = dict(zip(user_item_['user_id'], user_item_['item_id']))\r\n    user_item_list.append(user_item_dict)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\n# user_item_list = []\r\n# for phase in range(now_phase + 1):\r\n#     file = open(train_path + 'new_similarity/' + 'user2item_new%d.pkl'%phase, 'rb')\r\n#     user_item_list.append(pickle.load(file))\r\n\r\n\r\n# In[6]:\r\n\r\n\r\ntxt_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_txt_vec.txt')\r\nimg_model = KeyedVectors.load_word2vec_format('./user_data/dataset/w2v_img_vec.txt')\r\n\r\n\r\n# In[7]:\r\n\r\n\r\nnodewalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/node2vec_' + header + '.bin',binary=True)\r\ndeepwalk_model = KeyedVectors.load_word2vec_format('./user_data/2_New_Similarity/deepwalk_' + header + '.bin',binary=True)\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ntxt_similarity = {}\r\nimg_similarity = {}\r\ndeep_similarity = {}\r\nnode_similarity = {}\r\n\r\nemergency_feature = []\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nfor phase in range(0, now_phase + 1):\r\n    current_recall = df[df['phrase'] == phase]\r\n    current_data = user_item_list[phase]\r\n    for eachrow in tqdm(current_recall[['user_id','item_id']].values):\r\n        related_item = current_data[eachrow[0]][-1]\r\n        item = eachrow[1]\r\n\r\n        index = '_'.join(sorted([str(item), str(related_item)]))\r\n\r\n        # calculate txt similarity\r\n        if index in txt_similarity:\r\n            txt_sim = txt_similarity[index]\r\n        else:\r\n            try:\r\n                txt_sim = int(txt_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                txt_sim = np.nan\r\n        txt_similarity[index] = txt_sim\r\n\r\n        # calculate img similarity\r\n        if index in img_similarity:\r\n            img_sim = img_similarity[index]\r\n        else:\r\n            try:\r\n                img_sim = int(img_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                img_sim = np.nan\r\n        img_similarity[index] = img_sim\r\n   
         \r\n        # calculate node similarity\r\n        if index in node_similarity:\r\n            node_sim = node_similarity[index]\r\n        else:\r\n            try:\r\n                node_sim = int(nodewalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                node_sim = np.nan\r\n        node_similarity[index] = node_sim\r\n        \r\n        # calculate deep similarity\r\n        if index in deep_similarity:\r\n            deep_sim = deep_similarity[index]\r\n        else:\r\n            try:\r\n                deep_sim = int(deepwalk_model.similarity(str(item), str(related_item)) * 1e4) / 1e4\r\n            except:\r\n                deep_sim = np.nan\r\n        deep_similarity[index] = deep_sim\r\n        \r\n        emergency_feature.append([eachrow[0], eachrow[1], txt_sim, img_sim, node_sim, deep_sim])\r\n        \r\n    gc.collect()\r\n\r\n\r\n# In[10]:\r\n\r\n\r\nemergency_feature = pd.DataFrame(emergency_feature, columns=['user_id','item_id'] + ['emergency_feature_' + str(x) for x in range(4)])\r\n\r\n\r\n# In[11]:\r\n\r\n\r\ndf = pd.merge(left = df,\r\n              right = emergency_feature,\r\n              how = 'left',\r\n              on = ['user_id','item_id'])\r\n\r\n\r\n# In[12]:\r\n\r\n\r\ndf.to_csv(train_path + 'new_recall/' + file_name + '_emergency.csv', index=False)\r\n\r\n"
  },
  {
    "path": "code/5_Modeling/Model_Offline.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[41]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nfrom sys import stdout\r\nimport pickle\r\nfrom evaulation import evaluate\r\n\r\n\r\n# In[42]:\r\n\r\n\r\ndef get_predict(df, pred_col, top_fill, ranknum):  \r\n    top_fill = [int(t) for t in top_fill.split(',')]  \r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]  \r\n    ids = list(df['user_id'].unique())  \r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])  \r\n    fill_df.sort_values('user_id', inplace=True)  \r\n    fill_df['item_id'] = top_fill * len(ids)  \r\n    fill_df[pred_col] = scores * len(ids)  \r\n    df = df.append(fill_df)  \r\n    df.sort_values(pred_col, ascending=False, inplace=True)  \r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')  \r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)  \r\n    df = df[df['rank'] <= ranknum]  \r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',', expand=True).reset_index()  \r\n    return df \r\n\r\n\r\n# In[43]:\r\n\r\n\r\ndef merge_label(train, label):\r\n    tmp = pd.merge(left = train,\r\n            right = label[['user_id','item_id','future_click']],\r\n            how = 'left',\r\n            on = ['user_id','item_id'])\r\n    tmp.loc[~pd.isna(tmp['future_click']), 'future_click'] = 1\r\n    tmp.loc[pd.isna(tmp['future_click']), 'future_click'] = 0\r\n    return tmp\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[44]:\r\n\r\n\r\nmodel1_train = pd.read_csv('./user_data/model_1/new_recall/recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim_emergency.csv')\r\nmodel1_label = pd.read_csv('./user_data/model_1/model_1_debias_track_answer.csv', \r\n                           names = ['phase','user_id','item_id','future_click'])\r\nmodel1_train = merge_label(model1_train, model1_label)\r\n\r\n\r\n# In[45]:\r\n\r\n\r\nmodel1_train.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[46]:\r\n\r\n\r\noffline_train = pd.read_csv('./user_data/offline/new_recall/recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim_emergency.csv')\r\noffline_label = pd.read_csv('./user_data/offline/offline_debias_track_answer.csv', \r\n                            names = ['phase','user_id','item_id','future_click'])\r\noffline_train = merge_label(offline_train, offline_label)\r\n\r\n\r\n# In[47]:\r\n\r\n\r\noffline_train.shape\r\n\r\n\r\n# In[48]:\r\n\r\n\r\noffline_train = offline_train[offline_train['phrase']>6]\r\noffline_train = offline_train.reset_index(drop=True)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[49]:\r\n\r\n\r\ncol_sel = [x for x in offline_train.columns if x not in ['user_item_count_max_time','user_item_count_min_time',\r\n                                                        'time_interval','item_count_4h','phrase','item_count_6h',\r\n                                                        'is_user_count_climax','item_count_2h','is_user_count_lowerpoint',\r\n                                                        'item_count_1h']]\r\n\r\n\r\n# In[50]:\r\n\r\n\r\nlen(col_sel)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ 
]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# # Offline\r\n\r\n# In[51]:\r\n\r\n\r\nnow_phase = 9\r\ntrain_path = './user_data/offline/'  \r\ntest_path = './user_data/offline/'\r\nheader = 'offline'\r\n\r\n\r\nitem_sim_list = []\r\nitem_cnt_list = []\r\nuser_item = []\r\n\r\nwhole_click = pd.DataFrame()  \r\nfor c in range(7,now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n# find the 500 most popular items, used by get_predict to pad each user's list  \r\ntop50_click = whole_click['item_id'].value_counts().index[:500].values  \r\ntop50_click = ','.join([str(i) for i in top50_click])  \r\n\r\n\r\n# In[162]:\r\n\r\n\r\nmodel_train = model1_train\r\n#model_train = pd.concat([model1_train,offline_train])\r\n#model_train = model_train.reset_index(drop=True)\r\n\r\nmodel_train_p = model_train[model_train['future_click']==1]\r\nmodel_train_p = model_train_p.reset_index(drop=True)\r\n\r\nmodel_train_n = model_train[model_train['future_click']==0]\r\nmodel_train_n = model_train_n.reset_index(drop=True)\r\n\r\n\r\n# In[164]:\r\n\r\n\r\nmodel_train_p.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[165]:\r\n\r\n\r\nonline_train = offline_train\r\n\r\n\r\n# In[166]:\r\n\r\n\r\nonline_train.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[167]:\r\n\r\n\r\nimport random\r\n\r\ndef generateDataset(df_n,df_p,random_seed):\r\n    random.seed(random_seed)\r\n    n_index = random.sample(list(range(len(df_n))), len(df_p)*5)\r\n    df_ns = df_n.loc[n_index]\r\n    df = pd.concat([df_ns,df_p])\r\n    df = df.reset_index(drop=True)\r\n    return df\r\n\r\nmodel_train_s_1 = generateDataset(model_train_n,model_train_p,2020)\r\nmodel_train_s_2 = generateDataset(model_train_n,model_train_p,0)\r\nmodel_train_s_3 = generateDataset(model_train_n,model_train_p,2019)\r\nmodel_train_s_4 = generateDataset(model_train_n,model_train_p,1000)\r\nmodel_train_s_5 = generateDataset(model_train_n,model_train_p,3000)\r\nmodel_train_s_6 = generateDataset(model_train_n,model_train_p,2021)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[168]:\r\n\r\n\r\ndef addWeightForDataSet(df,item_degree_median,weight):\r\n    df['sample_weight'] = df['count']/item_degree_median\r\n    df['sample_weight'] = df['sample_weight'].apply(lambda x: 5 if x<1 else 1)\r\n    df.loc[(df['count']<item_degree_median)&(df['future_click']==1),'sample_weight'] = weight\r\n    df.loc[(df['count']<item_degree_median)&(df['future_click']==1)&(df['phrase'].isin([7,8,9])), 'sample_weight'] = weight * 2\r\n    return df\r\n\r\n\r\n# In[169]:\r\n\r\n\r\nmodel_train_s_1 = addWeightForDataSet(model_train_s_1,30,35)\r\nmodel_train_s_2 = addWeightForDataSet(model_train_s_2,30,35)\r\nmodel_train_s_3 = addWeightForDataSet(model_train_s_3,30,35)\r\nmodel_train_s_4 = addWeightForDataSet(model_train_s_4,30,35)\r\nmodel_train_s_5 = addWeightForDataSet(model_train_s_5,30,35)\r\nmodel_train_s_6 = 
addWeightForDataSet(model_train_s_6,30,35)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[170]:\r\n\r\n\r\nfeature_list = [x for x in col_sel if x not in ['user_id','item_id','future_click','sample_weight'] \r\n                and 'result' not in x]\r\n\r\n\r\n# In[171]:\r\n\r\n\r\nlen(feature_list)\r\n\r\n\r\n# In[172]:\r\n\r\n\r\nfeature_list\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[173]:\r\n\r\n\r\nfeature_list_noleak = [x for x in col_sel if x not in ['user_id','item_id','future_click','sample_weight','result',\r\n                                                       'diff_from_next'] and 'result' not in x]\r\n\r\n\r\n# In[174]:\r\n\r\n\r\nlen(feature_list_noleak)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[175]:\r\n\r\n\r\ndef cbt_model(m,df_train,df_test,feat):\r\n    m.fit(df_train[feat],df_train[['future_click']],sample_weight=list(df_train['sample_weight']))\r\n    print(sorted(dict(zip(m.feature_names_,m.feature_importances_)).items(), key=lambda x:x[1], reverse=True))\r\n    result = m.predict_proba(df_test[feat])[:,1]\r\n    return result\r\n\r\n\r\n# In[176]:\r\n\r\n\r\ndf_res = pd.DataFrame()\r\n\r\n\r\n# In[177]:\r\n\r\n\r\nimport catboost as cat\r\nclf_cbt = cat.CatBoostClassifier(iterations=2500,learning_rate=0.01,depth=6,\r\n                                   verbose=True,thread_count=12,colsample_bylevel=0.8\r\n                                   ,l2_leaf_reg=1\r\n                                   ,random_seed=1024)\r\n\r\ndf_res['result_1'] = cbt_model(clf_cbt,model_train_s_1,online_train,feature_list)\r\n\r\ndf_res['result_2'] = cbt_model(clf_cbt,model_train_s_2,online_train,feature_list)\r\n\r\ndf_res['result_3'] = cbt_model(clf_cbt,model_train_s_3,online_train,feature_list)\r\n\r\ndf_res['result_4'] = cbt_model(clf_cbt,model_train_s_4,online_train,feature_list)\r\n\r\ndf_res['result_5'] = cbt_model(clf_cbt,model_train_s_5,online_train,feature_list)\r\n\r\ndf_res['result_6'] = cbt_model(clf_cbt,model_train_s_6,online_train,feature_list)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[178]:\r\n\r\n\r\ndf_res['phrase'] = online_train['phrase']\r\ndf_res['user_id'] = online_train['user_id']\r\ndf_res['item_id'] = online_train['item_id']\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[179]:\r\n\r\n\r\ndf_res['result'] = df_res['result_1'] + df_res['result_2'] + df_res['result_3'] + df_res['result_4'] + df_res['result_5'] + df_res['result_6'] \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[180]:\r\n\r\n\r\nrecom_df = df_res[['user_id','item_id','result']]\r\nresult = get_predict(recom_df, 'result', top50_click, 50) \r\nresult['user_id'] = result['user_id'].astype(int)\r\nresult.to_csv('rank_offline.csv', index=False, header=None)\r\n\r\nprint(evaluate(stdout, 'rank_offline.csv',\r\n             answer_fname='./user_data/offline/offline_debias_track_answer.csv', rank_num=50))\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[181]:\r\n\r\n\r\ndef lgb_model(df_train,df_test,feat,params,num_round):\r\n    train_data = lgb.Dataset(df_train[feat], \r\n                         label=df_train[['future_click']],weight=df_train['sample_weight'])  \r\n    print('lgb training')\r\n    bst = lgb.train(params,\r\n                train_data,\r\n                num_round)    \r\n    print('lgb 
predicting')\r\n    result = bst.predict(df_test[feat])    \r\n    return result\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[191]:\r\n\r\n\r\nimport lightgbm as lgb\r\nimport time\r\n\r\nnum_round = 2500\r\nparams = {\r\n        'learning_rate': 0.01,\r\n        'boosting_type': 'dart',\r\n        'objective': 'binary',\r\n        #'metric': 'auc',\r\n        'max_depth': 6,\r\n        'feature_fraction': 0.8,\r\n        'bagging_fraction': 0.8,\r\n        'bagging_freq': 5,\r\n        'seed': 1,\r\n        'bagging_seed': 10,\r\n        'feature_fraction_seed': 7,\r\n        'min_data_in_leaf': 20,\r\n        'nthread': 12,\r\n        'verbose': 1,\r\n    }\r\n\r\n\r\n# In[192]:\r\n\r\n\r\ndf_res['lgb_dart_1'] = lgb_model(model_train_s_1,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_2'] = lgb_model(model_train_s_2,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_3'] = lgb_model(model_train_s_3,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_4'] = lgb_model(model_train_s_4,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_5'] = lgb_model(model_train_s_5,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_6'] = lgb_model(model_train_s_6,online_train,feature_list,params,num_round)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[193]:\r\n\r\n\r\ndf_res['result_lgb_dart'] = df_res['lgb_dart_1'] + df_res['lgb_dart_2'] + df_res['lgb_dart_3'] + df_res['lgb_dart_4'] + df_res['lgb_dart_5'] + df_res['lgb_dart_6'] \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[194]:\r\n\r\n\r\nrecom_df = df_res[['user_id','item_id','result_lgb_dart']]\r\nresult = get_predict(recom_df, 'result_lgb_dart', top50_click, 50) \r\nresult['user_id'] = result['user_id'].astype(int)\r\nresult.to_csv('rank_offline.csv', index=False, header=None)\r\n\r\nprint(evaluate(stdout, 'rank_offline.csv',\r\n             answer_fname='./user_data/offline/offline_debias_track_answer.csv', rank_num=50))\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[195]:\r\n\r\n\r\ndf_res['result'] = df_res['result']/6\r\ndf_res['result_lgb_dart'] = df_res['result_lgb_dart']/6\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[196]:\r\n\r\n\r\ndf_res['count_na'] = online_train['count'].apply(lambda x: np.nan if x ==0 else x)\r\ndf_res['m'] = df_res['count_na'].apply(lambda x:max(0.61,1/math.log1p(x+1)))\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[197]:\r\n\r\n\r\ndf_res['result_PostProcess'] = df_res['result'] * df_res['m']\r\ndf_res['result_lgb_dart_PostProcess'] = df_res['result_lgb_dart'] * df_res['m']\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[198]:\r\n\r\n\r\ndf_res['ensemble1'] = 10 / ( 6/df_res['result_PostProcess'] + 4/df_res['result_lgb_dart_PostProcess']) \r\ndf_res['ensemble2'] = np.power( df_res['result_PostProcess']**6 * df_res['result_lgb_dart_PostProcess']**4 , 1/10) \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[199]:\r\n\r\n\r\ndf_res['ensemble'] = df_res['ensemble1']  + df_res['ensemble2'] \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[200]:\r\n\r\n\r\n#df_res['PostProcess'] = df_res['ensemble'] * df_res['m']\r\n\r\nrecom_df = df_res[['user_id','item_id','ensemble']]\r\nresult = get_predict(recom_df, 'ensemble', top50_click, 50) \r\nresult['user_id'] = 
result['user_id'].astype(int)\r\nresult.to_csv('rank_offline.csv', index=False, header=None)\r\n\r\nprint(evaluate(stdout, 'rank_offline.csv',\r\n             answer_fname='./user_data/offline/offline_debias_track_answer.csv', rank_num=50))\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "code/5_Modeling/Model_Online.py",
    "content": "#!/usr/bin/env python\r\n# coding: utf-8\r\n\r\n# In[1]:\r\n\r\n\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom tqdm import tqdm\r\nimport os\r\nfrom collections import defaultdict  \r\nimport math  \r\nfrom sys import stdout\r\nimport pickle\r\nfrom evaulation import evaluate\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[2]:\r\n\r\n\r\ndef get_predict(df, pred_col, top_fill, ranknum):  \r\n    top_fill = [int(t) for t in top_fill.split(',')]  \r\n    scores = [-1 * i for i in range(1, len(top_fill) + 1)]  \r\n    ids = list(df['user_id'].unique())  \r\n    fill_df = pd.DataFrame(ids * len(top_fill), columns=['user_id'])  \r\n    fill_df.sort_values('user_id', inplace=True)  \r\n    fill_df['item_id'] = top_fill * len(ids)  \r\n    fill_df[pred_col] = scores * len(ids)  \r\n    df = df.append(fill_df)  \r\n    df.sort_values(pred_col, ascending=False, inplace=True)  \r\n    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')  \r\n    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)  \r\n    df = df[df['rank'] <= ranknum]  \r\n    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',', expand=True).reset_index()  \r\n    return df \r\n\r\n\r\n# In[3]:\r\n\r\n\r\ndef merge_label(train, label):\r\n    tmp = pd.merge(left = train,\r\n            right = label[['user_id','item_id','future_click']],\r\n            how = 'left',\r\n            on = ['user_id','item_id'])\r\n    tmp.loc[~pd.isna(tmp['future_click']), 'future_click'] = 1\r\n    tmp.loc[pd.isna(tmp['future_click']), 'future_click'] = 0\r\n    return tmp\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[4]:\r\n\r\n\r\nmodel1_train = pd.read_csv('./user_data/model_1/new_recall/recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim_emergency.csv')\r\nmodel1_label = pd.read_csv('./user_data/model_1/model_1_debias_track_answer.csv', \r\n                           names = ['phase','user_id','item_id','future_click'])\r\nmodel1_train = merge_label(model1_train, model1_label)\r\n\r\n\r\n# In[5]:\r\n\r\n\r\nmodel1_train.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[6]:\r\n\r\n\r\noffline_train = pd.read_csv('./user_data/offline/new_recall/recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim_emergency.csv')\r\noffline_label = pd.read_csv('./user_data/offline/offline_debias_track_answer.csv', \r\n                            names = ['phase','user_id','item_id','future_click'])\r\noffline_train = merge_label(offline_train, offline_label)\r\n\r\n\r\n# In[7]:\r\n\r\n\r\noffline_train.shape\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[8]:\r\n\r\n\r\ncol_sel = [x for x in offline_train.columns if x not in ['user_item_count_max_time','user_item_count_min_time',\r\n                                                        'time_interval','item_count_4h','phrase','item_count_6h',\r\n                                                        'is_user_count_climax','item_count_2h','is_user_count_lowerpoint',\r\n                                                        'item_count_1h']]\r\n\r\n\r\n# In[9]:\r\n\r\n\r\nlen(col_sel)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# # 
Online\r\n\r\n# In[10]:\r\n\r\n\r\nnow_phase = 9\r\ntrain_path = './user_data/dataset/'  \r\ntest_path = './user_data/dataset/'\r\nheader = 'underexpose'\r\n\r\n\r\nitem_sim_list = []\r\nitem_cnt_list = []\r\nuser_item = []\r\n\r\nwhole_click = pd.DataFrame()  \r\nfor c in range(7,now_phase + 1):  \r\n    print('phase:', c)  \r\n    click_train = pd.read_csv(train_path + header + '_train_click_{}_time.csv'.format(c))  \r\n    click_test = pd.read_csv(test_path + header + '_test_click_{}_time.csv'.format(c))  \r\n    click_query = pd.read_csv(test_path + header + '_test_qtime_{}_time.csv'.format(c)) \r\n\r\n\r\n\r\n    all_click = click_train.append(click_test)  \r\n    whole_click = whole_click.append(all_click)  \r\n\r\n\r\n    whole_click = whole_click.drop_duplicates(subset=['user_id','item_id','time'],keep='last')\r\n    whole_click = whole_click.sort_values('time')\r\n    whole_click = whole_click.reset_index(drop=True)\r\n\r\n# find the 500 most popular items, used by get_predict to pad each user's list  \r\ntop50_click = whole_click['item_id'].value_counts().index[:500].values  \r\ntop50_click = ','.join([str(i) for i in top50_click])  \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[11]:\r\n\r\n\r\nmodel_train = pd.concat([model1_train,offline_train])\r\nmodel_train = model_train.reset_index(drop=True)\r\n\r\nmodel_train_p = model_train[model_train['future_click']==1]\r\nmodel_train_p = model_train_p.reset_index(drop=True)\r\n\r\nmodel_train_n = model_train[model_train['future_click']==0]\r\nmodel_train_n = model_train_n.reset_index(drop=True)\r\n\r\n\r\n# In[12]:\r\n\r\n\r\nmodel_train_p.shape\r\n\r\n\r\n# In[13]:\r\n\r\n\r\nonline_train = pd.read_csv('./user_data/dataset/new_recall/recall_0531_addsim_addAA_RA_additemtime_addcount_addnn_addtxt_interactive_countdetail_userfeature_partialsim_emergency.csv')\r\n\r\n\r\n# In[14]:\r\n\r\n\r\nonline_train.shape\r\n\r\n\r\n# In[15]:\r\n\r\n\r\nonline_train = online_train[online_train['phrase']>6]\r\nonline_train = online_train.reset_index(drop=True)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[16]:\r\n\r\n\r\nimport random\r\n\r\ndef generateDataset(df_n,df_p,random_seed):\r\n    random.seed(random_seed)\r\n    n_index = random.sample(list(range(len(df_n))), len(df_p)*5)\r\n    df_ns = df_n.loc[n_index]\r\n    df = pd.concat([df_ns,df_p])\r\n    df = df.reset_index(drop=True)\r\n    return df\r\n\r\nmodel_train_s_1 = generateDataset(model_train_n,model_train_p,2020)\r\nmodel_train_s_2 = generateDataset(model_train_n,model_train_p,0)\r\nmodel_train_s_3 = generateDataset(model_train_n,model_train_p,2019)\r\nmodel_train_s_4 = generateDataset(model_train_n,model_train_p,1000)\r\nmodel_train_s_5 = generateDataset(model_train_n,model_train_p,3000)\r\nmodel_train_s_6 = generateDataset(model_train_n,model_train_p,2021)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[17]:\r\n\r\n\r\ndef addWeightForDataSet(df,item_degree_median,weight):\r\n    df['sample_weight'] = df['count']/item_degree_median\r\n    df['sample_weight'] = df['sample_weight'].apply(lambda x: 5 if x<1 else 1)\r\n    df.loc[(df['count']<item_degree_median)&(df['future_click']==1),'sample_weight'] = weight\r\n    df.loc[(df['count']<item_degree_median)&(df['future_click']==1)&(df['phrase'].isin([7,8,9])), 'sample_weight'] = weight * 2\r\n    return df\r\n\r\n\r\n# In[18]:\r\n\r\n\r\n# def addWeightForDataSet(df,item_degree_median,weight):\r\n#     df['sample_weight'] = df['count']/item_degree_median\r\n#     df['sample_weight'] = df['sample_weight'].apply(lambda x: 5 if x<1 else 1)\r\n#     
df.loc[(df['count']<item_degree_median)&(df['future_click']==1),'sample_weight'] = weight\r\n#     return df\r\n\r\n\r\n# In[19]:\r\n\r\n\r\nmodel_train_s_1 = addWeightForDataSet(model_train_s_1,30,35)\r\nmodel_train_s_2 = addWeightForDataSet(model_train_s_2,30,35)\r\nmodel_train_s_3 = addWeightForDataSet(model_train_s_3,30,35)\r\nmodel_train_s_4 = addWeightForDataSet(model_train_s_4,30,35)\r\nmodel_train_s_5 = addWeightForDataSet(model_train_s_5,30,35)\r\nmodel_train_s_6 = addWeightForDataSet(model_train_s_6,30,35)\r\n\r\n\r\n# In[20]:\r\n\r\n\r\nfeature_list = [x for x in col_sel if x not in ['user_id','item_id','future_click','sample_weight'] \r\n                and 'result' not in x]\r\n\r\n\r\n# In[21]:\r\n\r\n\r\nlen(feature_list)\r\n\r\n\r\n# In[22]:\r\n\r\n\r\nfeature_list\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[23]:\r\n\r\n\r\nfeature_list_noleak = [x for x in col_sel if x not in ['user_id','item_id','future_click','sample_weight','result',\r\n                                                       'diff_from_next'] and 'result' not in x]\r\n\r\n\r\n# In[24]:\r\n\r\n\r\nlen(feature_list_noleak)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[25]:\r\n\r\n\r\ndef cbt_model(m,df_train,df_test,feat):\r\n    m.fit(df_train[feat],df_train[['future_click']],sample_weight=list(df_train['sample_weight']))\r\n    print(sorted(dict(zip(m.feature_names_,m.feature_importances_)).items(), key=lambda x:x[1], reverse=True))\r\n    result = m.predict_proba(df_test[feat])[:,1]\r\n    return result\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[26]:\r\n\r\n\r\ndf_res = pd.DataFrame()\r\n\r\n\r\n# In[27]:\r\n\r\n\r\nimport catboost as cat\r\nclf_cbt = cat.CatBoostClassifier(iterations=2500,learning_rate=0.01,depth=6,\r\n                                   verbose=True,thread_count=12,colsample_bylevel=0.8\r\n                                   ,l2_leaf_reg=1\r\n                                   ,random_seed=1024)\r\n\r\ndf_res['result_1'] = cbt_model(clf_cbt,model_train_s_1,online_train,feature_list)\r\n\r\ndf_res['result_2'] = cbt_model(clf_cbt,model_train_s_2,online_train,feature_list)\r\n\r\ndf_res['result_3'] = cbt_model(clf_cbt,model_train_s_3,online_train,feature_list)\r\n\r\ndf_res['result_4'] = cbt_model(clf_cbt,model_train_s_4,online_train,feature_list)\r\n\r\ndf_res['result_5'] = cbt_model(clf_cbt,model_train_s_5,online_train,feature_list)\r\n\r\ndf_res['result_6'] = cbt_model(clf_cbt,model_train_s_6,online_train,feature_list)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[28]:\r\n\r\n\r\ndf_res['phrase'] = online_train['phrase']\r\ndf_res['user_id'] = online_train['user_id']\r\ndf_res['item_id'] = online_train['item_id']\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[29]:\r\n\r\n\r\ndf_res['result'] = df_res['result_1'] + df_res['result_2'] + df_res['result_3'] + df_res['result_4'] + df_res['result_5'] + df_res['result_6'] \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[30]:\r\n\r\n\r\ndef lgb_model(df_train,df_test,feat,params,num_round):\r\n    train_data = lgb.Dataset(df_train[feat], \r\n                         label=df_train[['future_click']],weight=df_train['sample_weight'])  \r\n    print('lgb training')\r\n    bst = lgb.train(params,\r\n                
train_data,\r\n                num_round)    \r\n    print('lgb predicting')\r\n    result = bst.predict(df_test[feat])    \r\n    return result\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[31]:\r\n\r\n\r\nimport lightgbm as lgb\r\nimport time\r\n\r\nnum_round = 2500\r\nparams = {\r\n        'learning_rate': 0.01,\r\n        'boosting_type': 'dart',\r\n        'objective': 'binary',\r\n        #'metric': 'auc',\r\n        'max_depth': 6,\r\n        'feature_fraction': 0.8,\r\n        'bagging_fraction': 0.8,\r\n        'bagging_freq': 5,\r\n        'seed': 1,\r\n        'bagging_seed': 10,\r\n        'feature_fraction_seed': 7,\r\n        'min_data_in_leaf': 20,\r\n        'nthread': 8,\r\n        'verbose': 1,\r\n    }\r\n\r\n\r\n# In[32]:\r\n\r\n\r\ndf_res['lgb_dart_1'] = lgb_model(model_train_s_1,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_2'] = lgb_model(model_train_s_2,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_3'] = lgb_model(model_train_s_3,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_4'] = lgb_model(model_train_s_4,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_5'] = lgb_model(model_train_s_5,online_train,feature_list,params,num_round)\r\n\r\ndf_res['lgb_dart_6'] = lgb_model(model_train_s_6,online_train,feature_list,params,num_round)\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[33]:\r\n\r\n\r\ndf_res['result_lgb_dart'] = df_res['lgb_dart_1'] + df_res['lgb_dart_2'] + df_res['lgb_dart_3'] + df_res['lgb_dart_4'] + df_res['lgb_dart_5'] + df_res['lgb_dart_6'] \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[34]:\r\n\r\n\r\ndf_res['result'] = df_res['result']/6\r\ndf_res['result_lgb_dart'] = df_res['result_lgb_dart']/6\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[35]:\r\n\r\n\r\ndf_res['count_na'] = online_train['count'].apply(lambda x: np.nan if x ==0 else x)\r\ndf_res['m'] = df_res['count_na'].apply(lambda x:max(0.61,1/math.log1p(x+1)))\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[36]:\r\n\r\n\r\ndf_res['result_PostProcess'] = df_res['result'] * df_res['m']\r\ndf_res['result_lgb_dart_PostProcess'] = df_res['result_lgb_dart'] * df_res['m']\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[37]:\r\n\r\n\r\ndf_res['ensemble1'] = 10 / ( 6/df_res['result_PostProcess'] + 4/df_res['result_lgb_dart_PostProcess']) \r\ndf_res['ensemble2'] = np.power( df_res['result_PostProcess']**6 * df_res['result_lgb_dart_PostProcess']**4 , 1/10) \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[38]:\r\n\r\n\r\ndf_res['ensemble'] = df_res['ensemble1']  + df_res['ensemble2'] \r\n\r\n\r\n# In[ ]:\r\n\r\n\r\n\r\n\r\n\r\n# In[39]:\r\n\r\n\r\n#df_res['PostProcess'] = df_res['ensemble'] * df_res['m']\r\n\r\nrecom_df = df_res[['user_id','item_id','ensemble']]\r\nresult = get_predict(recom_df, 'ensemble', top50_click, 50) \r\nresult['user_id'] = result['user_id'].astype(int)\r\nresult.to_csv('./prediction_result/prediction_result.csv', index=False, header=None)\r\n\r\n"
  },
  {
    "path": "main.sh",
    "content": "\r\n## Before started, make sure you put\r\n## 'underexpose_item_feat.csv' in ./user_data/dataset\r\n## 'w2v_txt_vec.txt' in ./user_data\r\n## 'w2v_img_vec.txt' in ./user_data\r\n\r\n# Generate dataset\r\npython ./code/1_DataPreprocessing/01_Generate_Offline_Dataset_origin.py\r\npython ./code/1_DataPreprocessing/02_Generate_Model1_Dataset_origin.py\r\npython ./code/1_DataPreprocessing/03_Create_Model1_Answer.py\r\npython ./code/1_DataPreprocessing/04_TransformDateTime-Copy1.py\r\n\r\n# Generate Similarity\r\npython ./code/2_Similarity/deep_node_model.py\r\npython ./code/2_Similarity/01_itemCF_Mundane_model1.py\r\npython ./code/2_Similarity/01_itemCF_Mundane_offline.py\r\npython ./code/2_Similarity/01_itemCF_Mundane_online.py\r\npython ./code/2_Similarity/RA_Wu_model1.py\r\npython ./code/2_Similarity/RA_Wu_offline.py\r\npython ./code/2_Similarity/RA_Wu_online.py\r\n\r\n\r\n# Generate candidates\r\npython ./code/3_Recall/01_Recall-Wu-model1.py\r\npython ./code/3_Recall/01_Recall-Wu-offline.py\r\npython ./code/3_Recall/01_Recall-Wu-online.py\r\n\r\n# NN model\r\npython ./code/3_NN/ItemFeat2.py\r\n# train 1 online\r\npython ./code/3_NN/sas_rec.py --kind 1 --train 1\r\n# train 2 offline\r\npython ./code/3_NN/sas_rec.py --kind 2 --train 1\r\n# train 3 model\r\npython ./code/3_NN/sas_rec.py --kind 3 --train 1\r\n# test 1\r\npython ./code/3_NN/sas_rec.py --kind 1 --test 1\r\n# tets 2\r\npython ./code/3_NN/sas_rec.py --kind 2 --test 1\r\n# test 3\r\npython ./code/3_NN/sas_rec.py --kind 3 --test 1 \r\n\r\n# Generate feature\r\npython ./code/4_RankFeature/01_sim_feature_model1.py\r\npython ./code/4_RankFeature/01_sim_feature_model1_RA_AA.py\r\npython ./code/4_RankFeature/01_sim_feature_offline.py\r\npython ./code/4_RankFeature/01_sim_feature_offline_RA_AA.py\r\npython ./code/4_RankFeature/01_sim_feature_online.py\r\npython ./code/4_RankFeature/01_sim_feature_online_RA_AA.py\r\n\r\npython ./code/4_RankFeature/02_itemtime_feature_model1.py\r\npython ./code/4_RankFeature/02_itemtime_feature_offline.py\r\npython ./code/4_RankFeature/02_itemtime_feature_online.py\r\n\r\npython ./code/4_RankFeature/03_count_feature_model1.py\r\npython ./code/4_RankFeature/03_count_feature_offline.py\r\npython ./code/4_RankFeature/03_count_feature_online.py\r\n\r\npython ./code/4_RankFeature/04_NN_feature_model1.py\r\npython ./code/4_RankFeature/04_NN_feature_offline.py\r\npython ./code/4_RankFeature/04_NN_feature_online.csv.py\r\n\r\npython ./code/4_RankFeature/05_txt_feature_model1.py\r\npython ./code/4_RankFeature/05_txt_feature_offline.py\r\npython ./code/4_RankFeature/05_txt_feature_online.py\r\n\r\npython ./code/4_RankFeature/06_interactive_model1.py\r\npython ./code/4_RankFeature/06_interactive_offline.py\r\npython ./code/4_RankFeature/06_interactive_online.py\r\n\r\npython ./code/4_RankFeature/07_count_detail_model1.py\r\npython ./code/4_RankFeature/07_count_detail_offline.py\r\npython ./code/4_RankFeature/07_count_detail_online.py\r\n\r\npython ./code/4_RankFeature/08_user_feature_model1.py\r\npython ./code/4_RankFeature/08_user_feature_offline.py\r\npython ./code/4_RankFeature/08_user_feature_online.py\r\n\r\npython ./code/4_RankFeature/09_partial_sim_feature_model1.py\r\npython ./code/4_RankFeature/09_partial_sim_feature_offline.py\r\npython ./code/4_RankFeature/09_partial_sim_feature_online.py\r\n\r\npython ./code/4_RankFeature/10_emergency_feature_model1.py\r\npython ./code/4_RankFeature/10_emergency_feature_offline.py\r\npython ./code/4_RankFeature/10_emergency_feature_online.py\r\n\r\n# Build 
# Build model\r\npython ./code/5_Modeling/Model_Online.py\r\n\r\n"
  },
  {
    "path": "project_structure.txt",
    "content": "  feature_list.csv\r\n  main.sh\r\n  project_structure.txt\r\n  \r\ncode\r\n    __init__.py\r\n    \r\n  1_DataPreprocessing\r\n        01_Generate_Offline_Dataset_origin.py\r\n        02_Generate_Model1_Dataset_origin.py\r\n        03_Create_Model1_Answer.py\r\n        03_Create_Offline_Answer.py\r\n        04_TransformDateTime-Copy1.py\r\n        05_Generate_img_txt_vec.py\r\n        ipynb_file.zip\r\n        \r\n  2_Similarity\r\n        01_itemCF_Mundane_model1.py\r\n        01_itemCF_Mundane_offline.py\r\n        01_itemCF_Mundane_online.py\r\n        deep_node_model.py\r\n        ipynb_file.zip\r\n        RA_Wu_model1.py\r\n        RA_Wu_offline.py\r\n        RA_Wu_online.py\r\n        \r\n  3_NN\r\n        config.py\r\n        ItemFeat2.py\r\n        model2.py\r\n        modules.py\r\n        Readme\r\n        sampler2.py\r\n        sas_rec.py\r\n        util.py\r\n        \r\n  3_Recall\r\n        01_Recall-Wu-model1.py\r\n        01_Recall-Wu-offline.py\r\n        01_Recall-Wu-online.py\r\n        ipynb_file.zip\r\n        \r\n  4_RankFeature\r\n        01_sim_feature_model1.py\r\n        01_sim_feature_model1_RA_AA.py\r\n        01_sim_feature_offline.py\r\n        01_sim_feature_offline_RA_AA.py\r\n        01_sim_feature_online.py\r\n        01_sim_feature_online_RA_AA.py\r\n        02_itemtime_feature_model1.py\r\n        02_itemtime_feature_offline.py\r\n        02_itemtime_feature_online.py\r\n        03_count_feature_model1.py\r\n        03_count_feature_offline.py\r\n        03_count_feature_online.py\r\n        04_NN_feature_model1.py\r\n        04_NN_feature_offline.py\r\n        04_NN_feature_online.csv.py\r\n        05_txt_feature_model1.py\r\n        05_txt_feature_offline.py\r\n        05_txt_feature_online.py\r\n        06_interactive_model1.py\r\n        06_interactive_offline.py\r\n        06_interactive_online.py\r\n        07_count_detail_model1.py\r\n        07_count_detail_offline.py\r\n        07_count_detail_online.py\r\n        08_user_feature_model1.py\r\n        08_user_feature_offline.py\r\n        08_user_feature_online.py\r\n        09_partial_sim_feature_model1.py\r\n        09_partial_sim_feature_offline.py\r\n        09_partial_sim_feature_online.py\r\n        10_emergency_feature_model1.py\r\n        10_emergency_feature_offline.py\r\n        10_emergency_feature_online.py\r\n        4_RankFeature.zip\r\n        \r\n  5_Modeling\r\n          ipynb_file.zip\r\n          Model_Offline.py\r\n          Model_Online.py\r\n          \r\ndata\r\n  underexpose_test\r\n  underexpose_train\r\nprediction_result\r\nuser_data\r\n    dataset\r\n      new_recall\r\n      new_similarity\r\n      nn\r\n    model_1\r\n      new_recall\r\n      new_similarity\r\n      nn\r\n    offline\r\n        new_recall\r\n        new_similarity\r\n        nn\r\n"
  },
  {
    "path": "requirements.txt",
    "content": "lightgbm==2.2.1\r\ntensorflow==2.5.1\r\njoblib==0.15.1\r\ngensim==3.4.0\r\npandas==0.25.1\r\nnumpy==1.16.3\r\nnetworkx==2.4\r\ntqdm==4.46.0\r\n"
  }
]