[
  {
    "path": "README.md",
    "content": "# ABSA\n\nAspect Based Sentiment Analysis\n\n虽说是基于观点的分析，但也是基于句子层的分析，因为需要按句子进行分析。\n\n![](https://github.com/YZHANG1270/Aspect-Based-Sentiment-Analysis/blob/master/img/absa.png?raw=true)\n\n\n\n\n##### 概念参考\n\n- ABSA refer presentation [[ppt](https://www.iaria.org/conferences2016/filesHUSO16/OrpheeDeClercq_Keynote_ABSA.pdf)]\n- 阿里云的商品评价解析 [[link](https://help.aliyun.com/document_detail/64231.html?spm=5176.12095382.1232858.4.739e3b24xUnvbZ)]\n\n| 参数名         | 值                                                           |\n| -------------- | ------------------------------------------------------------ |\n| textPolarity   | 整条文本情感极性：正、中、负，text字段输入非法时返回-100     |\n| textIntensity  | 整条文本情感程度(取值范围[-1,1]，越大代表越正向，越小代表越负向，接近0代表中性) |\n| aspectItem     | 属性情感列表，每个元素是一个json字段                         |\n| aspectCategory | 属性类别                                                     |\n| aspectIndex    | 属性词所在的起始位置，终结位置                               |\n| aspectTerm     | 属性词                                                       |\n| opinionTerm    | 情感词                                                       |\n| aspectPolarity | 属性片段极性（正、中、负）                                   |\n\n\n\n##### Task Process\n\n1. 按句 提取 属性词\n2. 按句 提取 情感词\n3. 属性词所在起始位置，终止位置\n4. 属性词 -> EA分类\n5. 情感词 -> 极性分类\n6. 整条文本的感情极性（正、负、中） 及其概率值\n\n\n\n##### Done Tasks\n\n根据现有数据集，实际完成的任务\n\n- [x] 按句进行 EA 分类\n- [x] 按句进行情感极性分析\n\n\n\n##### To do\n\n- [ ] 观点过滤：文字噪音处理、虚假评论、水军、广告、不含观点、无意义文本\n- [ ] negation 否定处理\n\n\n\n##### SemEval ABSA\n\n- NLP的 SemEval 论文合辑 [[ACL](https://www.aclweb.org/anthology/)]\n- SemEval - 2014 - ABSA [[competition](http://alt.qcri.org/semeval2014/task4/)] [[data](http://alt.qcri.org/semeval2014/task4/index.php?id=data-and-tools)] \n- SemEval - 2015 - ABSA [[competition](http://alt.qcri.org/semeval2015/task12/)] [[data](http://alt.qcri.org/semeval2015/task12/index.php?id=data-and-tools)] [[paper](https://www.aclweb.org/anthology/S15-2082)] \n- SemEval - 2016 - ABSA [[competition](http://alt.qcri.org/semeval2016/task5/)] [[data](http://alt.qcri.org/semeval2016/task5/index.php?id=data-and-tools)] [[guideline](http://alt.qcri.org/semeval2016/task5/data/uploads/absa2016_annotationguidelines.pdf)] [[paper](https://www.aclweb.org/anthology/S16-1002)]\n- bonus: CodaLab Competitions [[intro](https://www.hse.ru/data/2017/05/31/1171931089/CodaLabCompetitions.pdf)] \n\n\n\n##### 可参考的GitHub项目\n\n数据集基本都基于 2014-2016 SemEval 比赛\n\n- [data: self data] [Unsupervised-Aspect-Extraction](https://github.com/ruidan/Unsupervised-Aspect-Extraction) \n- [data: SemEval-2016] [aspect-extraction](https://github.com/soujanyaporia/aspect-extraction) \n- [data: SemEval-2015] [AspectBasedSentimentAnalysis](https://github.com/yardstick17/AspectBasedSentimentAnalysis) 跑了下这个项目，其中结合了语法分析和机器学习，按照语法规则抽取的属性词。代码嵌套逻辑比较强，不建议套用。\n- [data: SemEval-2016] [Review_aspect_extraction](https://github.com/yafangy/Review_aspect_extraction) \n- [data: SemEval-2014, 2016] [DE-CNN](https://github.com/howardhsu/DE-CNN) \n- [data: SemEval-2015] [Coupled-Multi-layer-Attentions](https://github.com/happywwy/Coupled-Multi-layer-Attentions) \n- [data: SemEval-2016 laptop] [mem_absa](https://github.com/ganeshjawahar/mem_absa) \n- [data: SemEval-2014] [ABSA-PyTorch](https://github.com/songyouwei/ABSA-PyTorch) \n- [data: SemEval-2014, 2016] [Attention_Based_LSTM_AspectBased_SA](https://github.com/gangeshwark/Attention_Based_LSTM_AspectBased_SA) \n- [data: SemEval-2014] [ABSA_Keras](https://github.com/AlexYangLi/ABSA_Keras) 利用了tensorflow hub，适用hub时出现了版本问题未跑通。\n- [data: 
\n\n##### To do\n\n- [ ] Opinion filtering: text-noise handling, fake reviews, paid posters, ads, opinion-free and meaningless text\n- [ ] Negation handling\n\n\n\n##### SemEval ABSA\n\n- SemEval paper collections in the ACL Anthology [[ACL](https://www.aclweb.org/anthology/)]\n- SemEval - 2014 - ABSA [[competition](http://alt.qcri.org/semeval2014/task4/)] [[data](http://alt.qcri.org/semeval2014/task4/index.php?id=data-and-tools)]\n- SemEval - 2015 - ABSA [[competition](http://alt.qcri.org/semeval2015/task12/)] [[data](http://alt.qcri.org/semeval2015/task12/index.php?id=data-and-tools)] [[paper](https://www.aclweb.org/anthology/S15-2082)]\n- SemEval - 2016 - ABSA [[competition](http://alt.qcri.org/semeval2016/task5/)] [[data](http://alt.qcri.org/semeval2016/task5/index.php?id=data-and-tools)] [[guideline](http://alt.qcri.org/semeval2016/task5/data/uploads/absa2016_annotationguidelines.pdf)] [[paper](https://www.aclweb.org/anthology/S16-1002)]\n- Bonus: CodaLab Competitions [[intro](https://www.hse.ru/data/2017/05/31/1171931089/CodaLabCompetitions.pdf)]\n\n\n\n##### Related GitHub projects\n\nThe datasets are mostly from the SemEval 2014-2016 competitions.\n\n- [data: self data] [Unsupervised-Aspect-Extraction](https://github.com/ruidan/Unsupervised-Aspect-Extraction)\n- [data: SemEval-2016] [aspect-extraction](https://github.com/soujanyaporia/aspect-extraction)\n- [data: SemEval-2015] [AspectBasedSentimentAnalysis](https://github.com/yardstick17/AspectBasedSentimentAnalysis) I ran this project; it combines syntactic analysis with machine learning and extracts aspect terms by grammar rules. The code is deeply nested, so reusing it directly is not recommended.\n- [data: SemEval-2016] [Review_aspect_extraction](https://github.com/yafangy/Review_aspect_extraction)\n- [data: SemEval-2014, 2016] [DE-CNN](https://github.com/howardhsu/DE-CNN)\n- [data: SemEval-2015] [Coupled-Multi-layer-Attentions](https://github.com/happywwy/Coupled-Multi-layer-Attentions)\n- [data: SemEval-2016 laptop] [mem_absa](https://github.com/ganeshjawahar/mem_absa)\n- [data: SemEval-2014] [ABSA-PyTorch](https://github.com/songyouwei/ABSA-PyTorch)\n- [data: SemEval-2014, 2016] [Attention_Based_LSTM_AspectBased_SA](https://github.com/gangeshwark/Attention_Based_LSTM_AspectBased_SA)\n- [data: SemEval-2014] [ABSA_Keras](https://github.com/AlexYangLi/ABSA_Keras) Uses TensorFlow Hub; version problems with the hub kept it from running.\n- [data: SemEval-2016] [ABSA](https://github.com/LingxB/ABSA/tree/master/Data/SemEval)\n\n\n\n##### Papers\n\n- Deep Learning for Aspect-Based Sentiment Analysis [[paper](https://cs224d.stanford.edu/reports/WangBo.pdf)]\n- Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings [[paper](https://www.aclweb.org/anthology/D15-1168)]\n- Encoding Conversation Context for Neural Keyphrase Extraction from Microblog Posts [[paper](https://ai.tencent.com/ailab/media/publications/naacl2018/Encoding_Conversation_Context_for_Neural_Keyphrase_Extraction_from_Microblog_Posts.pdf)]\n- End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF [[paper](https://arxiv.org/pdf/1603.01354.pdf)]\n- [2012] Tag extraction and ranking for user reviews [[paper](http://lipiji.com/docs/li2011opinion.pdf)]\n\n\n\n##### Datasets\n\n###### Chinese\n\n- AI-Challenge [[data](https://drive.google.com/file/d/1OInXRx_OmIJgK3ZdoFZnmqUi0rGfOaQo/view)]\n- SemEval ABSA 2016 [[data](http://alt.qcri.org/semeval2016/task5/index.php?id=data-and-tools)]\n\n\n###### English\n\n- Amazon product data [[data](http://jmcauley.ucsd.edu/data/amazon/)]\n- Web data: Amazon reviews [[data](https://snap.stanford.edu/data/web-Amazon.html)]\n- Amazon Fine Food Reviews [[kaggle](https://www.kaggle.com/snap/amazon-fine-food-reviews)]\n- SemEval ABSA\n\n\n\n#### Directions for improvement\n\n##### Character/word/sentence text embeddings\n\n###### Chinese\n\n- Chinese Word Vectors [[github](https://github.com/Embedding/Chinese-Word-Vectors)]\n- nlp_chinese_corpus [[github](https://github.com/brightmart/nlp_chinese_corpus)]\n- Open question: when vectorizing, should general-purpose and domain-specific corpora be merged, or should the two be vectorized independently?\n\n\n\n\n\nThe outline of an ABSA book, useful for studying how the topic is organized:\n\n#### ABSA Book Outline\n\n1. Introduction\n2. Aspect-Based Sentiment Analysis (ABSA)\n   - 2.1. The three tasks of ABSA\n   - 2.2. Domain and benchmark datasets\n   - 2.3. Previous approaches to ABSA tasks\n   - 2.4. Evaluation measures of ABSA tasks\n3. Deep Learning for ABSA\n   - 3.1. Multiple layers of DNN\n   - 3.2. Initialization of input vectors\n     - 3.2.1. Word embeddings vectors\n     - 3.2.2. Featuring vectors\n     - 3.2.3. Part-Of-Speech (POS) and chunk tags\n     - 3.2.4. Commonsense knowledge\n   - 3.3. Training process of DNNs\n   - 3.4. Convolutional Neural Network Model (CNN)\n     - 3.4.1. Architecture\n     - 3.4.2. Application in consumer review domain\n   - 3.5. Recurrent Neural Network Models (RNN)\n     - 3.5.1. Computation of RNN models\n     - 3.5.2. Bidirectional RNN\n     - 3.5.3. Attention mechanism and memory networks\n     - 3.5.4. Application in the consumer review domain\n     - 3.5.5. Application in targeted sentiment analysis\n   - 3.6. Recursive Neural Network Model (RecNN)\n     - 3.6.1. Architecture\n     - 3.6.2. Application\n   - 3.7. Hybrid models\n4. Comparison of performance on benchmark datasets\n   - 4.1. Opinion target extraction\n   - 4.2. Aspect category detection\n   - 4.3. Sentiment polarity of aspect-based consumer reviews\n   - 4.4. Sentiment polarity of targeted text\n5. Challenges\n   - 5.1. Domain adaptation\n   - 5.2. Multilingual application\n   - 5.3. Technical requirements\n   - 5.4. Linguistic complications\n6. Conclusion\n7. Appendix: List of Abbreviations\n8. References\n"
  },
  {
    "path": "ai_challenge_sentiment/code/sentiment_analysis2018_baseline/README.md",
    "content": "\nAI Challenger Sentiment Analysis Baseline\n=========================================\n功能描述\n---\n本工程主要用于为参赛者提供一个baseline，方便参赛者快速上手比赛，主要功能涵盖完成比赛的全流程，如数据读取、分词、特征提取、模型定义以及封装、\n模型训练、模型验证、模型存储以及模型预测等。baseline仅是一个简单的参考，希望参赛者能够充分发挥自己的想象，构建在该任务上更加强大的模型。\n\n开发环境\n---\n* 主要依赖工具包以及版本，详情见requirements.txt\n\n项目结构\n---\n* src/config.py 项目配置信息模块，主要包括文件读取或存储路径信息\n* src/data_process.py 数据处理模块，主要包括数据的读取以及处理等功能\n* src/model.py 模型定义模块，主要包括模型的定义以及使用封装\n* src/main_train.py 模型训练模块，模型训练流程包括 数据读取、分词、特征提取、模型训练、模型验证、模型存储等步骤\n* src/main_predict.py 模型预测模块，模型预测流程包括 数据和模型的读取、分词、模型预测、预测结果存储等步骤 \n\n\n使用方法\n---\n* 配置 在config.py中配置好文件存储路径\n* 训练 运行nohup python main_train.py -mn your_model_name & 训练模型并保存，同时通过日志可以得到验证集的F1_score指标\n* 预测 运行nohup python main_predict.py -mn your_model_name $ 通过加载上一步的模型，在测试集上做预测\n\n"
  },
  {
    "path": "ai_challenge_sentiment/code/sentiment_analysis2018_baseline/__init__.py",
    "content": "#!/user/bin/env python\n# -*- coding:utf-8 -*-\n\n"
  },
  {
    "path": "ai_challenge_sentiment/code/sentiment_analysis2018_baseline/data_process.py",
    "content": "#!/user/bin/env python\n# -*- coding:utf-8 -*-\n\nimport pandas as pd\nimport jieba\n\n\n# 加载数据\ndef load_data_from_csv(file_name, header=0, encoding=\"utf-8\"):\n\n    data_df = pd.read_csv(file_name, header=header, encoding=encoding)\n\n    return data_df\n\n\n# 分词\ndef seg_words(contents):\n    contents_segs = list()\n    for content in contents:\n        segs = jieba.lcut(content)\n        contents_segs.append(\" \".join(segs))\n\n    return contents_segs\n"
  },
  {
    "path": "ai_challenge_sentiment/code/sentiment_analysis2018_baseline/main_predict.py",
    "content": "#!/user/bin/env python\n# -*- coding:utf-8 -*-\n\nfrom data_process import seg_words, load_data_from_csv\nimport config\nimport logging\nimport argparse\nfrom sklearn.externals import joblib\n\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s [%(levelname)s] <%(processName)s> (%(threadName)s) %(message)s')\nlogger = logging.getLogger(__name__)\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument('-mn', '--model_name', type=str, nargs='?',\n                        help='the name of model')\n\n    args = parser.parse_args()\n    model_name = args.model_name\n    if not model_name:\n        model_name = \"model_dict.pkl\"\n\n    # load data\n    logger.info(\"start load data\")\n    test_data_df = load_data_from_csv(config.test_data_path)\n\n    # load model\n    logger.info(\"start load model\")\n    classifier_dict = joblib.load(config.model_save_path + model_name)\n\n    columns = test_data_df.columns.tolist()\n    # seg words\n    logger.info(\"start seg test data\")\n    content_test = test_data_df.iloc[:, 1]\n    content_test = seg_words(content_test)\n    logger.info(\"complete seg test data\")\n\n    # model predict\n    logger.info(\"start predict test data\")\n    for column in columns[2:]:\n        test_data_df[column] = classifier_dict[column].predict(content_test)\n        logger.info(\"compete %s predict\" % column)\n\n    test_data_df.to_csv(config.test_data_predict_out_path, encoding=\"utf_8_sig\", index=False)\n    logger.info(\"compete predict test data\")\n"
  },
  {
    "path": "ai_challenge_sentiment/code/sentiment_analysis2018_baseline/main_train.py",
    "content": "#!/user/bin/env python\n# -*- coding:utf-8 -*-\n\nfrom data_process import load_data_from_csv, seg_words\nfrom model import TextClassifier\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nimport config\nimport logging\nimport numpy as np\nfrom sklearn.externals import joblib\nimport os\nimport argparse\n\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s [%(levelname)s] <%(processName)s> (%(threadName)s) %(message)s')\nlogger = logging.getLogger(__name__)\n\nif __name__ == '__main__':\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument('-mn', '--model_name', type=str, nargs='?',\n                        help='the name of model')\n\n    args = parser.parse_args()\n    model_name = args.model_name\n    if not model_name:\n        model_name = \"model_dict.pkl\"\n\n    # load train data\n    logger.info(\"start load data\")\n    train_data_df = load_data_from_csv(config.train_data_path)\n    validate_data_df = load_data_from_csv(config.validate_data_path)\n\n    content_train = train_data_df.iloc[:, 1]\n\n    logger.info(\"start seg train data\")\n    content_train = seg_words(content_train)\n    logger.info(\"complete seg train data\")\n\n    columns = train_data_df.columns.values.tolist()\n\n    logger.info(\"start train feature extraction\")\n    vectorizer_tfidf = TfidfVectorizer(analyzer='word', ngram_range=(1, 5), min_df=5, norm='l2')\n    vectorizer_tfidf.fit(content_train)\n    logger.info(\"complete train feature extraction models\")\n    logger.info(\"vocab shape: %s\" % np.shape(vectorizer_tfidf.vocabulary_.keys()))\n\n    # model train\n    logger.info(\"start train model\")\n    classifier_dict = dict()\n    for column in columns[2:]:\n        label_train = train_data_df[column]\n        text_classifier = TextClassifier(vectorizer=vectorizer_tfidf)\n        logger.info(\"start train %s model\" % column)\n        text_classifier.fit(content_train, label_train)\n        logger.info(\"complete train %s model\" % column)\n        classifier_dict[column] = text_classifier\n\n    logger.info(\"complete train model\")\n\n    # validate model\n    content_validate = validate_data_df.iloc[:, 1]\n\n    logger.info(\"start seg validate data\")\n    content_validate = seg_words(content_validate)\n    logger.info(\"complete seg validate data\")\n\n    logger.info(\"start validate model\")\n    f1_score_dict = dict()\n    for column in columns[2:]:\n        label_validate = validate_data_df[column]\n        text_classifier = classifier_dict[column]\n        f1_score = text_classifier.get_f1_score(content_validate, label_validate)\n        f1_score_dict[column] = f1_score\n\n    f1_score = np.mean(list(f1_score_dict.values()))\n    str_score = \"\\n\"\n    for column in columns[2:]:\n        str_score = str_score + column + \":\" + str(f1_score_dict[column]) + \"\\n\"\n\n    logger.info(\"f1_scores: %s\\n\" % str_score)\n    logger.info(\"f1_score: %s\" % f1_score)\n    logger.info(\"complete validate model\")\n\n    # save model\n    logger.info(\"start save model\")\n    model_save_path = config.model_save_path\n    if not os.path.exists(model_save_path):\n        os.makedirs(model_save_path)\n\n    joblib.dump(classifier_dict, model_save_path + model_name)\n    logger.info(\"complete save model\")\n\n\n"
  },
  {
    "path": "ai_challenge_sentiment/code/sentiment_analysis2018_baseline/model.py",
    "content": "#!/user/bin/env python\n# -*- coding:utf-8 -*-\n\nfrom sklearn.svm import SVC\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import f1_score\nimport logging\n\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s [%(levelname)s] <%(processName)s> (%(threadName)s) %(message)s')\nlogger = logging.getLogger(__name__)\n\n\nclass TextClassifier():\n\n    def __init__(self, vectorizer, classifier=MultinomialNB()):\n        classifier = SVC(kernel=\"rbf\")\n        # classifier = SVC(kernel=\"linear\")\n        self.classifier = classifier\n        self.vectorizer = vectorizer\n\n    def features(self, x):\n        return self.vectorizer.transform(x)\n\n    def fit(self, x, y):\n\n        self.classifier.fit(self.features(x), y)\n\n    def predict(self, x):\n\n        return self.classifier.predict(self.features(x))\n\n    def score(self, x, y):\n        return self.classifier.score(self.features(x), y)\n\n    def get_f1_score(self, x, y):\n        return f1_score(y, self.predict(x), average='macro')\n\n\n\n"
  },
  {
    "path": "ai_challenge_sentiment/code/sentiment_analysis2018_baseline/requirements.txt",
    "content": "python==2.7.13\nnumpy==1.13.1\npandas==0.20.3\njieba==0.39\nsklearn==0.19.2\n"
  },
  {
    "path": "ai_challenge_sentiment/model.py",
    "content": "#!/user/bin/env python\n# -*- coding:utf-8 -*-\n\nfrom sklearn.svm import SVC\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import f1_score\nimport logging\n\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s [%(levelname)s] <%(processName)s> (%(threadName)s) %(message)s')\nlogger = logging.getLogger(__name__)\n\n\nclass TextClassifier():\n\n    def __init__(self, vectorizer, classifier=MultinomialNB()):\n        classifier = SVC(kernel=\"rbf\")\n        # classifier = SVC(kernel=\"linear\")\n        self.classifier = classifier\n        self.vectorizer = vectorizer\n\n    def features(self, x):\n        return self.vectorizer.transform(x)\n\n    def fit(self, x, y):\n\n        self.classifier.fit(self.features(x), y)\n\n    def predict(self, x):\n\n        return self.classifier.predict(self.features(x))\n\n    def score(self, x, y):\n        return self.classifier.score(self.features(x), y)\n\n    def get_f1_score(self, x, y):\n        return f1_score(y, self.predict(x), average='macro')\n\n\n\n"
  },
  {
    "path": "ai_challenge_sentiment/train.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"\nSpyder Editor\n\nThis is a temporary script file.\n\"\"\"\n\n\nimport os\nos.chdir(\"C:/Users/LUMI/Desktop/sentiment\")\n\nimport pandas as pd\nimport jieba\nfrom model import TextClassifier\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nimport numpy as np\nfrom sklearn.externals import joblib\n\n\ndef seg_words(contents):\n    contents_segs = list()\n    for content in contents:\n        segs = jieba.lcut(content)\n        contents_segs.append(\" \".join(segs))\n\n    return contents_segs\n\n# load train data\ntrain_data_df = pd.read_csv('data/train/train.csv')\nvalidate_data_df = pd.read_csv('data/validation/validation.csv')\n\ncontent_train = train_data_df.iloc[:, 1]\ncontent_train = seg_words(content_train)\n\ncolumns = train_data_df.columns.values.tolist()\n\nvectorizer_tfidf = TfidfVectorizer(analyzer='word', ngram_range=(1, 5), min_df=5, norm='l2')\nvectorizer_tfidf.fit(content_train)\n\n# model train\nclassifier_dict = dict()\nfor column in columns[2:]:\n    label_train = train_data_df[column]\n    text_classifier = TextClassifier(vectorizer=vectorizer_tfidf)\n    text_classifier.fit(content_train, label_train)\n    classifier_dict[column] = text_classifier\n\n\n# validate model\ncontent_validate = validate_data_df.iloc[:, 1]\n\ncontent_validate = seg_words(content_validate)\n\n\nf1_score_dict = dict()\nfor column in columns[2:]:\n    label_validate = validate_data_df[column]\n    text_classifier = classifier_dict[column]\n    f1_score = text_classifier.get_f1_score(content_validate, label_validate)\n    f1_score_dict[column] = f1_score\n\nf1_score = np.mean(list(f1_score_dict.values()))\nstr_score = \"\\n\"\nfor column in columns[2:]:\n    str_score = str_score + column + \":\" + str(f1_score_dict[column]) + \"\\n\"\n\n# save model\njoblib.dump(classifier_dict, model_save_path + model_name)\n\n"
  },
  {
    "path": "aspect_predict.py",
    "content": "# -*- coding: utf-8 -*-\n\n__author__ = 'ZhangYi'\n\nimport os\nfrom sklearn.externals import joblib\n\nfrom utils.utils import delimiter\nfrom utils.data_process import seg_words,load_aspect_list\n\nclass AspectPredict(object):\n    def __init__(self):\n        path_delimiter = delimiter()\n        path_absa = os.path.abspath('.')\n\n        # config\n        model_name = 'aspect_svc' # todo: add to config\n        path_config = path_absa + path_delimiter + 'config.json'\n\n        # load model\n        path_model = path_absa + path_delimiter + 'model' + path_delimiter + '{}.mdl'.format(model_name)\n        self.model = joblib.load(path_model)\n\n        # load aspect list\n        self.aspect_list = load_aspect_list(path_config)\n\n\n    def predict(self, text):\n        # 1. generate result\n        result = dict()\n        result['text'] = text\n        result['aspectCategory'] = []\n\n        # 2. seg words\n        content_test = seg_words([text])\n\n        # 3. predict\n        all_result = dict()\n        for column in self.aspect_list:\n            all_result[column] = self.model[column].predict(content_test)[0]\n            if all_result[column]>0.5:\n                result['aspectCategory'].append(column)\n        result['all_result'] = all_result\n\n        print('PREDICT RESULT:',result)\n        print('PREDICT ASPECT:', result['aspectCategory'])\n        return result\n\nif __name__==\"__main__\":\n    aspect = AspectPredict()\n    aspect.predict('这块屏幕不错')"
  },
  {
    "path": "config.json",
    "content": "{\"aspect_list\": [\"HARDWARE#USABILITY\", \"BATTERY#USABILITY\", \"HARDWARE#QUALITY\", \"MEMORY#GENERAL\", \"OS#PRICE\", \"MULTIMEDIA_DEVICES#QUALITY\", \"MULTIMEDIA_DEVICES#OPERATION_PERFORMANCE\", \"PORTS#DESIGN_FEATURES\", \"MULTIMEDIA_DEVICES#USABILITY\", \"OS#GENERAL\", \"SUPPORT#MISCELLANEOUS\", \"KEYBOARD#GENERAL\", \"POWER_SUPPLY#OPERATION_PERFORMANCE\", \"PHONE#QUALITY\", \"MEMORY#DESIGN_FEATURES\", \"CPU#USABILITY\", \"OS#CONNECTIVITY\", \"SOFTWARE#MISCELLANEOUS\", \"CPU#OPERATION_PERFORMANCE\", \"KEYBOARD#USABILITY\", \"PORTS#USABILITY\", \"KEYBOARD#QUALITY\", \"HARD_DISC#QUALITY\", \"MULTIMEDIA_DEVICES#CONNECTIVITY\", \"SOFTWARE#OPERATION_PERFORMANCE\", \"MEMORY#USABILITY\", \"PHONE#CONNECTIVITY\", \"DISPLAY#OPERATION_PERFORMANCE\", \"PHONE#DESIGN_FEATURES\", \"KEYBOARD#OPERATION_PERFORMANCE\", \"HARDWARE#OPERATION_PERFORMANCE\", \"POWER_SUPPLY#CONNECTIVITY\", \"PHONE#USABILITY\", \"OS#QUALITY\", \"BATTERY#OPERATION_PERFORMANCE\", \"HARDWARE#CONNECTIVITY\", \"POWER_SUPPLY#QUALITY\", \"HARD_DISC#OPERATION_PERFORMANCE\", \"SUPPORT#QUALITY\", \"PHONE#OPERATION_PERFORMANCE\", \"CPU#GENERAL\", \"SUPPORT#USABILITY\", \"DISPLAY#QUALITY\", \"OS#DESIGN_FEATURES\", \"POWER_SUPPLY#USABILITY\", \"HARDWARE#DESIGN_FEATURES\", \"CPU#QUALITY\", \"PHONE#MISCELLANEOUS\", \"SOFTWARE#QUALITY\", \"OS#OPERATION_PERFORMANCE\", \"WARRANTY#OPERATION_PERFORMANCE\", \"PHONE#GENERAL\", \"PHONE#PRICE\", \"MULTIMEDIA_DEVICES#GENERAL\", \"PORTS#OPERATION_PERFORMANCE\", \"POWER_SUPPLY#GENERAL\", \"KEYBOARD#DESIGN_FEATURES\", \"MEMORY#QUALITY\", \"SOFTWARE#USABILITY\", \"DISPLAY#DESIGN_FEATURES\", \"BATTERY#QUALITY\", \"PORTS#CONNECTIVITY\", \"PORTS#QUALITY\", \"HARDWARE#GENERAL\", \"OS#USABILITY\", \"SOFTWARE#GENERAL\", \"DISPLAY#USABILITY\", \"DISPLAY#GENERAL\", \"MULTIMEDIA_DEVICES#DESIGN_FEATURES\", \"BATTERY#DESIGN_FEATURES\", \"OTHERS\", \"SOFTWARE#CONNECTIVITY\", \"SOFTWARE#DESIGN_FEATURES\"]}"
  },
  {
    "path": "polarity_predict.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__author__ = 'ZhangYi'\n\nimport os\nfrom sklearn.externals import joblib\n\nfrom utils.utils import delimiter\nfrom utils.grammar import chinese_only\nfrom utils.data_process import seg_words, gen_text_vec\n\nclass PolarityClassifier(object):\n    \"\"\"\n    text classification\n    \"\"\"\n    def __init__(self):\n        # config\n        model_name = 'polarity_doc' # doc-based\n\n        # path\n        path_delimiter = delimiter()\n        if 'absa' in os.path.abspath('.').split(path_delimiter):\n            path_absa = os.path.abspath('.')\n        else:\n            # 被调用路径=path_comment\n            path_absa = os.path.abspath('.') + path_delimiter + 'train' \\\n                        + path_delimiter + 'sentiment' + path_delimiter + 'absa'\n\n        # model path\n        path_model_dir = path_absa + path_delimiter + 'model'\n\n        # load tokenizer\n        path_tokenizer = path_model_dir + path_delimiter + '{}.tk'.format(model_name)\n        self.tokenizer = joblib.load(path_tokenizer)\n\n        # load model\n        path_model = path_model_dir + path_delimiter + '{}.mdl'.format(model_name)\n        self.model = joblib.load(path_model)\n        self.model._make_predict_function()\n\n    def predict(self, comment):\n\n        # 1. chinese only\n        cmt = chinese_only([comment])\n\n        # 2. jieba token\n        cmt = seg_words(cmt)[0]\n\n        # 3. gen word vector\n        _cmt = gen_text_vec(self.tokenizer, cmt, maxlen = 200)\n\n        # token observation\n        # split_tokens = []\n        # for token in str(_cmt).split(\" \"):\n        #     if token.isdigit():\n        #         split_tokens.append(token)\n        # print(\"len(split_tokens):{}\".format(len(split_tokens)))\n\n        # 4. predict\n        neg_prob = self.model.predict(_cmt)[0][0]\n        # neg_prob = (neg_prob > 0.5)\n\n        # 5. json result output\n        result = {'items':[{'negative_prob': 0,'sentiment': 0}], 'log_id': '', 'text': ''}\n        result['items'][0]['negative_prob'] = neg_prob\n        result['items'][0]['sentiment'] = int(round(neg_prob))  # 1表示差评；0表示好评\n        result['text'] = comment\n        print(\"SENTIMENT RESULT: \",result)\n        return result\n\n\nif __name__==\"__main__\":\n    t = PolarityClassifier()\n    t.predict('这块电池好看')\n"
  },
  {
    "path": "train/aspect_classifier.py",
    "content": "# -*- coding: utf-8 -*-\n__author__ = 'ZhangYi'\n\nimport os\nimport ast\nimport json\nimport pandas as pd\nimport numpy as np\nfrom sklearn.externals import joblib\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nfrom train.model.model import TextClassifier\nfrom utils.utils import delimiter\nfrom utils.data_process import nan_to_others,category_transpose,seg_words,load_aspect_list\n\nclass AspectClassifier(object):\n    \"\"\"\n    Aspect(=EA) Classifier Train Part\n    \"\"\"\n    def __init__(self):\n        path_delimiter = delimiter()\n        path_absa = os.path.abspath('..')\n\n        # config\n        task_tag = 'aspect_'\n        model_name = task_tag + 'svc'\n\n        # config path\n        self.path_config = path_absa + path_delimiter + 'config.json'\n\n        # model path\n        self.model_path = path_absa + path_delimiter + 'model' + path_delimiter + '{}.mdl'.format(model_name)\n\n        # data path\n        self.path_data = path_absa + path_delimiter +'data'\n        self.path_data_ch = path_absa + path_delimiter +'data' + path_delimiter + 'chinese' + path_delimiter\n        self.path_train_df = self.path_data + path_delimiter + 'aspect' + path_delimiter + '{}_train.xlsx'.format(model_name)\n        self.path_test_df = self.path_data + path_delimiter + 'aspect' + path_delimiter + '{}_test.xlsx'.format(model_name)\n\n    def data_process(self):\n        if os.path.isfile(self.path_train_df) \\\n                and os.path.isfile(self.path_test_df) \\\n                and os.path.isfile(self.path_config):\n\n            train_df = pd.read_excel(self.path_train_df)\n            test_df = pd.read_excel(self.path_test_df)\n            self.category_list = load_aspect_list(self.path_config)\n\n        else:\n            # 1. load data\n            train = pd.read_excel(self.path_data_ch+'Chinese_phones_training.xlsx')\n            test = pd.read_excel(self.path_data_ch+'CH_PHNS_SB1_TEST.xlsx')\n\n            # 2. mark NaN as 'OTHERS'\n            _data = []\n            for data in [train, test]:\n                df = nan_to_others(data)\n                _data.append(df)\n\n            # 3. generate category list\n            self.category_list = list(set(_data[0]['category']))  # len = 73\n\n            # 4. save category list to config\n            cate_dict = {'aspect_list':self.category_list}\n            with open(self.path_config, \"w\") as f:\n                f.write(json.dumps(cate_dict))\n            f.close()\n\n            # 5. generate df by category transpose\n            all_data = []\n            for d in _data:\n                df = category_transpose(d, self.category_list)\n                all_data.append(df)\n\n            # 6. 
\n            # 6. save data\n            train_df, test_df = all_data[0], all_data[1]\n            train_df.to_excel(self.path_train_df, index=False)\n            test_df.to_excel(self.path_test_df, index=False)\n\n        return train_df, test_df\n\n    def train(self, train_df):\n        content_train = seg_words(train_df['text'])\n        vectorizer_tfidf = TfidfVectorizer(analyzer='word', ngram_range=(1, 5), min_df=5, norm='l2')\n        vectorizer_tfidf.fit(content_train)\n\n        # model train: one binary classifier per E#A category\n        classifier_dict = dict()\n        for column in self.category_list:\n            print(column)\n            label_train = train_df[column]\n            text_classifier = TextClassifier(vectorizer=vectorizer_tfidf)\n            text_classifier.fit(content_train, label_train)\n            classifier_dict[column] = text_classifier\n\n        # save model unless a saved model already exists\n        if not os.path.isfile(self.model_path):\n            joblib.dump(classifier_dict, self.model_path)\n\n    def test(self, test_df):\n        classifier = joblib.load(self.model_path)\n        content_test = seg_words(test_df['text'])\n\n        f1_score_dict = dict()\n        for column in self.category_list:\n            label_validate = test_df[column]\n            text_classifier = classifier[column]\n            f1_score = text_classifier.get_f1_score(content_test, label_validate)\n            f1_score_dict[column] = f1_score\n\n        f1_score = np.mean(list(f1_score_dict.values()))\n        print('F1-SCORE-DICT: ', f1_score_dict)\n        print('MEAN OF F1-SCORE-DICT: ', f1_score)\n\n        return f1_score_dict\n\n\nif __name__ == \"__main__\":\n    aspect = AspectClassifier()\n    train_df, test_df = aspect.data_process()\n\n    aspect.train(train_df)\n    aspect.test(test_df)\n"
  },
  {
    "path": "train/model/bilstm.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__author__ = 'ZhangYi'\n\nfrom sklearn.metrics import accuracy_score, f1_score, confusion_matrix\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, Embedding, Dropout,Bidirectional, GlobalMaxPool1D\n\n\nclass BiLSTM():\n    def __init__(self, max_features, embed_size):\n        model = Sequential()\n        model.add(Embedding(max_features, embed_size))\n        model.add(Bidirectional(LSTM(32, return_sequences=True)))\n        model.add(GlobalMaxPool1D())\n        model.add(Dense(20, activation=\"relu\"))\n        model.add(Dropout(0.05))\n        model.add(Dense(1, activation=\"sigmoid\"))\n        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\n        self.classifier = model\n\n    def fit(self, x, y, batch_size, epochs, validation_split):\n        self.classifier.fit(x,y, batch_size=batch_size, epochs=epochs, validation_split=0.2)\n\n    def predict(self, x):\n        return self.classifier.predict(x)\n\n    def evaluate(self, y_true, y_pred):\n        acc = accuracy_score(y_pred, y_true)\n        f1 = f1_score(y_pred, y_true)\n        cfs_matrix = confusion_matrix(y_pred, y_true)\n        print('Accuracy Score:', acc)\n        print('F1-score: {0}'.format(f1))\n        print('Confusion matrix:\\n', cfs_matrix)\n\n        return acc, f1, cfs_matrix\n\n    def _make_predict_function(self):\n        self.classifier._make_predict_function()"
  },
  {
    "path": "train/model/model.py",
    "content": "#!/user/bin/env python\n# -*- coding:utf-8 -*-\n__author__ = 'ZhangYi'\n\nfrom sklearn.svm import SVC\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import f1_score\nimport logging\n\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s [%(levelname)s] <%(processName)s> (%(threadName)s) %(message)s')\nlogger = logging.getLogger(__name__)\n\n\nclass TextClassifier():\n\n    def __init__(self, vectorizer, classifier=MultinomialNB()):\n        classifier = SVC(kernel=\"rbf\")\n        # classifier = SVC(kernel=\"linear\")\n        self.classifier = classifier\n        self.vectorizer = vectorizer\n\n    def features(self, x):\n        return self.vectorizer.transform(x)\n\n    def fit(self, x, y):\n\n        self.classifier.fit(self.features(x), y)\n\n    def predict(self, x):\n\n        return self.classifier.predict(self.features(x))\n\n    def score(self, x, y):\n        return self.classifier.score(self.features(x), y)\n\n    def get_f1_score(self, x, y):\n        return f1_score(y, self.predict(x), average='macro')\n\n\n\n"
  },
  {
    "path": "train/polarity_classifier.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__author__ = 'ZhangYi'\n\nimport os\nimport pandas as pd\nfrom sklearn.externals import joblib\nfrom sklearn.model_selection import train_test_split\nfrom keras.preprocessing.text import Tokenizer\n\n\nfrom train.model.bilstm import BiLSTM\nfrom utils.utils import delimiter\nfrom utils.grammar import chinese_only\nfrom utils.data_process import merge_excel,seg_words,remove_empty_row,gen_text_vec\n\nclass PolarityClassifier(object):\n    \"\"\"\n    train sentiment model and generate model file\n    \"\"\"\n    def __init__(self):\n        path_delimiter = delimiter()\n        path_absa = os.path.abspath('..')\n\n        # config\n        self.maxlen = 200          # doc word length\n        task_tag = 'polarity_'\n        model_name = task_tag + 'docu'\n\n        # model path\n        path_model = path_absa + path_delimiter + 'model'\n        self.model_path = path_model + path_delimiter + '{}.mdl'.format(model_name)\n        self.path_tokenizer = path_model + path_delimiter + '{}.tk'.format(model_name)\n\n        # data path\n        path_data_doc_level = path_delimiter.join(path_absa.split(path_delimiter)[:-2]) + path_delimiter + \"data\" \\\n                              + path_delimiter + 'sentiment' + path_delimiter + 'document_level'\n        self.path_train_data = path_data_doc_level + path_delimiter + 'train_data'\n\n        self.path_data = path_absa + path_delimiter + 'data'\n        self.path_corpus = self.path_data + path_delimiter + 'polarity' + path_delimiter + '{}.xlsx'.format(model_name)\n\n        # generate tokenizer\n        self.data = self.data_process()\n        self.tokenizer = self.gen_tokenizer(self.data['cmt_split'])\n\n    def data_process(self):\n        if os.path.isfile(self.path_corpus):\n            data = pd.read_excel(self.path_corpus)\n\n        else:\n            # 1. merge data\n            data = merge_excel(self.path_train_data)\n\n            # 2. Chinese character only\n            data['cmt_zh'] = chinese_only(data['comment_content'])\n\n            # 3. jieba token for dictionary\n            data['cmt_split'] = seg_words(data['cmt_zh'])\n\n            # 4. remove empty comment\n            data = remove_empty_row(data, 'cmt_split')\n\n            # 5. 
\n            # 5. save data\n            data.to_excel(self.path_corpus)\n        return data\n\n    def gen_tokenizer(self, cut_corpus_list):\n        if os.path.isfile(self.path_tokenizer):\n            tokenizer = joblib.load(self.path_tokenizer)\n        else:\n            tokenizer = Tokenizer()\n            tokenizer.fit_on_texts(cut_corpus_list.astype(str))\n            joblib.dump(tokenizer, self.path_tokenizer)\n        return tokenizer\n\n    def gen_train_test(self, x, y):\n        X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1)\n        return X_train, X_test, y_train, y_test\n\n    def train(self, X_train, y_train):\n        embed_size = 256\n        max_features = 66000  # dictionary size\n        classifier = BiLSTM(max_features, embed_size)\n\n        epochs = 2\n        batch_size = 100\n        X_tr = gen_text_vec(self.tokenizer, X_train, self.maxlen)\n        classifier.fit(X_tr, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.2)\n\n        # save model unless a saved model already exists\n        if os.path.isfile(self.model_path):\n            print('model exists already')\n        else:\n            joblib.dump(classifier, self.model_path)\n\n    def test(self, X_test, y_test):\n        X_te = gen_text_vec(self.tokenizer, X_test, self.maxlen)\n\n        # load model\n        model = joblib.load(self.model_path)\n        pred_prob = model.predict(X_te)\n\n        # round the sigmoid output to a 0/1 label\n        pred = [int(round(i[0])) for i in pred_prob]\n        y_test = [int(i) for i in y_test]\n\n        # evaluate\n        metrics = model.evaluate(y_test, pred)\n        return metrics\n\n    def batch_predict(self, batch_cmt_df):\n\n        # 1. Chinese characters only\n        batch_cmt_df['cmt_zh'] = chinese_only(batch_cmt_df['comment_content'])\n\n        # 2. tokenize\n        batch_cmt_df['cmt_split'] = seg_words(batch_cmt_df['cmt_zh'])\n\n        # no remove-empty step here for now\n\n        # 3. predict\n        self.test(batch_cmt_df['cmt_split'], batch_cmt_df['label'])\n\n        # # save result\n        # result = pd.DataFrame(np.array([self.X_test, self.y_test, pred]).T, columns=['comment_zh', 'GroundTruth', 'bilstm'])\n        # result.to_excel('data/sentiment/result_.xlsx')\n\n\nif __name__ == \"__main__\":\n    pc = PolarityClassifier()\n    data = pc.data\n    X_train, X_test, y_train, y_test = pc.gen_train_test(data['cmt_split'], data['label'])\n    pc.train(X_train, y_train)\n    pc.test(X_test, y_test)\n"
  },
  {
    "path": "utils/__init__.py",
    "content": ""
  },
  {
    "path": "utils/baidu_tagging.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom  aip  import  AipNlp\nimport  pandas  as  pd\nimport  time\n\n\"\"\"  你的  APPID  AK  SK  \"\"\"\nAPP_ID  =  '155934'\nAPI_KEY  =  'PBW2w1dveS7x3YcKSZW0V7'\nSECRET_KEY  =  'AOE75EWZqeI6kM7Kesq8i6FzQruDI'\nclient  =  AipNlp(APP_ID,  API_KEY,  SECRET_KEY)\n\n# 请求文件\nsource_file  =  \"请求文件路径\"\nsource_df  =  pd.read_excel(source_file)\n\n\ncomments  =  []\nneg_probs  =  []\npos_probs  =  []\nconfidences  =  []\nsentiments  =  []\ncomplete_count  =  0\n#  请求错误统计\nerr_count  =  0\nerr_comment  =  []\nstart_time  =  time.time()\n#  循环请求\ni  =  0\nwhile  i  <  len(source_df):\n        comment  =  source_df[\"comment_content\"][i]\n        try:\n                query_result  =  client.sentimentClassify(comment[:1024])\n        except  Exception  as  e:\n                print(\"query_result:{}\".format(query_result))\n                print(\"#######请求过程存在问题#######\")\n                err_count  +=  1\n                err_comment.append(comment)\n                i  +=  1\n                continue\n        try:\n                result  =  query_result['items'][0]\n                neg_prob  =  result['negative_prob']\n                pos_prob  =  result['positive_prob']\n                confidence  =  result['confidence']\n                sentiment  =  result['sentiment']\n        except  KeyError  as  e:\n                print(\"#######请求QPS限制#######\")\n                print(\"i={}\".format(i))\n                continue\n        i  +=  1\n        comments.append(comment)\n        neg_probs.append(neg_prob)\n        pos_probs.append(pos_prob)\n        confidences.append(confidence)\n        sentiments.append(sentiment)\n        complete_count  +=  1\n        print(\"总共：{}条\".format(len(source_df)))\n        print(\"请求完成:  {}条\".format(complete_count))\n        print(\"完成进度：{}%\".format(round(complete_count  /  len(source_df)  *  100,  2)))\n        cost_mins  =  (time.time()  -  start_time)  /  60\n        print(\"累计用时：{}分钟\".format(round(cost_mins,  2)))\n        avg_query_time  =  complete_count  /  cost_mins\n        #  print(\"每条请求平均用时：{}\".format(avg_query_time))\n        left_mins  =  (len(source_df)  -  complete_count  -  err_count)  /  avg_query_time\n        print(\"预计还需：{}分钟\".format(round(left_mins,  2)))\n        print(\"\\n\")\n\nprint(\"所有请求完成！\")\nprint(\"请求总数量：{}\".format(len(source_df)))\nprint(\"请求过程中存在问题的数量：{}\".format(err_count))\n\n\n\n#  保存结果\n\n# 请求成功的结果保存\ndesti_df  =  pd.DataFrame()\ndesti_df[\"comment\"]  =  comments\ndesti_df[\"neg_probs\"]  =  neg_probs\ndesti_df[\"pos_probs\"]  =  pos_probs\ndesti_df[\"confidences\"]  =  confidences\ndesti_df[\"sentiments\"]  =  sentiments\ndesti_file  =  \"请求结果保存路径\"\ndesti_df.to_excel(desti_file, engine='xlsxwriter')\n\n# 请求失败的结果保存\nerr_df  =  pd.DataFrame()\nerr_file  =  \"请求结果报错保存路径\"\nerr_df[\"comment\"] = err_comment\nerr_df.to_excel(err_file, engine='xlsxwriter') # 如果请求接口里有奇怪字符，保存文件时就使用, engine='xlsxwriter'"
  },
  {
    "path": "utils/data_process.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__author__ = 'ZhangYi'\n\nimport ast\nimport jieba\nimport itertools\nimport pandas as pd\nimport numpy as np\nfrom keras.preprocessing.sequence import pad_sequences\n\n# mark NaN as 'OTHERS'\ndef nan_to_others(df):\n    new_cate = []\n    new_polarity = []\n\n    # dataframe必须含有列：['text', 'category', 'polarity']\n    for idx, i in enumerate(df['polarity']):\n        if i in ['negative', 'positive', 'neutral', 'conflict']:\n            new_cate.append(df['category'][idx])\n            new_polarity.append(i)\n        else:\n            new_cate.append('OTHERS')\n            new_polarity.append('OTHERS')\n    _df = pd.DataFrame(np.array([df['text'], new_cate, new_polarity]).T, columns=['text', 'category', 'polarity'])\n    return _df\n\n# tokenize\ndef seg_words(contents):\n    contents_segs = list()\n    for content in contents:\n        segs = jieba.lcut(content)\n        contents_segs.append(\" \".join(segs))\n    return contents_segs\n\n# get text vector\ndef gen_text_vec(tokenizer, cut_corpus_list, maxlen):\n    text_vec = tokenizer.texts_to_sequences(cut_corpus_list)\n    t_vec = pad_sequences(text_vec, maxlen=maxlen)\n    return t_vec\n\n# category transpose\ndef category_transpose(df, category_list):\n    for i in category_list:\n        l_ist = []\n        # dataframe必须含有列：['category']\n        for cate in df['category']:\n            if cate == i:\n                l_ist.append(1)\n            else:\n                l_ist.append(0)\n        df[i] = l_ist\n    return df\n\n# load config: aspect_list\ndef load_aspect_list(path_config):\n    # only one param in config: aspect_list\n    a = 0\n    with open(path_config, \"r\", encoding='utf-8') as f:\n        for i in f:\n            category_list = ast.literal_eval(i)['aspect_list']\n            a = a + 1\n            if a == 1:\n                break\n    f.close()\n    return category_list\n\n# merge excel\ndef merge_excel(path_data_dir):\n    cmt_l = []\n    scr_l = []\n\n    # 被merge的df都必须有['comment_content', 'label']\n    data_source = ['/2019-04-12_lock_comment_jd_spider_baidu_sentiment.xlsx', \\\n                   '/20190329_train_lock_comments_document_level_with_label.xls', \\\n                   '/all_comments_document_level_without_lock_comments.xls', \\\n                   '/bad_comments_in_forum_mi.com_youpin.xls']\n    for i in data_source:\n        path_data = path_data_dir + i\n        _data = pd.read_excel(path_data)\n\n        cmt_l.append(_data['comment_content'])\n        scr_l.append(_data['label'])\n\n    comment = list(itertools.chain.from_iterable(cmt_l))\n    score = list(itertools.chain.from_iterable(scr_l))\n\n    data = pd.DataFrame(np.array([comment, score]).T, columns=['comment_content', 'label'])\n    return data\n\n# remove row by column with empty value\ndef remove_empty_row(df, column_name):\n    row_to_delete = []\n    for idx, i in enumerate(df[column_name]):\n        if not bool(i):\n            row_to_delete.append(idx)\n    df = df.drop(df.index[row_to_delete])\n    return df.reset_index(drop=True)"
  },
  {
    "path": "utils/grammar.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__author__ = 'ZhangYi'\n\nimport re\n\ndef chinese_only(txt_list):\n    cmt_zh = []\n    for cmt in txt_list:\n        line = cmt.strip()\n        p2 = re.compile(u'[^\\u4e00-\\u9fa5]')\n        zh = \" \".join(p2.split(line)).strip()\n        cmt_zh.append(\",\".join(zh.split()))\n    return cmt_zh"
  },
  {
    "path": "utils/utils.py",
    "content": "# -*- coding: utf-8 -*-\n\n__author__ = 'ZhangYi'\n\nimport sys\n\ndef delimiter():\n    path_delimiter = '/'\n    if 'win' in sys.platform:\n        path_delimiter = '\\\\'\n    return path_delimiter"
  }
]