[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbin/\nbuild/\ndevelop-eggs/\ndist/\neggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.cache\nnosetests.xml\ncoverage.xml\n\n# Translations\n*.mo\n\n# Mr Developer\n.mr.developer.cfg\n.project\n.pydevproject\n\n# Rope\n.ropeproject\n\n# Django stuff:\n*.log\n*.pot\n\n# Sphinx documentation\ndocs/_build/\n\n"
  },
  {
    "path": "README.md",
    "content": "text-similarity\n===============\nBy max.zhang@2013-11-06\n\n说明：本项目为python语言实现的文本相似度检测工具\n\n# 环境依赖\n*\tpython\n*\tpython-jieba\n*\tbash\n\n# 目录说明\ndata 文件夹\n\n\t-stopwords.txt （停用词表）\n\ndata/temp 文件夹 （存放中间结果文件和文件夹，文件中每一行均表示一个文档）\n\n\t-*.content\t网页解析后的原始文本（有噪声）\n\n\t-*.ori\t\t经过预处理后的，可用于检测的原始文本（去噪）\n\n\t-*.token\t\t中文分词结果\n\n\t-word.dict\t根据分词结果生成的特征词典\n\n\t-*.feat\t\t特征向量文件\n\n\t-*.fprint\t\tSimhash信息指纹文件\n\nsrc/ 文件夹  \n\n\t源程序\n\n\n# 代码使用说明\n\n## 判断两个文档的重复度（整合）\n\n### 生成特征词典 (preprocess.py)\n\nbrief: 对原始文本进行分词并将结果添加到特征词典中\n\nINPUT: 原始文本 + 停用词表 + 特征词典\n\nOUTPUT: 将分词结果保存到.token中，并更新特征词典文件\n\nusage:\n\n\tsrc/preprocess.py <*.ori> <stopword_path> <word_dict>\n\ne.g.\n\n\tsrc/preprocess.py data/temp/doc1.ori data/stopwords.txt data/word.dict\n\n{Note: 需对待比较的两个文档分别运行一次, i.e. 两个文档的分词结果都应添加到特征词典中}\n\n\n### 判断文档重复度 (isSimilar.py)\n\nbrief: 判断两个文档是否重复\n\nINPUT: 文档1 + 文档2 + 停用词表 + 特征词典 + 模式选择 + 阈值\n\nOUTPUT: 输出两篇文档是否重复及相似度\n\nusage:\n\n\tsrc/isSimilar.py <doc1> <doc2> <stopword_path> <word_dict> <-c/-s> <threshold>\n\n\t-c/-s\t选择采用VSM+CosineDistance或是Simhash+HammingDistance方法进行重复判断\n\ne.g.\n\n\tsrc/isSimilar.py data/temp/doc1.ori data/temp/doc2.ori data/stopwords.txt data/word.dict -c 0.8\n\n\n## 详细处理流程（单步）\n\n### 去噪 (webcontent-filter.sh)\n\nbrief: 原始文本的初步去噪（去特殊符号、英文字母、数字 ...），消除连续空格以及删除空白行\n\nINPUT: 待去噪文本 (.content)\n\nOUTPUT: 去噪后的文本 (.ori)\n\nusage:\n\n\tsrc/webcontent_filter.sh <*.content> <*.ori>\n\t\ne.g.\n\n\tsrc/webcontent-filter.sh data/temp/all.content data/temp/all.ori\n\t\n\n### 预处理\n\n#### 中文分词(tokens.py)\n\nbrief: 采用Jieba分词器对去噪后的原始文本进行中文分词\n\nINPUT: 去噪后的文本 (.ori)\n\nOUTPUT: 中文分词结果 (.token)\n\nusage:\n\n\t./tokens.py  -s/-m <*.ori/inputfolder> <*.token/outputfolder> c/s[mode] <stopword.list>\n\n\t-s[single]/-m[multiple]  对单个文本文件 (*.ori) 或对文本文件目录进行分词\n\n\t\t-s <*.ori> <*.token>\n\n\t\t-m <inputfolder> <outputfolder> {Note: 采用-m模式时，原始文本名最好以.ori结尾}\n\n\tc/s[mode]\tJieba分词器模式选择\n\n\t\tc模式\tjieba.cut(...)\n\n\t\ts模式\tjieba.cut_for_search()\n\ne.g.\n\n\tsrc/tokens.py  -s  data/temp/all.ori data/temp/all.token c data/stopwords.txt \n\n\n#### 生成特征词典 (DictBuilder.py)\n\nbrief: 根据分词结果文件或目录，生成以词频降序排列的特征词典\n\nINPUT: 中文分词结果 (.token)\n\nOUTPUT:生成的特征词典，词典格式如下：ID + 特征词 + 词频\n\nusage:\n\n\tsrc/DictBuilder.py <input_folder/*.token> <output_file>\n\ne.g.\n\n\tsrc/DictBuilder.py data/temp/all.token data/temp/word.dict\n\n\n#### 生成特征向量 (features.py)\n\nbrief: 根据分词结果和特征词典，生成特征向量文件\n\nINPUT: 第一步处理中分词后的文本 + 第二步生成的特征词典\n\nOUTPUT: 以行为单位生成各文档的特征向量：id1:nonzero-tf id2:nonzero-tf ...\n\nusage:\n\n\tsrc/feature.py -s/-m <word_dict_path> <tokens_file/tokens_folder> <feature_file/feature_folder>\n\n\t-s[single]/-m[multiple]  对单个分词文件 (*.token) 或对分词文件目录生成特征向量\n\t\ne.g.\n\n\tsrc/feature.py -s data/temp/word.dict data/temp/all.token data/temp/all.feat\n\n\n#### 生成Simhash指纹 (simhash_imp.py)\n\nbrief: 根据分词结果和特征词典，生成信息指纹文件\n\nINPUT: 特征词典 + 特征向量文件\n\nOUTPUT: 信息指纹文件\n\nusage:\n\n\tsrc/simhash_imp.py <word_dict_path> <*.feat> <*.fprint>\n\ne.g.\n\n\tsrc/simhash_imp.py data/temp/word.dict data/temp/all.feat data/temp/all.fprint\n\n## 单元测试\n\n    cd test\n    python test_token.py\n"
  },
  {
    "path": "data/stopwords.txt",
    "content": ",\r\n?\r\n、\r\n。\r\n《\r\n》\r\n！\r\n，\r\n：\r\n；\r\n？\r\n人民\r\n末##末\r\n啊\r\n阿\r\n哎\r\n哎呀\r\n哎哟\r\n唉\r\n俺\r\n俺们\r\n按\r\n按照\r\n吧\r\n吧哒\r\n把\r\n罢了\r\n被\r\n本\r\n本着\r\n比\r\n比方\r\n比如\r\n鄙人\r\n彼\r\n彼此\r\n边\r\n别\r\n别的\r\n别说\r\n并\r\n并且\r\n不比\r\n不成\r\n不单\r\n不但\r\n不独\r\n不管\r\n不光\r\n不过\r\n不仅\r\n不拘\r\n不论\r\n不怕\r\n不然\r\n不如\r\n不特\r\n不惟\r\n不问\r\n不只\r\n朝\r\n朝着\r\n趁\r\n趁着\r\n乘\r\n冲\r\n除\r\n除此之外\r\n除非\r\n除了\r\n此\r\n此间\r\n此外\r\n从\r\n从而\r\n打\r\n待\r\n但\r\n但是\r\n当\r\n当着\r\n到\r\n得\r\n的\r\n的话\r\n等\r\n等等\r\n地\r\n第\r\n叮咚\r\n对\r\n对于\r\n多\r\n多少\r\n而\r\n而况\r\n而且\r\n而是\r\n而外\r\n而言\r\n而已\r\n尔后\r\n反过来\r\n反过来说\r\n反之\r\n非但\r\n非徒\r\n否则\r\n嘎\r\n嘎登\r\n该\r\n赶\r\n个\r\n各\r\n各个\r\n各位\r\n各种\r\n各自\r\n给\r\n根据\r\n跟\r\n故\r\n故此\r\n固然\r\n关于\r\n管\r\n归\r\n果然\r\n果真\r\n过\r\n哈\r\n哈哈\r\n呵\r\n和\r\n何\r\n何处\r\n何况\r\n何时\r\n嘿\r\n哼\r\n哼唷\r\n呼哧\r\n乎\r\n哗\r\n还是\r\n还有\r\n换句话说\r\n换言之\r\n或\r\n或是\r\n或者\r\n极了\r\n及\r\n及其\r\n及至\r\n即\r\n即便\r\n即或\r\n即令\r\n即若\r\n即使\r\n几\r\n几时\r\n己\r\n既\r\n既然\r\n既是\r\n继而\r\n加之\r\n假如\r\n假若\r\n假使\r\n鉴于\r\n将\r\n较\r\n较之\r\n叫\r\n接着\r\n结果\r\n借\r\n紧接着\r\n进而\r\n尽\r\n尽管\r\n经\r\n经过\r\n就\r\n就是\r\n就是说\r\n据\r\n具体地说\r\n具体说来\r\n开始\r\n开外\r\n靠\r\n咳\r\n可\r\n可见\r\n可是\r\n可以\r\n况且\r\n啦\r\n来\r\n来着\r\n离\r\n例如\r\n哩\r\n连\r\n连同\r\n两者\r\n了\r\n临\r\n另\r\n另外\r\n另一方面\r\n论\r\n嘛\r\n吗\r\n慢说\r\n漫说\r\n冒\r\n么\r\n每\r\n每当\r\n们\r\n莫若\r\n某\r\n某个\r\n某些\r\n拿\r\n哪\r\n哪边\r\n哪儿\r\n哪个\r\n哪里\r\n哪年\r\n哪怕\r\n哪天\r\n哪些\r\n哪样\r\n那\r\n那边\r\n那儿\r\n那个\r\n那会儿\r\n那里\r\n那么\r\n那么些\r\n那么样\r\n那时\r\n那些\r\n那样\r\n乃\r\n乃至\r\n呢\r\n能\r\n你\r\n你们\r\n您\r\n宁\r\n宁可\r\n宁肯\r\n宁愿\r\n哦\r\n呕\r\n啪达\r\n旁人\r\n呸\r\n凭\r\n凭借\r\n其\r\n其次\r\n其二\r\n其他\r\n其它\r\n其一\r\n其余\r\n其中\r\n起\r\n起见\r\n岂但\r\n恰恰相反\r\n前后\r\n前者\r\n且\r\n然而\r\n然后\r\n然则\r\n让\r\n人家\r\n任\r\n任何\r\n任凭\r\n如\r\n如此\r\n如果\r\n如何\r\n如其\r\n如若\r\n如上所述\r\n若\r\n若非\r\n若是\r\n啥\r\n上下\r\n尚且\r\n设若\r\n设使\r\n甚而\r\n甚么\r\n甚至\r\n省得\r\n时候\r\n什么\r\n什么样\r\n使得\r\n是\r\n是的\r\n首先\r\n谁\r\n谁知\r\n顺\r\n顺着\r\n似的\r\n虽\r\n虽然\r\n虽说\r\n虽则\r\n随\r\n随着\r\n所\r\n所以\r\n他\r\n他们\r\n他人\r\n它\r\n它们\r\n她\r\n她们\r\n倘\r\n倘或\r\n倘然\r\n倘若\r\n倘使\r\n腾\r\n替\r\n通过\r\n同\r\n同时\r\n哇\r\n万一\r\n往\r\n望\r\n为\r\n为何\r\n为了\r\n为什么\r\n为着\r\n喂\r\n嗡嗡\r\n我\r\n我们\r\n呜\r\n呜呼\r\n乌乎\r\n无论\r\n无宁\r\n毋宁\r\n嘻\r\n吓\r\n相对而言\r\n像\r\n向\r\n向着\r\n嘘\r\n呀\r\n焉\r\n沿\r\n沿着\r\n要\r\n要不\r\n要不然\r\n要不是\r\n要么\r\n要是\r\n也\r\n也罢\r\n也好\r\n一\r\n一般\r\n一旦\r\n一方面\r\n一来\r\n一切\r\n一样\r\n一则\r\n依\r\n依照\r\n矣\r\n以\r\n以便\r\n以及\r\n以免\r\n以至\r\n以至于\r\n以致\r\n抑或\r\n因\r\n因此\r\n因而\r\n因为\r\n哟\r\n用\r\n由\r\n由此可见\r\n由于\r\n有\r\n有的\r\n有关\r\n有些\r\n又\r\n于\r\n于是\r\n于是乎\r\n与\r\n与此同时\r\n与否\r\n与其\r\n越是\r\n云云\r\n哉\r\n再说\r\n再者\r\n在\r\n在下\r\n咱\r\n咱们\r\n则\r\n怎\r\n怎么\r\n怎么办\r\n怎么样\r\n怎样\r\n咋\r\n照\r\n照着\r\n者\r\n这\r\n这边\r\n这儿\r\n这个\r\n这会儿\r\n这就是说\r\n这里\r\n这么\r\n这么点儿\r\n这么些\r\n这么样\r\n这时\r\n这些\r\n这样\r\n正如\r\n吱\r\n之\r\n之类\r\n之所以\r\n之一\r\n只是\r\n只限\r\n只要\r\n只有\r\n至\r\n至于\r\n诸位\r\n着\r\n着呢\r\n自\r\n自从\r\n自个儿\r\n自各儿\r\n自己\r\n自家\r\n自身\r\n综上所述\r\n总的来看\r\n总的来说\r\n总的说来\r\n总而言之\r\n总之\r\n纵\r\n纵令\r\n纵然\r\n纵使\r\n遵照\r\n作为\r\n兮\r\n呃\r\n呗\r\n咚\r\n咦\r\n喏\r\n啐\r\n喔唷\r\n嗬\r\n嗯\r\n嗳\r\n~\r\n!\r\n.\r\n:\r\n(\r\n)\r\n*\r\nA\r\n白\r\n社会主义\r\n--\r\n..\r\n>>\r\n [\r\n ]\r\n\r\n<\r\n>\r\n/\r\n\\\r\n|\r\n-\r\n_\r\n+\r\n=\r\n&\r\n^\r\n%\r\n#\r\n@\r\n`\r\n;\r\n$\r\n（\r\n）\r\n——\r\n—\r\n￥\r\n·\r\n...\r\n〉\r\n〈\r\n…\r\n　
\r\n0\r\n1\r\n2\r\n3\r\n4\r\n5\r\n6\r\n7\r\n8\r\n9\r\n二\r\n三\r\n四\r\n五\r\n六\r\n七\r\n八\r\n九\r\n零\r\n＞\r\n＜\r\n＠\r\n＃\r\n＄\r\n％\r\n︿\r\n＆\r\n＊\r\n＋\r\n～\r\n｜\r\n［\r\n］\r\n｛\r\n｝\r\n啊哈\r\n啊呀\r\n啊哟\r\n挨次\r\n挨个\r\n挨家挨户\r\n挨门挨户\r\n挨门逐户\r\n挨着\r\n按理\r\n按期\r\n按时\r\n按说\r\n暗地里\r\n暗中\r\n暗自\r\n昂然\r\n八成\r\n白白\r\n半\r\n梆\r\n保管\r\n保险\r\n饱\r\n背地里\r\n背靠背\r\n倍感\r\n倍加\r\n本人\r\n本身\r\n甭\r\n比起\r\n比如说\r\n比照\r\n毕竟\r\n必\r\n必定\r\n必将\r\n必须\r\n便\r\n别人\r\n并非\r\n并肩\r\n并没\r\n并没有\r\n并排\r\n并无\r\n勃然\r\n不\r\n不必\r\n不常\r\n不大\r\n不但...而且\r\n不得\r\n不得不\r\n不得了\r\n不得已\r\n不迭\r\n不定\r\n不对\r\n不妨\r\n不管怎样\r\n不会\r\n不仅...而且\r\n不仅仅\r\n不仅仅是\r\n不经意\r\n不可开交\r\n不可抗拒\r\n不力\r\n不了\r\n不料\r\n不满\r\n不免\r\n不能不\r\n不起\r\n不巧\r\n不然的话\r\n不日\r\n不少\r\n不胜\r\n不时\r\n不是\r\n不同\r\n不能\r\n不要\r\n不外\r\n不外乎\r\n不下\r\n不限\r\n不消\r\n不已\r\n不亦乐乎\r\n不由得\r\n不再\r\n不择手段\r\n不怎么\r\n不曾\r\n不知不觉\r\n不止\r\n不止一次\r\n不至于\r\n才\r\n才能\r\n策略地\r\n差不多\r\n差一点\r\n常\r\n常常\r\n常言道\r\n常言说\r\n常言说得好\r\n长此下去\r\n长话短说\r\n长期以来\r\n长线\r\n敞开儿\r\n彻夜\r\n陈年\r\n趁便\r\n趁机\r\n趁热\r\n趁势\r\n趁早\r\n成年\r\n成年累月\r\n成心\r\n乘机\r\n乘胜\r\n乘势\r\n乘隙\r\n乘虚\r\n诚然\r\n迟早\r\n充分\r\n充其极\r\n充其量\r\n抽冷子\r\n臭\r\n初\r\n出\r\n出来\r\n出去\r\n除此\r\n除此而外\r\n除此以外\r\n除开\r\n除去\r\n除却\r\n除外\r\n处处\r\n川流不息\r\n传\r\n传说\r\n传闻\r\n串行\r\n纯\r\n纯粹\r\n此后\r\n此中\r\n次第\r\n匆匆\r\n从不\r\n从此\r\n从此以后\r\n从古到今\r\n从古至今\r\n从今以后\r\n从宽\r\n从来\r\n从轻\r\n从速\r\n从头\r\n从未\r\n从无到有\r\n从小\r\n从新\r\n从严\r\n从优\r\n从早到晚\r\n从中\r\n从重\r\n凑巧\r\n粗\r\n存心\r\n达旦\r\n打从\r\n打开天窗说亮话\r\n大\r\n大不了\r\n大大\r\n大抵\r\n大都\r\n大多\r\n大凡\r\n大概\r\n大家\r\n大举\r\n大略\r\n大面儿上\r\n大事\r\n大体\r\n大体上\r\n大约\r\n大张旗鼓\r\n大致\r\n呆呆地\r\n带\r\n殆\r\n待到\r\n单\r\n单纯\r\n单单\r\n但愿\r\n弹指之间\r\n当场\r\n当儿\r\n当即\r\n当口儿\r\n当然\r\n当庭\r\n当头\r\n当下\r\n当真\r\n当中\r\n倒不如\r\n倒不如说\r\n倒是\r\n到处\r\n到底\r\n到了儿\r\n到目前为止\r\n到头\r\n到头来\r\n得起\r\n得天独厚\r\n的确\r\n等到\r\n叮当\r\n顶多\r\n定\r\n动不动\r\n动辄\r\n陡然\r\n都\r\n独\r\n独自\r\n断然\r\n顿时\r\n多次\r\n多多\r\n多多少少\r\n多多益善\r\n多亏\r\n多年来\r\n多年前\r\n而后\r\n而论\r\n而又\r\n尔等\r\n二话不说\r\n二话没说\r\n反倒\r\n反倒是\r\n反而\r\n反手\r\n反之亦然\r\n反之则\r\n方\r\n方才\r\n方能\r\n放量\r\n非常\r\n非得\r\n分期\r\n分期分批\r\n分头\r\n奋勇\r\n愤然\r\n风雨无阻\r\n逢\r\n弗\r\n甫\r\n嘎嘎\r\n该当\r\n概\r\n赶快\r\n赶早不赶晚\r\n敢\r\n敢情\r\n敢于\r\n刚\r\n刚才\r\n刚好\r\n刚巧\r\n高低\r\n格外\r\n隔日\r\n隔夜\r\n个人\r\n各式\r\n更\r\n更加\r\n更进一步\r\n更为\r\n公然\r\n共\r\n共总\r\n够瞧的\r\n姑且\r\n古来\r\n故而\r\n故意\r\n固\r\n怪\r\n怪不得\r\n惯常\r\n光\r\n光是\r\n归根到底\r\n归根结底\r\n过于\r\n毫不\r\n毫无\r\n毫无保留地\r\n毫无例外\r\n好在\r\n何必\r\n何尝\r\n何妨\r\n何苦\r\n何乐而不为\r\n何须\r\n何止\r\n很\r\n很多\r\n很少\r\n轰然\r\n后来\r\n呼啦\r\n忽地\r\n忽然\r\n互\r\n互相\r\n哗啦\r\n话说\r\n还\r\n恍然\r\n会\r\n豁然\r\n活\r\n伙同\r\n或多或少\r\n或许\r\n基本\r\n基本上\r\n基于\r\n极\r\n极大\r\n极度\r\n极端\r\n极力\r\n极其\r\n极为\r\n急匆匆\r\n即将\r\n即刻\r\n即是说\r\n几度\r\n几番\r\n几乎\r\n几经\r\n既...又\r\n继之\r\n加上\r\n加以\r\n间或\r\n简而言之\r\n简言之\r\n简直\r\n见\r\n将才\r\n将近\r\n将要\r\n交口\r\n较比\r\n较为\r\n接连不断\r\n接下来\r\n皆可\r\n截然\r\n截至\r\n藉以\r\n借此\r\n借以\r\n届时\r\n仅\r\n仅仅\r\n谨\r\n进来\r\n进去\r\n近\r\n近几年来\r\n近来\r\n近年来\r\n尽管如此\r\n尽可能\r\n尽快\r\n尽量\r\n尽然\r\n尽如人意\r\n尽心竭力\r\n尽心尽力\r\n尽早\r\n精光\r\n经常\r\n竟\r\n竟然\r\n究竟\r\n就此\r\n就地\r\n就算\r\n居然\r\n局外\r\n举凡\r\n据称\r\n据此\r\n据实\r\n据说\r\n据我所知\r\n据悉\r\n具体来说\r\n决不\r\n决非\r\n绝\r\n绝不\r\n绝顶\r\n绝对\r\n绝非\r\n均\r\n喀\r\n看\r\n看来\r\n看起来\r\n看上去\r\n看样子\r\n可好\r\n可能\r\n恐怕\r\n快\r\n快要\r\n来不及\r\n来得及\r\n来讲\r\n来看\r\n拦腰\r\n牢牢\r\n老\r\n老大\r\n老老实实\r\n老是\r\n累次\r\n累年\r\n理当\r\n理该\r\n理应\r\n历\r\n立\r\n立地\r\n立刻\r\n立马\r\n立时\r\n联袂\r\n连连\r\n连日\r\n连日来\r\n连声\r\n连袂\r\n临到\r\n另方面\r\n另行\r\n另一个\r\n路经\r\n屡\r\n屡次\r\n屡次三番\r\n屡屡\r\n缕缕\r\n率尔\r\n率然\r\n略\r\n略加\r\n略微\r\n略为\r\n论说\r\n马上\r\n蛮\r\n满\r\n没\r\n没有\r\n每逢\r\n每每\r\n每时每刻\r\n猛然\r\n猛然间\r\n莫\r\n莫不\r\n莫非\r\n莫如\r\n默默地\r\n默然\r\n呐\r\n那末\r\n奈\r\n难道\r\n难得\r\n难怪\r\n难说\r\n内\r\n年复一年\r\n凝神\r\n偶而\r\n偶尔\r\n怕\r\n砰\r\n碰巧\r\n譬如\r\n偏偏\r\n乒\r\n平素\r\n颇\r\n迫于\r\n扑通\r\n其后\r\n其实\r\n奇\r\n齐\r\n起初\r\n起来\r\n起首\r\n起头\r\n
起先\r\n岂\r\n岂非\r\n岂止\r\n迄\r\n恰逢\r\n恰好\r\n恰恰\r\n恰巧\r\n恰如\r\n恰似\r\n千\r\n千万\r\n千万千万\r\n切\r\n切不可\r\n切莫\r\n切切\r\n切勿\r\n窃\r\n亲口\r\n亲身\r\n亲手\r\n亲眼\r\n亲自\r\n顷\r\n顷刻\r\n顷刻间\r\n顷刻之间\r\n请勿\r\n穷年累月\r\n取道\r\n去\r\n权时\r\n全都\r\n全力\r\n全年\r\n全然\r\n全身心\r\n然\r\n人人\r\n仍\r\n仍旧\r\n仍然\r\n日复一日\r\n日见\r\n日渐\r\n日益\r\n日臻\r\n如常\r\n如此等等\r\n如次\r\n如今\r\n如期\r\n如前所述\r\n如上\r\n如下\r\n汝\r\n三番两次\r\n三番五次\r\n三天两头\r\n瑟瑟\r\n沙沙\r\n上\r\n上来\r\n上去\r\nw\r\ne\r\nr\r\nt\r\ny\r\nu\r\ni\r\no\r\np\r\ns\r\nd\r\nf\r\ng\r\nh\r\nj\r\nk\r\nl\r\nz\r\nx\r\nc\r\nv\r\nb\r\nn\r\nm\r\n“\r\n”\r\n恩\r\n\"\r\n'\r\n(\r\n)\r\n*\r\nA\r\n白\r\n--\r\n..\r\n>>\r\n [\r\n ]\r\n\r\n<\r\n>\r\n/\r\n\\\r\n|\r\n-\r\n_\r\n+\r\n=\r\n&\r\n^\r\n%\r\n#\r\n@\r\n`\r\n（\r\n）\r\n——\r\n—\r\n￥\r\n·\r\n...\r\n‘\r\n’\r\n〉\r\n〈\r\n…\r\n＞\r\n＜\r\n＠\r\n＃\r\n＄\r\n％\r\n︿\r\n＆\r\n＊\r\n＋\r\n～\r\n｜\r\n［\r\n］\r\n｛\r\n｝\r\n!\r\n#\r\n%\r\n&\r\n'\r\n(\r\n)\r\n*\r\n+\r\n,\r\n-\r\n.\r\n/\r\n100%\r\n100％\r\n10元\r\n:\r\n;\r\n=\r\n?\r\n@\r\n[\r\n\\\r\n]\r\n^\r\n_\r\n`\r\na\r\namp\r\nb\r\nc\r\ncm\r\nd\r\ne\r\nf\r\ng\r\ngt\r\nh\r\ni\r\nj\r\nk\r\nl\r\nldquo\r\nlove\r\nlt\r\nm\r\nmdash\r\nmiddot\r\nmm\r\nn\r\nno\r\no\r\nquot\r\nr\r\nrarr\r\nrdquo\r\ns\r\nsect\r\nt\r\ntimes\r\nv\r\nw\r\nx\r\ny\r\nz\r\n{\r\n|\r\n}\r\n~\r\n　\r\n、\r\n。\r\n～\r\n‖\r\n“\r\n”\r\n「\r\n」\r\n『\r\n』\r\n〖\r\n〗\r\n【\r\n】\r\n⊙\r\n≮\r\n≯\r\n☆\r\n★\r\n●\r\n◎\r\n◇\r\n◆\r\n■\r\n▲\r\n※\r\n→\r\n〓\r\n！\r\n￥\r\n＆\r\n（\r\n）\r\n＊\r\n＋\r\n，\r\n－\r\n．\r\n／\r\n：\r\n；\r\n＞\r\n？\r\n［\r\n＼\r\n］\r\n｛\r\n｝\r\nの\r\n◢\r\n◣\r\n◤\r\n◥\r\n㊣\r\n\"\r\n“\r\n”\r\n\"\r\n\"\r\n‘\r\n’\r\n'\r\n'\r\n〇\r\n\u0011\r\n－\r\n–\r\n—\r\n―\r\n︱\r\n゛\r\n＂\r\n＃\r\n＄\r\n＆\r\n︶\r\n＊\r\n﹐\r\n﹑\r\n．\r\n／\r\n﹕\r\n；\r\n＠\r\n［\r\n＼\r\n］\r\n＾\r\n＿\r\n﹍\r\n﹎\r\n﹏\r\n｛\r\n｜\r\n｝\r\n～\r\n¨\r\nˉ\r\nˇ\r\n˙\r\n‖\r\n‘\r\n’\r\n′\r\n″\r\n﹉\r\n﹊\r\n﹋\r\n﹌\r\n︴\r\n〈\r\n︿\r\n〉\r\n﹀\r\n《\r\n》\r\n「\r\n」\r\n『\r\n﹃\r\n』\r\n【\r\n︻\r\n】\r\n〔\r\n〕\r\n〖\r\n〗\r\n〝\r\n〞\r\n〃\r\n〆\r\n＋\r\n∕\r\n⊙\r\n＜\r\n＝\r\n＞\r\n±\r\n×\r\n÷\r\n∈\r\n∏\r\n∑\r\n√\r\n∝\r\n∟\r\n∠\r\n∣\r\n∧\r\n∨\r\n∩\r\n∪\r\n∫\r\n∮\r\n∴\r\n∵\r\n∶\r\n∷\r\n∽\r\n≈\r\n≌\r\n≒\r\n≠\r\n≡\r\n≤\r\n≥\r\n≦\r\n≮\r\n≯\r\n⊥\r\n⊿\r\n⌒\r\n□\r\n△\r\n▼\r\n▽\r\n◇\r\n○\r\n◎\r\n◢\r\n◣\r\n◤\r\n◥\r\n↑\r\n↗\r\n→\r\n↘\r\n↓\r\n↙\r\n←\r\n↖\r\n─\r\n━\r\n┄\r\n┅\r\n┈\r\n┉\r\n═\r\n│\r\n┃\r\n┆\r\n┇\r\n┊\r\n┋\r\n║\r\n┌\r\n┍\r\n┎\r\n┏\r\n╒\r\n╓\r\n╔\r\n╭\r\n┐\r\n┑\r\n┒\r\n┓\r\n╕\r\n╖\r\n╗\r\n╮\r\n└\r\n┕\r\n┖\r\n┗\r\n╘\r\n╙\r\n╚\r\n╰\r\n┘\r\n┙\r\n┚\r\n┛\r\n╛\r\n╜\r\n╝\r\n╯\r\n├\r\n┝\r\n┞\r\n┟\r\n┠\r\n┡\r\n┢\r\n┣\r\n╞\r\n╟\r\n╠\r\n┤\r\n┥\r\n┦\r\n┧\r\n┨\r\n┩\r\n┪\r\n┫\r\n╡\r\n╢\r\n╣\r\n┬\r\n┭\r\n┮\r\n┯\r\n┰\r\n┱\r\n┲\r\n┳\r\n╤\r\n╥\r\n╦\r\n┴\r\n┵\r\n┶\r\n┷\r\n┸\r\n┹\r\n┺\r\n┻\r\n╧\r\n╨\r\n╩\r\n┼\r\n┽\r\n┾\r\n┿\r\n╀\r\n╁\r\n╂\r\n╄\r\n╅\r\n╆\r\n╇\r\n╈\r\n╉\r\n╊\r\n╋\r\n╪\r\n╫\r\n╬\r\n╱\r\n╲\r\n╳\r\n▁\r\n▏\r\n▔\r\n▕\r\n▂\r\n▎\r\n▃\r\n▍\r\n▄\r\n▌\r\n▅\r\n▋\r\n▆\r\n▇\r\n▉\r\n█\r\n▓\r\n￠\r\n￡\r\n¤\r\n￥\r\n§\r\n°\r\n·\r\n…\r\n‰\r\n※\r\n〓\r\n☆\r\n♀\r\n♂\r\n"
  },
  {
    "path": "src/DictBuilder.py",
    "content": "#!/usr/bin/python\n# -*-coding:utf8-*-\n'''\nCreated on 2013-10-12\n@author:   zyy_max\n@brief: build word, idf dict from input_folder\n@modified: 2013-10-15 ==> check whether input a folder or a file\n@modified: 2013-11-06 ==> build dict from token list, load ori_dict\n'''\nfrom collections import defaultdict\nimport os\nimport sys\n\n\nclass WordDictBuilder:\n    def __init__(self, ori_path='', filelist=[], tokenlist=[]):\n        self.word_dict = defaultdict(int)\n        if ori_path != '' and os.path.exists(ori_path):\n            with open(ori_path) as ins:\n                for line in ins.readlines():\n                    self.word_dict[line.split('\\t')[1]] = int(line.split('\\t')[2])\n        self.filelist = filelist\n        self.tokenlist = tokenlist\n\n    def run(self):\n        for filepath in self.filelist:\n            self._updateDict(filepath)\n        self._updateDictByTokenList()\n        return self\n\n    def _updateDict(self, filepath):\n        with open(filepath, 'r') as ins:\n            for line in ins.readlines():\n                for word in line.rstrip().split():\n                    self.word_dict[word] += 1\n\n    def _updateDictByTokenList(self):\n        for token in self.tokenlist:\n            if isinstance(token, unicode):\n                token = token.encode('utf8')\n            self.word_dict[token] += 1\n\n    def save(self, filepath):\n        l = [(value, key) for key, value in self.word_dict.items()]\n        l = sorted(l, reverse=True)\n        result_lines = []\n        for idx, (value, key) in enumerate(l):\n            result_lines.append('%s\\t%s\\t%s%s' % (idx, key, value, os.linesep))\n        with open(filepath, 'w') as outs:\n            outs.writelines(result_lines)\n\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 3:\n        print \"Usage:\\tWordDictBuilder.py <input_folder/file> <output_file>\"\n        exit(-1)\n    if not os.path.isfile(sys.argv[1]):\n        filelist = [sys.argv[1] + os.sep + f for f in os.listdir(sys.argv[1])]\n    else:\n        filelist = [sys.argv[1]]\n    builder = WordDictBuilder(filelist=filelist)\n    builder.run()\n    builder.save(sys.argv[2])\n"
  },
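  {
    "path": "test/test_dictbuilder.py",
    "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n'''\n@brief: minimal unit-test sketch for src/DictBuilder.py (illustrative only;\n        the fixture tokens below are assumptions, not project data)\n'''\nimport unittest\nimport sys\nsys.path.append('..')\nfrom src.DictBuilder import WordDictBuilder\n\n\nclass WordDictBuilderTestCase(unittest.TestCase):\n\n    def testUpdateByTokenList(self):\n        # build a frequency dict from an in-memory token list (no files needed)\n        wdb = WordDictBuilder(tokenlist=[u'a', u'a', u'b']).run()\n        self.assertEqual(2, wdb.word_dict['a'])\n        self.assertEqual(1, wdb.word_dict['b'])\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },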
  {
    "path": "src/DictUtils.py",
    "content": "#!/usr/bin/env python\n'''\nCreated on 2013-11-14\n@author zyy_max\n@brief utils for word dictionary\n'''\n\nclass WordDict(dict):\n    \"\"\"\n    @brief init, update and save word dictionary\n    \"\"\"\n    def __init__(self, dict_path=None):\n        if dict_path is not None:\n            self.load_dict(dict_path)\n    def load_dict(self, dict_path):\n        self.dict_path = dict_path\n        print 'Loading word dictionary from %s...' % dict_path\n        self.clear()\n        with open(dict_path, 'r') as ins:\n            for line in ins.readlines():\n                wordid, word = line.strip().split()\n                if isinstance(word, str):\n                    word = word.decode('utf8')\n                self[word] = int(wordid)\n        return self\n    def add_one(self, word):\n        if isinstance(word, str):\n            word = word.decode('utf8')\n        if not word in self:\n            max_id = max([0] + self.values())\n            self[word] = max_id+1\n        return self\n    def save_dict(self, dict_path):\n        print 'Saving word dictionary to %s...' % dict_path\n        word_list = self.items()\n        with open(dict_path, 'w') as outs:\n            for word, wordid in sorted(word_list):\n                outs.write('%s\\t%s\\n' % (wordid, word)) \n    def __del__(self):\n        self.save_dict(self.dict_path)\n       \n"
  },
  {
    "path": "src/DocUtils.py",
    "content": "#!/usr/bin/env python\n'''\nCreated on 2013-11-14\n@author zyy_max\n@brief DocDict for loading docs from db or file, update and save them\n'''\n\nclass DocDict(dict):\n    \"\"\"\n    @brief load docs, update and \n    \"\"\"\n    def __init__(self, fpath=None):\n        self.fpath = fpath\n        if fpath is not None:\n            self.load_from_file(fpath)\n    def load_from_db(self):\n        print 'Loading from db' \n        self.clear()\n    def load_from_file(self, fpath):\n        print 'Loading documents from file:',fpath\n        self.fpath = fpath\n        self.clear()\n        with open(fpath, 'r') as ins:\n            for line in ins.readlines():\n                docid, doc_str = line.strip().split('\\t')\n                self[int(docid)] = doc_str\n        return self\n    def update(self, docid, doc_str):\n        if not docid in self:\n            self[docid] = doc_str\n        return self\n    def save_to_file(self, fpath):\n        with open(fpath, 'w') as outs:\n            for key in sorted(self.keys()):\n                outs.write('%s\\t%s\\n' %(key, self[key]))\n    def __del__(self):\n        self.save_to_file(self.fpath)\n\n\n"
  },
  {
    "path": "src/Utils.py",
    "content": "#!/usr/bin/env python\n#-*-coding:utf8-*-\n'''\n@Created on 2013-10-21\n@author zyy_max\n@brief utils of common methods\n@modified on 2013-10-23 ==> change break condition of cosine(euclidean)_distance_nonzero\n'''\n\nimport math\n\ndef norm_vector_nonzero(ori_vec):\n    ori_sum = math.sqrt(sum([math.pow(float(value),2) for (idx,value) in ori_vec]))\n    if ori_sum < 1e-6:\n        return ori_vec\n    result_vec = []\n    for idx, ori_value in ori_vec:\n        result_vec.append((idx, float(ori_value)/ori_sum))\n    #print ori_sum\n    return result_vec\n\ndef cosine_distance_nonzero(feat_vec1, feat_vec2, norm=True):\n    if True == norm:\n        feat_vec1 = norm_vector_nonzero(feat_vec1)\n        feat_vec2 = norm_vector_nonzero(feat_vec2)\n    dist = 0\n    idx1 = 0\n    idx2 = 0\n    while idx1 < len(feat_vec1) and idx2 < len(feat_vec2):\n        if feat_vec1[idx1][0] == feat_vec2[idx2][0]:\n            dist += float(feat_vec1[idx1][1])*float(feat_vec2[idx2][1])\n            idx1 += 1\n            idx2 += 1\n        elif feat_vec1[idx1][0] > feat_vec2[idx2][0]:\n            idx2 += 1\n        else:\n            idx1 += 1\n    return dist\n\ndef euclidean_distance_nonzero(feat_vec1, feat_vec2, norm=True):\n    if True == norm:\n        feat_vec1 = norm_vector_nonzero(feat_vec1)\n        feat_vec2 = norm_vector_nonzero(feat_vec2)\n    dist = 0\n    length = min(len(feat_vec1), len(feat_vec2))\n    idx1 = 0\n    idx2 = 0\n    while idx1 < len(feat_vec1) and idx2 < len(feat_vec2):\n        if feat_vec1[idx1][0] > feat_vec2[idx2][0]:\n            dist += math.pow(float(feat_vec2[idx2][1]), 2)\n            idx2 += 1\n        elif feat_vec1[idx1][0] < feat_vec2[idx2][0]:\n            dist += math.pow(float(feat_vec1[idx1][1]), 2)\n            idx1 += 1\n        else:\n            dist += math.pow(float(feat_vec1[idx1][1])-float(feat_vec2[idx2][1]), 2)\n            idx2 += 1\n            idx1 += 1\n    return math.sqrt(dist)\n\ndef norm_vector(ori_vec):\n    ori_sum = math.sqrt(sum([math.pow(float(x),2) for x in ori_vec]))\n    if ori_sum < 1e-6:\n        return ori_vec\n    result_vec = []\n    for ori_value in ori_vec:\n        result_vec.append(float(ori_value)/ori_sum)\n    #print ori_sum\n    return result_vec\n\ndef cosine_distance(feat_vec1, feat_vec2, norm=True):\n    dist = 0\n    if True == norm:\n        feat_vec1 = norm_vector(feat_vec1)\n        feat_vec2 = norm_vector(feat_vec2)\n    for idx, feat1 in enumerate(feat_vec1):\n        if idx >= len(feat_vec2):\n            break\n        if abs(float(feat1)) < 1e-6  or abs(float(feat_vec2[idx])) < 1e-6:\n            continue\n        dist += float(feat1)*float(feat_vec2[idx])\n        #print dist\n    return dist\n\ndef euclidean_distance(feat_vec1, feat_vec2, norm=True):\n    dist = 0\n    if True == norm:\n        feat_vec1 = norm_vector(feat_vec1)\n        feat_vec2 = norm_vector(feat_vec2)\n    len1 = len(feat_vec1)\n    len2 = len(feat_vec2)\n    for idx in xrange(min(len2,len2)):\n        dist += math.pow(float(feat_vec1[idx])-float(feat_vec2[idx]),2)\n    if len1 < len2:\n        dist += sum([math.pow(float(feat),2) for feat in feat_vec2[len1-len2:]])\n    if len1 > len2:\n        dist += sum([math.pow(float(feat),2) for feat in feat_vec1[len2-len1:]])\n    return math.sqrt(dist)\n\n\n"
  },
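  {
    "path": "test/test_utils.py",
    "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n'''\n@brief: minimal unit-test sketch for the sparse distance helpers in src/Utils.py\n        (illustrative only; the fixture vectors are assumptions, not project data)\n'''\nimport unittest\nimport sys\nsys.path.append('..')\nfrom src.Utils import cosine_distance_nonzero, euclidean_distance_nonzero\n\n\nclass DistanceTestCase(unittest.TestCase):\n\n    def testCosineSelf(self):\n        # a normalized vector has cosine similarity 1.0 with itself\n        vec = [(0, 3.0), (1, 4.0)]\n        self.assertAlmostEqual(1.0, cosine_distance_nonzero(vec, vec))\n\n    def testCosineOrthogonal(self):\n        # vectors with no common index have cosine similarity 0.0\n        self.assertAlmostEqual(0.0, cosine_distance_nonzero([(0, 1.0)], [(1, 1.0)]))\n\n    def testEuclideanSelf(self):\n        # the euclidean distance of a vector to itself is 0.0\n        vec = [(0, 3.0), (1, 4.0)]\n        self.assertAlmostEqual(0.0, euclidean_distance_nonzero(vec, vec))\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },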
  {
    "path": "src/__init__.py",
    "content": "__author__ = 'max.zhang'\n"
  },
  {
    "path": "src/features.py",
    "content": "#!/usr/bin/python\n#-*-coding:utf8-*-\n'''\nCreated on 2013-10-13\n@author: zyy_max\n@brief: build feature vector with word_dict and token_list\n@modified: 2013-10-15 ==> add upate_word for FeatureBuilder\n@modified: 2013-11-06 ==> add feature_nonzero\n@modified: 2013-11-15 ==> add FeatureBuilderUpdate\n                          word_dict is WordDict in DictUtils\n'''\nimport os,sys\nclass FeatureBuilder:\n    def __init__(self, word_dict):\n        self.word_dict = word_dict\n    \n    def compute(self, token_list):\n        feature = [0]*len(self.word_dict)\n        for token in token_list:\n            feature[self.word_dict[token]] += 1\n        feature_nonzero = [(idx,value) for idx, value in enumerate(feature) if value > 0]\n        return feature_nonzero\n\n    def _add_word(self, word):\n        if not word in self.word_dict:\n            self.word_dict[word] = len(self.word_dict)\n\n    def update_words(self, word_list=[]):\n        for word in word_list:\n            self._add_word(word)\n\nclass FeatureBuilderUpdate(FeatureBuilder):\n    def _add_word(self, word):\n        self.word_dict.add_one(word)\n\n\ndef feature_single(inputfile, outputfile):\n    print inputfile,outputfile\n    result_lines = []\n    with open(inputfile, 'r') as ins:\n        for lineidx, line in enumerate(ins.readlines()):\n            feature = fb.compute([token.decode('utf8') for token in line.strip().split()])\n            l = []\n            for idx,f in feature:\n                if f > 1e-6:\n                    l.append('%s:%s' %(idx,f))\n            result_lines.append(' '.join(l) + os.linesep)\n            print 'Finished\\r', lineidx,\n    with open(outputfile, 'w') as outs:\n        outs.writelines(result_lines)\n    print 'Wrote to ', outputfile\n\nif __name__==\"__main__\":\n    if len(sys.argv) < 5:\n        print \"Usage:\\tfeature.py -s/-m <word_dict_path> <tokens_file/tokens_folder> <feature_file/feature_folder>\"\n        exit(-1)\n    word_dict = {}\n    with open(sys.argv[2], 'r') as ins:\n        for line in ins.readlines():\n            l = line.split()\n            word_dict[l[1].decode('utf8')] = int(l[0])\n    fb = FeatureBuilder(word_dict)\n    print 'Loaded', len(word_dict), 'words'\n    if sys.argv[1] == '-s':\n        feature_single(sys.argv[3], sys.argv[4])\n    elif sys.argv[1] == '-m':\n        for inputfile in os.listdir(sys.argv[3]):\n            feature_single(os.path.join(sys.argv[3],inputfile), os.path.join(sys.argv[4],inputfile.replace('.token','.feat')))\n"
  },
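  {
    "path": "test/test_features.py",
    "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n'''\n@brief: minimal unit-test sketch for src/features.py (illustrative only;\n        the toy word dict and tokens are assumptions, not project data)\n'''\nimport unittest\nimport sys\nsys.path.append('..')\nfrom src.features import FeatureBuilder\n\n\nclass FeatureBuilderTestCase(unittest.TestCase):\n\n    def testComputeNonzero(self):\n        # term frequencies are returned as sparse (index, count) pairs;\n        # tokens missing from the dict (u'x' here) are ignored\n        fb = FeatureBuilder({u'a': 0, u'b': 1})\n        self.assertEqual([(0, 2), (1, 1)], fb.compute([u'a', u'a', u'b', u'x']))\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },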
  {
    "path": "src/isSimilar.py",
    "content": "#!/usr/bin/env python\n# -*-coding:utf8-*-\n'''\nCreated on 2013-11-06\n@author zyy_max\n@brief check the similarity of 2 documents by VSM+cosine distance or simhash+hamming distance\n'''\nimport sys\nfrom simhash_imp import SimhashBuilder, hamming_distance\nfrom tokens import JiebaTokenizer\nfrom features import FeatureBuilder\nfrom Utils import norm_vector_nonzero, cosine_distance_nonzero\n\n\nclass DocFeatLoader:\n    def __init__(self, simhash_builder, feat_nonzero):\n        self.feat_vec = feat_nonzero\n        self.feat_vec = norm_vector_nonzero(self.feat_vec)\n        self.fingerprint = simhash_builder.sim_hash_nonzero(self.feat_vec)\n\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 7:\n        print \"Usage:\\tisSimilar.py <doc1> <doc2> <stopword_path> <word_dict> <-c/-s> <threshold>\"\n        exit(-1)\n    doc_path_1, doc_path_2, stopword_path, word_dict, mode, threshold = sys.argv[1:]\n    print 'Arguments:', sys.argv[1:]\n    with open(doc_path_1) as ins:\n        doc_data_1 = ins.read().decode('utf8')\n        print 'Loaded', doc_path_1\n    with open(doc_path_2) as ins:\n        doc_data_2 = ins.read().decode('utf8')\n        print 'Loaded', doc_path_2\n\n    # Init tokenizer\n    jt = JiebaTokenizer(stopword_path, 'c')\n\n    # Tokenization\n    doc_token_1 = jt.tokens(doc_data_1)\n    doc_token_2 = jt.tokens(doc_data_2)\n\n    print 'Loading word dict...'\n    # Load word list from word_dict\n    word_list = []\n    with open(word_dict, 'r') as ins:\n        for line in ins.readlines():\n            word_list.append(line.split()[1])\n\n    # Build unicode string word dict\n    word_dict = {}\n    for idx, ascword in enumerate(word_list):\n        word_dict[ascword.decode('utf8')] = idx\n        # Build nonzero-feature\n    fb = FeatureBuilder(word_dict)\n    doc_feat_1 = fb.compute(doc_token_1)\n    doc_feat_2 = fb.compute(doc_token_2)\n\n    # Init simhash_builder\n    smb = SimhashBuilder(word_list)\n\n    doc_fl_1 = DocFeatLoader(smb, doc_feat_1)\n    doc_fl_2 = DocFeatLoader(smb, doc_feat_2)\n\n    if mode == '-c':\n        print 'Matching by VSM + cosine distance'\n        dist = cosine_distance_nonzero(doc_fl_1.feat_vec, doc_fl_2.feat_vec, norm=False)\n        if dist > float(threshold):\n            print 'Matching Result:\\t<True:%s>' % dist\n        else:\n            print 'Matching Result:\\t<False:%s>' % dist\n    elif mode == '-s':\n        print 'Matching by Simhash + hamming distance'\n        dist = hamming_distance(doc_fl_1.fingerprint, doc_fl_2.fingerprint)\n        if dist < float(threshold):\n            print 'Matching Result:\\t<True:%s>' % dist\n        else:\n            print 'Matching Result:\\t<False:%s>' % dist\n"
  },
  {
    "path": "src/launch.py",
    "content": "#!/usr/bin/env python\n#-*-coding:utf8-*-\n'''\nCreated on 2013-10-14\n@author: zyy_max\n@brief: launch entry of near-duplicate detection system\n'''\n\nimport os\nimport sys\nfrom tokens import JiebaTokenizer\nfrom simhash_imp import SimhashBuilder, hamming_distance\nfrom features import FeatureBuilder\n\nif __name__==\"__main__\":\n    if len(sys.argv) < 7:\n        print \"Usage:\\tlaunch.py word_dict_path stop_words_path fingerprint_path documents_path test_path result_path\"\n        exit(-1)\n    # Load word list\n    word_list = []\n    with open(sys.argv[1], 'r') as ins:\n        for line in ins.readlines():\n            word_list.append(line.split()[1])\n    # Init tokenizer\n    jt = JiebaTokenizer(sys.argv[2], 'c')\n    # Init feature_builder\n    word_dict = {}\n    for idx, ascword in enumerate(word_list):\n        word_dict[ascword.decode('utf8')] = idx\n    fb = FeatureBuilder(word_dict)\n    # Init simhash_builder\n    smb = SimhashBuilder(word_list)\n    # Load fingerprint list\n    fingerprint_list = []\n    with open(sys.argv[3], 'r') as ins:\n        for line in ins.readlines():\n            fingerprint_list.append(int(line))\n    # For exp: load document content\n    doc_list = []\n    with open(sys.argv[4], 'r') as ins:\n        for line in ins.readlines():\n            doc_list.append(line.strip())\n    # Detection process begins\n    min_sim = 64\n    min_docid = 0\n    with open(sys.argv[5], 'r') as ins:\n        for lineidx, line in enumerate(ins.readlines()):\n            if lineidx != 642:\n                continue\n            # Tokenize\n            tokens = jt.tokens(line.strip().decode('utf8'))\n            # Compute text feature\n            feature = fb.compute(tokens)\n            # Compute simhash\n            fingerprint = smb.sim_hash(feature)\n            result_list = []\n            for idx, fp in enumerate(fingerprint_list):\n                sim = hamming_distance(fingerprint, fp, 64)\n                result_list.append((sim, idx))\n            result_list = sorted(result_list, cmp=lambda x,y: cmp(x[0],y[0]))\n            if result_list[0][0] < min_sim:\n                min_sim, min_docid = result_list[0][0], lineidx\n            #'''\n            with open(sys.argv[6], 'w') as outs:\n                outs.write(line.strip()+os.linesep)\n                for sim, idx in result_list:\n                    outs.write('%s\\t%s%s' %(sim, doc_list[idx], os.linesep)) \n            #'''\n            #if lineidx == 2:\n            #    break           \n    print min_sim, min_docid\n\n"
  },
  {
    "path": "src/launch_incre.py",
    "content": "#!/usr/bin/env python\n#-*-coding:utf8-*-\n'''\nCreated on 2013-10-15\n@author: zyy_max\n@brief: incremental-version launch entry of near-duplicate detection system\n'''\n\nimport os\nimport sys\nfrom tokens import JiebaTokenizer\nfrom simhash_imp import SimhashBuilder, hamming_distance\nfrom features import FeatureBuilder\n\n\nclass FeatureContainer:\n    def __init__(self, word_dict_path):\n        # Load word list\n        self.word_dict_path = word_dict_path\n        self.word_list = []\n        with open(word_dict_path, 'r') as ins:\n            for line in ins.readlines():\n                self.word_list.append(line.split()[1])\n        self.word_dict = {}\n        for idx, ascword in enumerate(self.word_list):\n            self.word_dict[ascword.decode('utf8')] = idx\n        self.fb = FeatureBuilder(self.word_dict)\n        self.smb = SimhashBuilder(self.word_list)\n        print 'Loaded ', len(self.word_list), 'words'\n\n    def compute_feature(self, token_list):\n        new_words = []\n        for token in token_list:\n            if not token in self.word_dict:\n                new_words.append(token)\n        if len(new_words) != 0:\n            # Update word_list and word_dict\n            self.fb.update_words(new_words)\n            self.smb.update_words([word.encode('utf8') for word in new_words])\n            self.word_dict = self.fb.word_dict\n            self.word_list.extend([word.encode('utf8') for word in new_words])\n        feature_vec = self.fb.compute(token_list)\n        return feature_vec, self.smb.sim_hash(feature_vec)\n'''\n    def __del__(self):\n        with open(self.word_dict_path, 'w') as outs:\n            for idx, word in enumerate(self.word_list):\n                outs.write('%s\\t%s%s'%(idx, word, os.linesep))\n'''\nif __name__==\"__main__\":\n    if len(sys.argv) < 7:\n        print \"Usage:\\tlaunch_inc.py <word_dict_path> <stop_words_path> <fingerprint_path> <documents_path> <test_path> <result_path>\"\n        exit(-1)\n    # Init tokenizer\n    jt = JiebaTokenizer(sys.argv[2], 'c')\n    # Init feature_builder and simhash_builder \n    fc = FeatureContainer(sys.argv[1])\n    # Load fingerprint list\n    fingerprint_list = []\n    with open(sys.argv[3], 'r') as ins:\n        for line in ins.readlines():\n            fingerprint_list.append(int(line))\n    # For exp: load document content\n    doc_list = []\n    with open(sys.argv[4], 'r') as ins:\n        for line in ins.readlines():\n            doc_list.append(line.strip())\n    # Detection process begins\n    min_sim = 64\n    min_docid = 0\n    with open(sys.argv[5], 'r') as ins:\n        for lineidx, line in enumerate(ins.readlines()):\n            # Tokenize\n            tokens = jt.tokens(line.strip().decode('utf8'))\n            feature, fingerprint = fc.compute_feature(tokens)\n            result_list = []\n            for idx, fp in enumerate(fingerprint_list):\n                sim = hamming_distance(fingerprint, fp, 64)\n                result_list.append((sim, idx))\n            result_list = sorted(result_list, cmp=lambda x,y: cmp(x[0],y[0]))\n            if result_list[0][0] < min_sim:\n                min_sim, min_docid = result_list[0][0], lineidx\n            #'''\n            with open(sys.argv[6], 'w') as outs:\n                outs.write(line.strip()+os.linesep)\n                for sim, idx in result_list:\n                    outs.write('%s\\t%s%s' %(sim, doc_list[idx], os.linesep)) \n            #'''\n            #if lineidx == 2:\n            #    break   \n  
  with open('word_dict_new.txt', 'w') as outs:\n        for idx, word in enumerate(fc.word_list):\n            outs.write('%s\\t%s%s'%(idx, word, os.linesep))\n            \n"
  },
  {
    "path": "src/preprocess.py",
    "content": "#!/usr/bin/env python\r\n#-*-coding:utf8-*-\r\n'''\r\nCreated on 2013-11-06\r\n@author zyy_max\r\n@brief update word_dict by token result of document\r\n'''\r\nimport os\r\nimport sys\r\nimport time\r\nfrom tokens import JiebaTokenizer\r\nfrom DictBuilder import WordDictBuilder\r\n\r\nif __name__==\"__main__\":\r\n    if len(sys.argv) < 4:\r\n        print \"Usage:\\tpreprocess.py <docpath> <stopword_path> <worddict_path>\"\r\n        exit(-1)\r\n    doc_path, stopword_path, worddict_path = sys.argv[1:]\r\n    print 'Arguments:',sys.argv[1:]\r\n    \r\n    # Init tokenizer\r\n    jt = JiebaTokenizer(stopword_path, 'c')\r\n    # Load doc data\r\n    with open(doc_path) as ins:\r\n        doc_data = ins.read().decode('utf8')\r\n    # Tokenization\r\n    doc_tokens = jt.tokens(doc_data)\r\n    # Write to token file\r\n    with open(doc_path[:doc_path.rfind('.')]+'.token', 'w') as outs:\r\n        outs.write('/'.join([token.encode('utf8') for token in doc_tokens]))\r\n    \r\n    # Load original word dict, update and save\r\n    wdb = WordDictBuilder(worddict_path, tokenlist=doc_tokens)\r\n    wdb.run()\r\n    wdb.save(worddict_path)\r\n    print 'Totally', len(wdb.word_dict), 'words'\r\n    \r\n"
  },
  {
    "path": "src/simhash_imp.py",
    "content": "#!/usr/bin/env python\n# -*- coding=utf-8 -*-\n'''\nCreated on 2013-10-13\n@author: zyy_max\n@brief: build simhash and compute hamming_distance\n@modified: 2013-10-15 ==> add update_word for SimhashBuilder\n'''\n\n# Implementation of Charikar simhashes in Python\n# See: http://dsrg.mff.cuni.cz/~holub/sw/shash/#a1\n\nimport os, sys\n\ndef hamming_distance(hash_a, hash_b, hashbits=128):\n    x = (hash_a ^ hash_b) & ((1 << hashbits) - 1)\n    tot = 0\n    while x:\n        tot += 1\n        x &= x-1\n    return tot\nclass SimhashBuilder:\n    def __init__(self, word_list=[], hashbits=128):\n        self.hashbits = hashbits\n        self.hashval_list = [self._string_hash(word) for word in word_list]\n        print 'Totally: %s words' %(len(self.hashval_list),)\n        \"\"\"\n        with open('word_hash.txt', 'w') as outs:\n            for word in word_list:\n                outs.write(word+'\\t'+str(self._string_hash(word))+os.linesep)\n        \"\"\"\n\n    def _string_hash(self, word):\n        # A variable-length version of Python's builtin hash\n        if word == \"\":\n            return 0\n        else:\n            x = ord(word[0])<<7\n            m = 1000003\n            mask = 2**self.hashbits-1\n            for c in word:\n                x = ((x*m)^ord(c)) & mask\n            x ^= len(word)\n            if x == -1:\n                x = -2\n            return x\n\n    def sim_hash_nonzero(self, feature_vec):\n        finger_vec = [0]*self.hashbits\n        # Feature_vec is like [(idx,nonzero-value),(idx,nonzero-value)...]\n        for idx, feature in feature_vec:\n            hashval = self.hashval_list[int(idx)]\n            for i in range(self.hashbits):\n                bitmask = 1<<i\n                if bitmask&hashval != 0:\n                    finger_vec[i] += float(feature)\n                else:\n                    finger_vec[i] -= float(feature)\n        #print finger_vec\n        fingerprint = 0\n        for i in range(self.hashbits):\n            if finger_vec[i] >= 0:\n                fingerprint += 1 << i\n#整个文档的fingerprint为最终各个位大于等于0的位的和\n        return fingerprint    \n    \n    def sim_hash(self, feature_vec):\n        finger_vec = [0]*self.hashbits\n        for idx, feature in enumerate(feature_vec):\n            if float(feature) < 1e-6:\n                continue\n            hashval = self.hashval_list[idx]\n            for i in range(self.hashbits):\n                bitmask = 1<<i\n                if bitmask&hashval != 0:\n                    finger_vec[i] += float(feature)\n                else:\n                    finger_vec[i] -= float(feature)\n        #print finger_vec\n        fingerprint = 0\n        for i in range(self.hashbits):\n            if finger_vec[i] >= 0:\n                fingerprint += 1 << i\n#整个文档的fingerprint为最终各个位大于等于0的位的和\n        return fingerprint\n\n    def _add_word(self, word):\n        self.hashval_list.append(self._string_hash(word))\n\n    def update_words(self, word_list=[]):\n        for word in word_list:\n            self._add_word(word)\n\nclass simhash():\n    def __init__(self, tokens='', hashbits=128):\n        self.hashbits = hashbits\n        self.hash = self.simhash(tokens)\n\n    def __str__(self):\n        return str(self.hash)\n\n    def __long__(self):\n        return long(self.hash)\n\n    def __float__(self):\n        return float(self.hash)\n\n    def simhash(self, tokens):\n        # Returns a Charikar simhash with appropriate bitlength\n        v = [0]*self.hashbits\n\n        for t in 
[self._string_hash(x) for x in tokens]:\n            bitmask = 0\n            #print (t)\n            for i in range(self.hashbits):\n                bitmask = 1 << i\n                #print(t,bitmask, t & bitmask)\n                if t & bitmask:\n                    v[i] += 1 #查看当前bit位是否为1，是的话则将该位+1\n                else:\n                    v[i] += -1 #否则得话，该位减1\n\n        fingerprint = 0\n        for i in range(self.hashbits):\n            if v[i] >= 0:\n                fingerprint += 1 << i\n#整个文档的fingerprint为最终各个位大于等于0的位的和\n        return fingerprint\n\n    def _string_hash(self, v):\n        # A variable-length version of Python's builtin hash\n        if v == \"\":\n            return 0\n        else:\n            x = ord(v[0])<<7\n            m = 1000003\n            mask = 2**self.hashbits-1\n            for c in v:\n                x = ((x*m)^ord(c)) & mask\n            x ^= len(v)\n            if x == -1:\n                x = -2\n            return x\n\n    def hamming_distance(self, other_hash):\n        x = (self.hash ^ other_hash.hash) & ((1 << self.hashbits) - 1)\n        tot = 0\n        while x:\n            tot += 1\n            x &= x-1\n        return tot\n\n    def similarity(self, other_hash):\n        a = float(self.hash)\n        b = float(other_hash)\n        if a>b: return b/a\n        return a/b\n\nif __name__ == '__main__':\n    #看看哪些东西google最看重？标点？\n    #s = '看看哪些东西google最看重？标点？'\n    #hash1 =simhash(s.split())\n    #print(\"0x%x\" % hash1)\n    #print (\"%s\\t0x%x\" % (s, hash1))\n\n    #s = '看看哪些东西google最看重！标点！'\n    #hash2 = simhash(s.split())\n    #print (\"%s\\t[simhash = 0x%x]\" % (s, hash2))\n\n    #print '%f%% percent similarity on hash' %(100*(hash1.similarity(hash2)))\n    #print hash1.hamming_distance(hash2),\"bits differ out of\", hash1.hashbits\n\n    if len(sys.argv) < 4:\n        print \"Usage:\\tsimhash_imp.py <word_dict_path> <feature_file> <finger_print_file>\"\n        exit(-1)\n    word_list = []\n    with open(sys.argv[1], 'r') as ins:\n        for idx, line in enumerate(ins.readlines()):\n            word_list.append(line.split()[1])\n            print '\\rloading word', idx,\n    sim_b = SimhashBuilder(word_list)\n    result_lines = []\n    print ''\n    with open(sys.argv[2], 'r') as ins:\n        for idx, line in enumerate(ins.readlines()):\n            print '\\rprocessing doc', idx,\n            feature_vec = line.strip().split()\n            feature_vec = [(int(item.split(':')[0]),float(item.split(':')[1])) for item in feature_vec]\n            fingerprint = sim_b.sim_hash_nonzero(feature_vec)\n            result_lines.append(str(fingerprint)+os.linesep)\n    with open(sys.argv[3], 'w') as outs:\n        outs.writelines(result_lines)\n\n\n\n"
  },
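  {
    "path": "test/test_simhash.py",
    "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n'''\n@brief: minimal unit-test sketch for src/simhash_imp.py (illustrative only;\n        the two-word vocabulary is an assumption, not project data)\n'''\nimport unittest\nimport sys\nsys.path.append('..')\nfrom src.simhash_imp import SimhashBuilder, hamming_distance\n\n\nclass SimhashBuilderTestCase(unittest.TestCase):\n\n    def setUp(self):\n        self.smb = SimhashBuilder(['hello', 'world'])\n\n    def testIdenticalVectors(self):\n        # identical feature vectors must map to identical fingerprints\n        fp1 = self.smb.sim_hash_nonzero([(0, 1.0), (1, 1.0)])\n        fp2 = self.smb.sim_hash_nonzero([(0, 1.0), (1, 1.0)])\n        self.assertEqual(0, hamming_distance(fp1, fp2))\n\n    def testDistanceRange(self):\n        # hamming distance is bounded by the fingerprint width (128 bits)\n        fp1 = self.smb.sim_hash_nonzero([(0, 1.0), (1, 1.0)])\n        fp3 = self.smb.sim_hash_nonzero([(0, 1.0)])\n        self.assertTrue(0 <= hamming_distance(fp1, fp3) <= self.smb.hashbits)\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },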
  {
    "path": "src/tokens.py",
    "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n'''\nCreated on 20131012\n@author:    zyy_max\n\n@brief: get tokens from input file by jieba\n'''\nimport jieba\nimport os\nimport sys\n\n\nclass JiebaTokenizer:\n    def __init__(self, stop_words_path, mode='s'):\n        self.stopword_set = set()\n        # load stopwords\n        with open(stop_words_path) as ins:\n            for line in ins:\n                self.stopword_set.add(line.strip().decode('utf8'))\n        self.mode = mode\n\n    def tokens(self, intext):\n        intext = u' '.join(intext.split())\n        if self.mode == 's':\n            token_list = jieba.cut_for_search(intext)\n        else:\n            token_list = jieba.cut(intext)\n        return [token for token in token_list if token.strip() != u'' and not token in self.stopword_set]\n\n\ndef token_single_file(input_fname, output_fname):\n    result_lines = []\n    with open(input_fname) as ins:\n        for line in ins:\n            line = line.strip().decode('utf8')\n            tokens = jt.tokens(line)\n            result_lines.append(u' '.join(tokens).encode('utf8'))\n    open(output_fname, 'w').write(os.linesep.join(result_lines))\n    print 'Wrote to ', output_fname\n\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 6 or sys.argv[1] not in ['-s', '-m'] or sys.argv[4] not in ['c', 's']:\n        print \"Usage:\\ttokens.py <file_mode(-s/-m)> <input_file/input_folder> \" \\\n              \"<output_file/output_folder> <cut_mode(c/s)> <stopword.list>\"\n        print \"file_mode:\\t-s:\\tsingle file\"\n        print \"\\t\\t-m:\\tmultiple files\"\n        print \"cut_mode:\\tc:\\tnormal mode of Jieba\"\n        print \"\\t\\ts:\\tcut_for_search mode of Jieba\"\n        exit(-1)\n    file_mode, input_filepath, output_filepath, cut_mode, stopword_file = sys.argv[1:]\n    jt = JiebaTokenizer(stopword_file, cut_mode)\n    # extract tokens and filter by stopwords\n    if file_mode == '-s':\n        token_single_file(input_filepath, output_filepath)\n    elif file_mode == '-m':\n        for input_file in os.listdir(input_filepath):\n            prefix = input_file.rsplit(os.sep, 1)[0]\n            token_single_file(os.path.join(input_filepath, input_file),\n                              os.path.join(output_filepath, prefix+'.token'))\n"
  },
  {
    "path": "src/webcontent_filter.sh",
    "content": "#!/bin/bash\n# Delete nonprint characters\n# Delete 0-9a-zA-z and some useless characters\n# Turn sequence of empty char to single one\n# Delete empty lines\nsed 's/[^[:print:]]//g' $1 \\\n| sed 's/[0-9a-zA-Z+=\\./:\\\"<>|_&#]/ /g' \\\n| sed 's/  */ /g' > $2\n# sed '/^ *$/d' > $2\n"
  },
  {
    "path": "test/test_token.py",
    "content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n'''\nCreated on 20150825\n@author:    zyy_max\n\n@brief: unit test of src/tokens.py\n'''\nimport unittest\nimport sys\nsys.path.append('..')\nfrom src.tokens import JiebaTokenizer\n\n\nclass JiebaTokenizerTestCase(unittest.TestCase):\n\n    def setUp(self):\n        self.jt = JiebaTokenizer(\"../data/stopwords.txt\")\n\n    def testTokens(self):\n        in_text = u\"完整的单元测试很少只执行一个测试用例，\" \\\n                  u\"开发人员通常都需要编写多个测试用例才能\" \\\n                  u\"对某一软件功能进行比较完整的测试，这些\" \\\n                  u\"相关的测试用例称为一个测试用例集，在\" \\\n                  u\"PyUnit中是用TestSuite类来表示的。\"\n        tokens_text = u\"完整/单元/测试/单元测试/只/执行/\" \\\n                      u\"一个/测试/试用/测试用例/开发/发人/\" \\\n                      u\"人员/开发人员/通常/需要/编写/多个/\" \\\n                      u\"测试/试用/测试用例/软件/功能/进行/\" \\\n                      u\"比较/完整/测试/相关/测试/试用/测试用例/\" \\\n                      u\"称为/一个/测试/试用/测试用例/集/PyUnit/\" \\\n                      u\"中是/TestSuite/类来/表示\"\n        self.assertEqual(tokens_text, u'/'.join(self.jt.tokens(in_text)), \"Tokenization Results differ\")\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  }
]