[
  {
    "path": "README.md",
    "content": "# 爬虫Windows环境搭建\n## 安装需要的程序包\n- Python3.4.3 > https://pan.baidu.com/s/1pK8KDcv\n- pip9.0.1  > https://pan.baidu.com/s/1mhNdRN6\n- 编辑器pycharm > https://pan.baidu.com/s/1i4Nkdk5\n- pywin32 > http://pan.baidu.com/s/1pKZiZWZ\n- pyOpenSSL > http://pan.baidu.com/s/1hsgOQJq\n- windows_sdk > http://pan.baidu.com/s/1hrM6iRa\n- phantomjs > http://pan.baidu.com/s/1nvHm5AD\n\n## 安装过程\n\n### 安装基础环境\n1. 安装Python安装包，一路Next\n2. 将Python的安装目录添加到环境变量Path中\n3. win + r 输入Cmd打开命令行窗口，输入Python 测试是否安装成功\n\n### 安装pip\n> pip的作用相当于linux的yum，安装之后可以采用命令行的方式在线安装一些依赖包\n1. 解压pip压缩包到某一目录（推荐与Python基础环境目录同级）\n2. cmd窗口进入pip解压目录\n3. 输入 python setup.py install 进行安装，安装过程中将会在Python目录的scripts目录下进行\n4. 将pip的安装目录 C:\\Python34\\Scripts; 配置到环境变量path中\n5. cmd命令行输入pip list 或者 pip --version 进行检验\n\n### 安装Scrapy\n> Scrapy是一个比较成熟的爬虫框架，使用它可以进行网页内容的抓取，但是对于windows并不友好，我们需要一些类库去支持它\n1. 安装pywin32: 一路next即可\n2. 安装wheel：安装scrapy时需要一些whl文件的安装，whl文件的安装需要预先配置wheel文件。在cmd下使用pip安装 ： pip install wheel\n3. 安装PyOpenSSL：下载完成PyOpenSSL后，进入下载所在目录，执行安装：pip install pyOpenSSl (**注意，执行安装的wheel文件名一定要tab键自动弹出，不要手动敲入**)\n4. 安装lxml: 直接使用pip在线安装 pip install lxml\n> ***在Windows的安装过程中，一定会出现 “error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat).”的问题，也就是无法找到相对应的编译包。一般的做法是下载VisualStudio来获得Complier，但是我们不这样做。***\n\n> 下载windows-sdk后，执行安装操作，如果安装成功，那么这个问题就解决了。如果失败，那么需要先把安装失败过程中的2个编译包卸载。他们分别为：Microsoft Visual C++ 2010  x86 Redistributable、Microsoft Visual C++ 2010  x64 Redistributable（可以使用360或者腾讯管家来卸载）\n\n> 卸载完成之后，在安装确认过程中，不要勾选Visual C++ compiler，这样他第一次就能安装成功。安装成功之后，再次点击sdk进行安装，这时候又需要把Visual C++ compiler勾选上，再次执行安装。完成以上操作后，就不会出现Microsoft Visual C++ 10.0 is required的问题了。\n\n> 如果在安装过程中出现“failed building wheel for xxx”的问题，那么需要手动下载wheel包进行安装，所有的安装文件都可以在[http://www.lfd.uci.edu/~gohlke/pythonlibs/](http://www.lfd.uci.edu/~gohlke/pythonlibs/)里找到，找到需要的包并下载完成后执行pip install xxxx即可。\n\n5. 
# 1 URL Extraction\n## 1.1 How distributed crawling works\nImplementing distribution with scrapy-redis is quite simple in principle. For convenience, the core server is called the master below, and the machines that run the crawler programs are called slaves.\n\nWith scrapy, the crawler is first given some start_urls; it visits those URLs and then, following the specific crawling logic, scrapes elements on those pages or follows second- and third-level pages. To make this distributed, the start_urls are the only place that needs attention.\n\nA redis database is set up on the master (note that this database only stores URLs, not the scraped data itself, so do not confuse it with the mongodb or mysql instances mentioned later), and a separate list key is created for each type of site to be crawled. scrapy-redis on the slaves is then configured to fetch URLs from the master's address. The result is that no matter how many slaves there are, they all get their URLs from one single place: the redis database on the master.\n\nMoreover, thanks to the queue mechanism built into scrapy-redis, the URLs fetched by different slaves never conflict. Each slave finishes its crawling tasks and then sends its results back to the server (at this point the storage is no longer redis, but mongodb, mysql, or whatever database holds the actual content).\n\nAnother benefit of this approach is portability: as long as paths are handled properly, moving the slave program to another machine is basically a copy-and-paste job.\n\n## 1.2 Generating URLs\nFirst, be clear about one point: URLs are generated on the master, not on the slaves.\n\nFor each category of URLs (each category corresponds to a redis key holding a list of URLs), a separate URL-generation script can be written. The script's job is simple: build URLs in the required format and push them into redis.\n\nOn the slave side, scrapy can be configured through its settings not to shut down when crawling finishes, but to keep asking the queue whether there are new URLs and to fetch and crawl them as they appear. Using this behaviour, the slaves' crawling can be driven simply by controlling how URLs are generated.\n\n## 1.3 URL processing\n1. Check which domain a URL points to; if it points to an external site, discard it.\n2. De-duplicate URLs, then store them in redis and in the database.\n\n# 2 Content Crawling\n## 2.1 Scheduled crawling\nWith the above in place, scheduled crawling becomes simple: just run the URL-generation scripts on a schedule. On Linux, crontab is a very convenient way to define such scheduled jobs; see its documentation for details. A minimal sketch of such a script follows.\n\n
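This sketch pushes start URLs into the master's redis list; the connection details, key name, and URL pattern are placeholders, and the redis-py client is used for illustration:\n\n```\n# generate_urls.py (sketch): run periodically (e.g. from crontab) on the master\n# to push freshly built start URLs into the redis list that the slaves read from.\nimport redis\n\nr = redis.StrictRedis(host=\"127.0.0.1\", port=6379, db=0)  # the master's redis (placeholder)\n\ndef push_list_pages(first_page, last_page):\n    # build list-page URLs in the required format and queue them for the slaves\n    for page in range(first_page, last_page + 1):\n        url = \"http://www.example.com/list?page=%d\" % page  # placeholder URL pattern\n        r.lpush(\"example:start_urls\", url)\n\nif __name__ == \"__main__\":\n    push_list_pages(1, 10)\n```\n\nA crontab entry such as 0 * * * * /usr/local/bin/python3.6 /path/to/generate_urls.py (path assumed) would then top the queue up once an hour.\n\n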
# 3 Content Updating\n## 3.1 Table design\n    Post crawling table:\n    id          : auto-increment primary key\n    md5_url     : MD5 hash of the URL\n    url         : target URL to crawl\n    title       : crawled article title\n    content     : crawled article content (already processed)\n    user_id     : ID of the randomly chosen posting user\n    spider_name : spider name\n    site        : crawled domain\n    gid         : ID of the group the post is fed into\n    module      :\n    status      : status (1: crawled; 0: not crawled)\n    use_time    : crawl time\n    create_time : creation time\n    CREATE TABLE `NewTable` (\n        `id`  bigint(20) NOT NULL AUTO_INCREMENT ,\n        `md5_url`  varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `url`  varchar(200) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `title`  varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `content`  mediumtext CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `user_id`  varchar(30) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `spider_name`  varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `site`  varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `gid`  varchar(10) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `module`  varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,\n        `status`  tinyint(4) NOT NULL DEFAULT 0 ,\n        `use_time`  datetime NOT NULL ,\n        `create_time`  datetime NOT NULL ,\n    PRIMARY KEY (`id`)\n    )\n    ENGINE=InnoDB\n    DEFAULT CHARACTER SET=utf8 COLLATE=utf8_general_ci\n    AUTO_INCREMENT=4120\n    ROW_FORMAT=COMPACT;\n\n# 4 System Optimization\n## 4.1 Countermeasures against anti-crawling\n* Set download_delay. This is close to a universal answer: in theory, as long as the delay is long enough, the site cannot tell whether you are a normal visitor or a crawler. The obvious side effect is a large drop in crawling efficiency, so several rounds of testing are usually needed to find a suitable value; download_delay can also be set to a randomized range.\n* Generate a random User-agent. Changing the User-agent avoids some 403 and 400 errors and is something almost every crawler does. The scrapy middleware can be overridden so that every request picks a random User-agent, which makes the crawler less conspicuous. A concrete implementation can be found at http://www.sharejs.com/codes/python/8310\n* Use a pool of proxy IPs. There are many free and paid proxy pools online that can serve as intermediaries for crawling. One problem is that their speed is not guaranteed; another is that many of these proxies may simply not work. If you take this route, a more reliable approach is to first filter the usable proxies with a script and then pick from them randomly or in order.\n* Set the Domain and Host request headers properly. Some sites, such as Xueqiu, use these two headers to decide where a request comes from, so they deserve attention as well.\n\n## 4.2 Programmatic and web-based management\nThe approach above covers the whole workflow, but operating it by hand is still fairly cumbersome. If possible, a web server can also be set up so that URLs are added and crawler status is monitored through a web front end, which saves a great deal of work. Expanding on this would take too much space, so it is only mentioned here.\n\n# 5 Scrapy Deployment\n## 5.1 Install Python 3.6\n\t1. Download the source code\n\t\twget https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tgz\n\t2. Extract the archive\n\t\tcp Python-3.6.1.tgz /usr/local/goldmine/\n\t\ttar -xvf Python-3.6.1.tgz\n\t3. Configure\n\t\t./configure --prefix=/usr/local\n\t4. Install\n\t\tmake && make altinstall\n\t\tNote: make altinstall is used here. With make install there would be two Python versions under /usr/bin/, which can cause problems.\n\t4.1 Error: zipimport.ZipImportError: can't decompress data; zlib not available\n\t\t# http://www.zlib.net/zlib-1.2.11.tar\n\t\t=============================================\n\t\tAs the root user:\n\t\twget http://www.zlib.net/zlib-1.2.11.tar\n\t\ttar -xvf zlib-1.2.11.tar.gz\n\t\tcd zlib-1.2.11\n\t\t./configure\n\t\tmake\n\t\tsudo make install\n\t\t=============================================\n\t\tAfter installing zlib, rerun make && make altinstall in the Python-3.6.1 directory and the installation will succeed.\n\n## 5.2 Install a virtual environment on the server (as root)\nInstalling virtualenv makes it possible to build isolated Python environments, so that each project's environment is independent of the others, stays clean, and avoids package conflicts.\n\n### 5.2.1 Install virtualenv\n\t/usr/local/bin/pip3.6 install virtualenv\n\n\tThis failed with:\n\t===============\n\tpip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.\n\tCollecting virtualenv\n\tCould not fetch URL https://pypi.python.org/simple/virtualenv/: There was a problem confirming the ssl certificate: Can't connect to HTTPS URL because the SSL module is not available. - skipping\n\t===============\n\trpm -aq | grep openssl showed that openssl-devel was missing;\n\t[route add default gw 192.168.1.219]\n\tyum install openssl-devel -y\n\tThen recompile Python as described in 5.1.\n### 5.2.2 Create a new virtual environment\n\tvirtualenv -p /usr/local/bin/python3.6 python3.6-env\n\n### 5.2.3 Activate the virtual environment\n\tsource python3.6-env/bin/activate\n\n### 5.2.4 Leave the virtual environment\n\tdeactivate\n\n## 5.3 Install Scrapy\n\n## 5.4 Install and configure redis\n\tyum install redis\n\n# 6 Redis Installation & Configuration\n## 6.1 Installation\n\tmac: sudo brew install redis\n\t/usr/local/bin/redis-server /usr/local/etc/redis.conf\n\n# References\n* 1.[基于Python，scrapy，redis的分布式爬虫实现框架](http://ju.outofmemory.cn/entry/206756)\n* 2.[小白进阶之Scrapy第三篇（基于Scrapy-Redis的分布式以及cookies池）](http://ju.outofmemory.cn/entry/299500)\n* 3.[CentOS中使用virtualenv搭建python3环境](http://www.jb51.net/article/67393.htm)\n* 4.[CentOS使用virtualenv搭建独立的Python环境](http://www.51ou.com/browse/linuxwt/60216.html)\n* 5.[python虚拟环境安装和配置](http://blog.csdn.net/pipisorry/article/details/39998317)\n"
  },
  {
    "path": "SpiderKeeper.py",
    "content": "# -*- coding: utf-8 -*-\r\n\r\nimport time\r\nimport threading\r\nfrom scrapy import cmdline\r\n\r\n# def ylbg():\r\n#     print(\">> thread.staring ylbg ...\")\r\n#     cmdline.execute(\"scrapy crawl UrlSpider_YLBG\".split())\r\n#     print(\">> thread.ending ylbg ...\")\r\n#\r\n# def sydw():\r\n#     print(\">> thread.starting sydw ...\")\r\n#     cmdline.execute(\"scrapy crawl UrlSpider_SYDW\".split())\r\n#     print(\">> thread.ending sydw ...\")\r\n#\r\n# threading._start_new_thread(ylbg())\r\n# threading._start_new_thread(sydw())\r\n\r\n# 配置 commands ,执行 scrapy list 下的所有spider\r\ncmdline.execute(\"scrapy crawlall\".split())\r\n\r\n\r\n\r\n"
  },
  {
    "path": "commands/crawlall.py",
    "content": "from scrapy.commands import ScrapyCommand\r\nfrom scrapy.crawler import CrawlerRunner\r\nfrom scrapy.utils.conf import arglist_to_dict\r\n\r\n\r\nclass Command(ScrapyCommand):\r\n\r\n    requires_project = True\r\n\r\n    def syntax(self):\r\n        return '[options]'\r\n\r\n    def short_desc(self):\r\n        return 'Runs all of the spiders'\r\n\r\n    def add_options(self, parser):\r\n        ScrapyCommand.add_options(self, parser)\r\n        parser.add_option(\"-a\", dest=\"spargs\", action=\"append\", default=[], metavar=\"NAME=VALUE\",\r\n                          help=\"set spider argument (may be repeated)\")\r\n        parser.add_option(\"-o\", \"--output\", metavar=\"FILE\", help=\"dump scraped items into FILE (use - for stdout)\")\r\n        parser.add_option(\"-t\", \"--output-format\", metavar=\"FORMAT\", help=\"format to use for dumping items with -o\")\r\n\r\n    def process_options(self, args, opts):\r\n        ScrapyCommand.process_options(self, args, opts)\r\n        # try:\r\n        opts.spargs = arglist_to_dict(opts.spargs)\r\n        # except ValueError:\r\n        #     raise UsageError(\"Invalid -a value, use -a NAME=VALUE\", print_help=False)\r\n\r\n    def run(self, args, opts):\r\n        # settings = get_project_settings()\r\n\r\n        spider_loader = self.crawler_process.spider_loader\r\n        for spidername in args or spider_loader.list():\r\n            print(\"*********cralall spidername************\" + spidername)\r\n            self.crawler_process.crawl(spidername, **opts.spargs)\r\n        self.crawler_process.start()\r\n"
  },
  {
    "path": "commonUtils.py",
    "content": "import random\r\nimport time\r\nimport datetime\r\nfrom hashlib import md5\r\n\r\n\r\n# 获取随机发帖ID\r\ndef get_random_user(user_str):\r\n    user_list = []\r\n    for user_id in str(user_str).split(','):\r\n        user_list.append(user_id)\r\n    userid_idx = random.randint(1, len(user_list))\r\n    user_chooesd = user_list[userid_idx-1]\r\n    return user_chooesd\r\n\r\n\r\n# 获取MD5加密URL\r\ndef get_linkmd5id(url):\r\n    # url进行md5处理，为避免重复采集设计\r\n    md5_url = md5(url.encode(\"utf8\")).hexdigest()\r\n    return md5_url\r\n\r\n\r\n# get unix time stamp\r\ndef get_time_stamp():\r\n    create_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\r\n    time_array = time.strptime(create_time, \"%Y-%m-%d %H:%M:%S\")\r\n    time_stamp = int(time.mktime(time_array))\r\n    return time_stamp\r\n\r\n"
  },
  {
    "path": "ghostdriver.log",
    "content": "[INFO  - 2017-06-28T00:22:35.372Z] GhostDriver - Main - running on port 9643\r\n[INFO  - 2017-06-28T00:22:38.400Z] Session [e424dd60-5b97-11e7-a0fa-fbfe1e4d560f] - page.settings - {\"XSSAuditingEnabled\":false,\"javascriptCanCloseWindows\":true,\"javascriptCanOpenWindows\":true,\"javascriptEnabled\":true,\"loadImages\":false,\"localToRemoteUrlAccessEnabled\":false,\"userAgent\":\"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)\",\"webSecurityEnabled\":true}\r\n[INFO  - 2017-06-28T00:22:38.400Z] Session [e424dd60-5b97-11e7-a0fa-fbfe1e4d560f] - page.customHeaders:  - {}\r\n[INFO  - 2017-06-28T00:22:38.400Z] Session [e424dd60-5b97-11e7-a0fa-fbfe1e4d560f] - Session.negotiatedCapabilities - {\"browserName\":\"phantomjs\",\"version\":\"2.1.1\",\"driverName\":\"ghostdriver\",\"driverVersion\":\"1.2.0\",\"platform\":\"windows-7-32bit\",\"javascriptEnabled\":true,\"takesScreenshot\":true,\"handlesAlerts\":false,\"databaseEnabled\":false,\"locationContextEnabled\":false,\"applicationCacheEnabled\":false,\"browserConnectionEnabled\":false,\"cssSelectorsEnabled\":true,\"webStorageEnabled\":false,\"rotatable\":false,\"acceptSslCerts\":false,\"nativeEvents\":true,\"proxy\":{\"proxyType\":\"direct\"},\"phantomjs.page.settings.userAgent\":\"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)\",\"phantomjs.page.settings.loadImages\":false}\r\n[INFO  - 2017-06-28T00:22:38.400Z] SessionManagerReqHand - _postNewSessionCommand - New Session Created: e424dd60-5b97-11e7-a0fa-fbfe1e4d560f\r\n[ERROR - 2017-06-28T00:22:38.410Z] RouterReqHand - _handle.error - {\"name\":\"Missing Command Parameter\",\"message\":\"{\\\"headers\\\":{\\\"Accept\\\":\\\"application/json\\\",\\\"Accept-Encoding\\\":\\\"identity\\\",\\\"Connection\\\":\\\"close\\\",\\\"Content-Length\\\":\\\"73\\\",\\\"Content-Type\\\":\\\"application/json;charset=UTF-8\\\",\\\"Host\\\":\\\"127.0.0.1:9643\\\",\\\"User-Agent\\\":\\\"Python http auth\\\"},\\\"httpVersion\\\":\\\"1.1\\\",\\\"method\\\":\\\"POST\\\",\\\"post\\\":\\\"{\\\\\\\"sessionId\\\\\\\": \\\\\\\"e424dd60-5b97-11e7-a0fa-fbfe1e4d560f\\\\\\\", \\\\\\\"pageLoad\\\\\\\": 180000}\\\",\\\"url\\\":\\\"/timeouts\\\",\\\"urlParsed\\\":{\\\"anchor\\\":\\\"\\\",\\\"query\\\":\\\"\\\",\\\"file\\\":\\\"timeouts\\\",\\\"directory\\\":\\\"/\\\",\\\"path\\\":\\\"/timeouts\\\",\\\"relative\\\":\\\"/timeouts\\\",\\\"port\\\":\\\"\\\",\\\"host\\\":\\\"\\\",\\\"password\\\":\\\"\\\",\\\"user\\\":\\\"\\\",\\\"userInfo\\\":\\\"\\\",\\\"authority\\\":\\\"\\\",\\\"protocol\\\":\\\"\\\",\\\"source\\\":\\\"/timeouts\\\",\\\"queryKey\\\":{},\\\"chunks\\\":[\\\"timeouts\\\"]},\\\"urlOriginal\\\":\\\"/session/e424dd60-5b97-11e7-a0fa-fbfe1e4d560f/timeouts\\\"}\",\"line\":546,\"sourceURL\":\"phantomjs://code/session_request_handler.js\",\"stack\":\"_postTimeout@phantomjs://code/session_request_handler.js:546:73\\n_handle@phantomjs://code/session_request_handler.js:148:25\\n_reroute@phantomjs://code/request_handler.js:61:20\\n_handle@phantomjs://code/router_request_handler.js:78:46\"}\r\n\r\n  phantomjs://platform/console++.js:263 in error\r\n[INFO  - 2017-06-28T00:27:35.412Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T00:32:35.411Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 
2017-06-28T00:37:35.416Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T00:42:35.418Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T00:47:35.418Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T00:52:35.423Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T00:57:35.423Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:02:35.427Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:07:35.431Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:12:35.470Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:17:35.469Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:22:35.469Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:27:35.477Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\nessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:29:06.882Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n 2017-06-28T01:18:20.002Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:23:20.005Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:28:20.013Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n2017-06-28T01:18:06.690Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:23:06.726Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n[INFO  - 2017-06-28T01:28:06.738Z] SessionManagerReqHand - _cleanupWindowlessSessions - Asynchronous Sessions clean-up phase starting NOW\r\n"
  },
  {
    "path": "items.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Define here the models for your scraped items\n#\n# See documentation in:\n# http://doc.scrapy.org/en/latest/topics/items.html\n\n\nimport scrapy\n\n\nclass DgspiderUrlItem(scrapy.Item):\n    url = scrapy.Field()\n\n\nclass DgspiderPostItem(scrapy.Item):\n    url = scrapy.Field()\n    title = scrapy.Field()\n    text = scrapy.Field()"
  },
  {
    "path": "middlewares/middleware.py",
    "content": "# douguo request middleware\r\n# for the page which loaded by js/ajax\r\n# ang changes should be recored here:\r\n#\r\n# @author zhangjianfei\r\n# @date 2017/05/04\r\n\r\nfrom selenium import webdriver\r\nfrom scrapy.http import HtmlResponse\r\nfrom selenium.webdriver.common.desired_capabilities import DesiredCapabilities\r\nfrom DgSpiderPhantomJS import urlSettings\r\nimport time\r\nimport datetime\r\nimport random\r\nimport os\r\nimport execjs\r\nimport DgSpiderPhantomJS.settings as settings\r\n\r\n\r\nclass JavaScriptMiddleware(object):\r\n\r\n    def process_request(self, request, spider):\r\n\r\n        print(\"LOGS: Spider name in middleware - \" + spider.name)\r\n\r\n        # 开启虚拟浏览器参数\r\n        dcap = dict(DesiredCapabilities.PHANTOMJS)\r\n\r\n        # 设置agents\r\n        dcap[\"phantomjs.page.settings.userAgent\"] = (random.choice(settings.USER_AGENTS))\r\n\r\n        # 禁止加载图片\r\n        dcap[\"phantomjs.page.settings.loadImages\"] = False\r\n\r\n        driver = webdriver.PhantomJS(executable_path=r\"D:\\phantomjs-2.1.1\\bin\\phantomjs.exe\", desired_capabilities=dcap)\r\n\r\n        # 由于phantomjs路径已经增添在path中，path可以不写\r\n        # driver = webdriver.PhantomJS()\r\n\r\n        # 利用firfox\r\n        # driver = webdriver.Firefox(executable_path=r\"D:\\FireFoxBrowser\\firefox.exe\")\r\n\r\n        # 利用chrome\r\n        # chromedriver = \"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chromedriver.exe\"\r\n        # os.environ[\"webdriver.chrome.driver\"] = chromedriver\r\n        # driver = webdriver.Chrome(chromedriver)\r\n\r\n        # 模拟登陆\r\n        # driver.find_element_by_class_name(\"input_id\").send_keys(\"34563453\")\r\n        # driver.find_element_by_class_name(\"input_pwd\").send_keys(\"zjf%#￥&\")\r\n        # driver.find_element_by_class_name(\"btn btn_lightgreen btn_login\").click()\r\n        # driver.implicitly_wait(15)\r\n        # time.sleep(10)\r\n\r\n        # 模拟用户下拉\r\n        # js1 = 'return document.body.scrollHeight'\r\n        # js2 = 'window.scrollTo(0, document.body.scrollHeight)'\r\n        # js3 = \"document.body.scrollTop=1000\"\r\n        # old_scroll_height = 0\r\n        # while driver.execute_script(js1) > old_scroll_height:\r\n        #     old_scroll_height = driver.execute_script(js1)\r\n        #     driver.execute_script(js2)\r\n        #     time.sleep(3)\r\n\r\n        # 设置20秒页面超时返回\r\n        driver.set_page_load_timeout(180)\r\n        # 设置20秒脚本超时时间\r\n        driver.set_script_timeout(180)\r\n\r\n        # get time stamp\r\n\r\n        # get page screenshot\r\n        # driver.save_screenshot(\"D:\\p.jpg\")\r\n\r\n        # 模拟用户在同一个浏览器对象下刷新页面\r\n        # the whole page source\r\n        body = ''\r\n        for i in range(50):\r\n            print(\"SPider name: \" + spider.name)\r\n            # sleep in a random time for the ajax asynchronous request\r\n            # time.sleep(random.randint(5, 6))\r\n            time.sleep(random.randint(300, 600))\r\n\r\n            print(\"LOGS: freshing page \" + str(i) + \"...\")\r\n\r\n            # get page request\r\n            driver.get(request.url)\r\n\r\n            # waiting for response\r\n            driver.implicitly_wait(30)\r\n\r\n            # get page resource\r\n            body = body + driver.page_source\r\n\r\n        return HtmlResponse(driver.current_url, body=body, encoding='utf-8', request=request)\r\n\r\n\r\n"
  },
  {
    "path": "middlewares.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Define here the models for your spider middleware\n#\n# See documentation in:\n# http://doc.scrapy.org/en/latest/topics/spider-middleware.html\n\nfrom scrapy import signals\n\n\nclass DgspiderphantomjsSpiderMiddleware(object):\n    # Not all methods need to be defined. If a method is not defined,\n    # scrapy acts as if the spider middleware does not modify the\n    # passed objects.\n\n    @classmethod\n    def from_crawler(cls, crawler):\n        # This method is used by Scrapy to create your spiders.\n        s = cls()\n        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)\n        return s\n\n    def process_spider_input(response, spider):\n        # Called for each response that goes through the spider\n        # middleware and into the spider.\n\n        # Should return None or raise an exception.\n        return None\n\n    def process_spider_output(response, result, spider):\n        # Called with the results returned from the Spider, after\n        # it has processed the response.\n\n        # Must return an iterable of Request, dict or Item objects.\n        for i in result:\n            yield i\n\n    def process_spider_exception(response, exception, spider):\n        # Called when a spider or process_spider_input() method\n        # (from other spider middleware) raises an exception.\n\n        # Should return either None or an iterable of Response, dict\n        # or Item objects.\n        pass\n\n    def process_start_requests(start_requests, spider):\n        # Called with the start requests of the spider, and works\n        # similarly to the process_spider_output() method, except\n        # that it doesn’t have a response associated.\n\n        # Must return only requests (not items).\n        for r in start_requests:\n            yield r\n\n    def spider_opened(self, spider):\n        spider.logger.info('Spider opened: %s' % spider.name)\n"
  },
  {
    "path": "mysqlUtils.py",
    "content": "import pymysql\r\nimport pymysql.cursors\r\nimport os\r\n\r\n\r\ndef dbhandle_online():\r\n    host = '192.168.1.235'\r\n    user = 'root'\r\n    passwd = 'douguo2015'\r\n    charset = 'utf8'\r\n    conn = pymysql.connect(\r\n        host=host,\r\n        user=user,\r\n        passwd=passwd,\r\n        charset=charset,\r\n        use_unicode=False\r\n    )\r\n    return conn\r\n\r\n\r\ndef dbhandle_local():\r\n    host = '192.168.1.235'\r\n    user = 'root'\r\n    passwd = 'douguo2015'\r\n    charset = 'utf8'\r\n    conn = pymysql.connect(\r\n        host=host,\r\n        user=user,\r\n        passwd=passwd,\r\n        charset=charset,\r\n        use_unicode=True\r\n        # use_unicode=False\r\n    )\r\n    return conn\r\n\r\n\r\ndef dbhandle_geturl(gid):\r\n    host = '192.168.1.235'\r\n    user = 'root'\r\n    passwd = 'douguo2015'\r\n    charset = 'utf8'\r\n    conn = pymysql.connect(\r\n        host=host,\r\n        user=user,\r\n        passwd=passwd,\r\n        charset=charset,\r\n        use_unicode=False\r\n    )\r\n    cursor = conn.cursor()\r\n    sql = 'select url,spider_name,site,gid,module from dg_spider.dg_spider_post where status=0 and gid=%s limit 1' % gid\r\n    try:\r\n        cursor.execute(sql)\r\n        result = cursor.fetchone()\r\n        conn.commit()\r\n    except Exception as e:\r\n        print(\"***** exception\")\r\n        print(e)\r\n        conn.rollback()\r\n\r\n    if result is None:\r\n        os._exit(0)\r\n    else:\r\n        url = result[0]\r\n        spider_name = result[1]\r\n        site = result[2]\r\n        gid = result[3]\r\n        module = result[4]\r\n        return url.decode(), spider_name.decode(), site.decode(), gid.decode(), module.decode()\r\n\r\n\r\ndef dbhandle_insert_content(url, title, content, user_id, has_img):\r\n    host = '192.168.1.235'\r\n    user = 'root'\r\n    passwd = 'douguo2015'\r\n    charset = 'utf8'\r\n    conn = pymysql.connect(\r\n        host=host,\r\n        user=user,\r\n        passwd=passwd,\r\n        charset=charset,\r\n        use_unicode=False\r\n    )\r\n    cur = conn.cursor()\r\n\r\n    # 如果标题或者内容为空，那么程序将退出，篇文章将会作废并将status设置为1，爬虫继续向下运行获得新的URl\r\n    if content.strip() == '' or title.strip() == '':\r\n        sql_fail = 'update dg_spider.dg_spider_post set status=\"%s\" where url=\"%s\" ' % ('1', url)\r\n        try:\r\n            cur.execute(sql_fail)\r\n            result = cur.fetchone()\r\n            conn.commit()\r\n        except Exception as e:\r\n            print(e)\r\n            conn.rollback()\r\n        os._exit(0)\r\n\r\n    sql = 'update dg_spider.dg_spider_post set title=\"%s\",content=\"%s\",user_id=\"%s\",has_img=\"%s\" where url=\"%s\" ' \\\r\n          % (title, content, user_id, has_img, url)\r\n\r\n    try:\r\n        cur.execute(sql)\r\n        result = cur.fetchone()\r\n        conn.commit()\r\n    except Exception as e:\r\n        print(e)\r\n        conn.rollback()\r\n    return result\r\n\r\n\r\ndef dbhandle_update_status(url, status):\r\n    host = '192.168.1.235'\r\n    user = 'root'\r\n    passwd = 'douguo2015'\r\n    charset = 'utf8'\r\n    conn = pymysql.connect(\r\n        host=host,\r\n        user=user,\r\n        passwd=passwd,\r\n        charset=charset,\r\n        use_unicode=False\r\n    )\r\n    cur = conn.cursor()\r\n    sql = 'update dg_spider.dg_spider_post set status=\"%s\" where url=\"%s\" ' \\\r\n          % (status, url)\r\n    try:\r\n        cur.execute(sql)\r\n        result = cur.fetchone()\r\n        conn.commit()\r\n    except 
Exception as e:\r\n        print(e)\r\n        conn.rollback()\r\n    return result\r\n\r\n\r\ndef dbhandle_get_content(url):\r\n    host = '192.168.1.235'\r\n    user = 'root'\r\n    passwd = 'douguo2015'\r\n    charset = 'utf8'\r\n    conn = pymysql.connect(\r\n        host=host,\r\n        user=user,\r\n        passwd=passwd,\r\n        charset=charset,\r\n        use_unicode=False\r\n    )\r\n    cursor = conn.cursor()\r\n    sql = 'select title,content,user_id,gid from dg_spider.dg_spider_post where status=1 and url=\"%s\" limit 1' % url\r\n    try:\r\n        cursor.execute(sql)\r\n        result = cursor.fetchone()\r\n        conn.commit()\r\n    except Exception as e:\r\n        print(\"***** exception\")\r\n        print(e)\r\n        conn.rollback()\r\n\r\n    if result is None:\r\n        os._exit(1)\r\n\r\n    title = result[0]\r\n    content = result[1]\r\n    user_id = result[2]\r\n    gid = result[3]\r\n    return title.decode(), content.decode(), user_id.decode(), gid.decode()\r\n\r\n\r\n# 获取爬虫初始化参数\r\ndef dbhandle_get_spider_param(url):\r\n    host = '192.168.1.235'\r\n    user = 'root'\r\n    passwd = 'douguo2015'\r\n    charset = 'utf8'\r\n    conn = pymysql.connect(\r\n        host=host,\r\n        user=user,\r\n        passwd=passwd,\r\n        charset=charset,\r\n        use_unicode=False\r\n    )\r\n    cursor = conn.cursor()\r\n    sql = 'select title,content,user_id,gid from dg_spider.dg_spider_post where status=0 and url=\"%s\" limit 1' % url\r\n    result = ''\r\n    try:\r\n        cursor.execute(sql)\r\n        result = cursor.fetchone()\r\n        conn.commit()\r\n    except Exception as e:\r\n        print(\"***** exception\")\r\n        print(e)\r\n        conn.rollback()\r\n    title = result[0]\r\n    content = result[1]\r\n    user_id = result[2]\r\n    gid = result[3]\r\n    return title.decode(), content.decode(), user_id.decode(), gid.decode()\r\n"
  },
  {
    "path": "notusedspiders/ContentSpider.py",
    "content": "# -*- coding: utf-8 -*-\r\n\r\nimport scrapy\r\nfrom scrapy.selector import Selector\r\n\r\nfrom DgSpiderPhantomJS import urlSettings\r\nfrom DgSpiderPhantomJS.items import DgspiderPostItem\r\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_geturl\r\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_update_status\r\nfrom DgSpiderPhantomJS.notusedspiders import contentSettings\r\n\r\n\r\nclass DgContentSpider(scrapy.Spider):\r\n    print('>>> Spider DgContentPhantomJSSpider Staring  ...')\r\n\r\n    # get url from db\r\n    result = dbhandle_geturl(urlSettings.GROUP_ID)\r\n    url = result[0]\r\n    spider_name = result[1]\r\n    site = result[2]\r\n    gid = result[3]\r\n    module = result[4]\r\n\r\n    # set spider name\r\n    name = contentSettings.SPIDER_NAME\r\n    # name = 'DgUrlSpiderPhantomJS'\r\n\r\n    # set domains\r\n    allowed_domains = [contentSettings.DOMAIN]\r\n\r\n    # set scrapy url\r\n    start_urls = [url]\r\n\r\n    # change status\r\n    \"\"\"对于爬去网页，无论是否爬取成功都将设置status为1，避免死循环\"\"\"\r\n    dbhandle_update_status(url, 1)\r\n\r\n    # scrapy crawl\r\n    def parse(self, response):\r\n\r\n        # init the item\r\n        item = DgspiderPostItem()\r\n\r\n        # get the page source\r\n        sel = Selector(response)\r\n\r\n        print(sel)\r\n\r\n        # get post title\r\n        title_date = sel.xpath(contentSettings.POST_TITLE_XPATH)\r\n        item['title'] = title_date.xpath('string(.)').extract()\r\n\r\n        # get post page source\r\n        item['text'] = sel.xpath(contentSettings.POST_CONTENT_XPATH).extract()\r\n\r\n        # get url\r\n        item['url'] = DgContentSpider.url\r\n\r\n        yield item\r\n\r\n"
  },
  {
    "path": "notusedspiders/ContentSpider_real.py",
    "content": "# -*- coding: utf-8 -*-\r\n\r\nimport scrapy\r\nfrom scrapy.selector import Selector\r\n\r\nfrom DgSpiderPhantomJS import urlSettings\r\nfrom DgSpiderPhantomJS.items import DgspiderPostItem\r\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_geturl\r\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_update_status\r\nfrom DgSpiderPhantomJS.notusedspiders import contentSettings\r\n\r\n\r\nclass DgContentSpider(scrapy.Spider):\r\n    print('LOGS: Spider DgContentPhantomSpider Staring  ...')\r\n\r\n    # get url from db\r\n    result = dbhandle_geturl(urlSettings.GROUP_ID)\r\n    url = result[0]\r\n    spider_name = result[1]\r\n    site = result[2]\r\n    gid = result[3]\r\n    module = result[4]\r\n\r\n    # set spider name\r\n    name = contentSettings.SPIDER_NAME\r\n    # name = 'DgUrlSpiderPhantomJS'\r\n\r\n    # set domains\r\n    allowed_domains = [contentSettings.DOMAIN]\r\n\r\n    # set scrapy url\r\n    start_urls = [url]\r\n\r\n    # change status\r\n    \"\"\"对于爬去网页，无论是否爬取成功都将设置status为1，避免死循环\"\"\"\r\n    dbhandle_update_status(url, 1)\r\n\r\n    # scrapy crawl\r\n    def parse(self, response):\r\n\r\n        # init the item\r\n        item = DgspiderPostItem()\r\n\r\n        # get the page source\r\n        sel = Selector(response)\r\n\r\n        print(sel)\r\n\r\n        # get post title\r\n        title_date = sel.xpath(contentSettings.POST_TITLE_XPATH)\r\n        item['title'] = title_date.xpath('string(.)').extract()\r\n\r\n        # get post page source\r\n        item['text'] = sel.xpath(contentSettings.POST_CONTENT_XPATH).extract()\r\n\r\n        # get url\r\n        item['url'] = DgContentSpider.url\r\n\r\n        yield item\r\n\r\n"
  },
  {
    "path": "notusedspiders/DgContentSpider_PhantomJS.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport scrapy\nfrom scrapy.selector import Selector\n\nfrom DgSpiderPhantomJS import urlSettings\nfrom DgSpiderPhantomJS.items import DgspiderPostItem\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_geturl\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_update_status\nfrom DgSpiderPhantomJS.notusedspiders import contentSettings\n\n\nclass DgcontentspiderPhantomjsSpider(scrapy.Spider):\n    print('>>> Spider DgContentPhantomJSSpider Staring  ...')\n\n    # get url from db\n    result = dbhandle_geturl(urlSettings.GROUP_ID)\n    url = result[0]\n    spider_name = result[1]\n    site = result[2]\n    gid = result[3]\n    module = result[4]\n\n    # set spider name\n    name = contentSettings.SPIDER_NAME\n    # name = 'DgUrlSpiderPhantomJS'\n\n    # set domains\n    allowed_domains = [contentSettings.DOMAIN]\n\n    # set scrapy url\n    start_urls = [url]\n\n    # change status\n    \"\"\"对于爬去网页，无论是否爬取成功都将设置status为1，避免死循环\"\"\"\n    dbhandle_update_status(url, 1)\n\n    # scrapy crawl\n    def parse(self, response):\n\n        # init the item\n        item = DgspiderPostItem()\n\n        # get the page source\n        sel = Selector(response)\n\n        print(sel)\n\n        # get post title\n        title_date = sel.xpath(contentSettings.POST_TITLE_XPATH)\n        item['title'] = title_date.xpath('string(.)').extract()\n\n        # get post page source\n        item['text'] = sel.xpath(contentSettings.POST_CONTENT_XPATH).extract()\n\n        # get url\n        item['url'] = self.url\n\n        yield item\n\n"
  },
  {
    "path": "notusedspiders/DgUrlSpider_PhantomJS.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport scrapy\nfrom DgSpiderPhantomJS.items import DgspiderUrlItem\nfrom scrapy.selector import Selector\nfrom DgSpiderPhantomJS import urlSettings\n\n\nclass DgurlspiderPhantomjsSpider(scrapy.Spider):\n    print('>>> Spider DgUrlPhantomJSSpider Staring  ...')\n\n    # set your spider name\n    # name = urlSettings.SPIDER_NAME\n    name = urlSettings.SPIDER_NAME\n\n    # set your allowed domain\n    allowed_domains = [urlSettings.DOMAIN]\n\n    # set spider start url\n    start_urls = [urlSettings.URL_START]\n\n    # scrapy crawl\n    def parse(self, response):\n\n        # init the item\n        item = DgspiderUrlItem()\n\n        # get the page source\n        sel = Selector(response)\n\n        # page_source = self.page\n        url_list = sel.xpath(urlSettings.POST_URL_PHANTOMJS_XPATH).extract()\n\n        # if the url you got had some prefix, it will works, such as 'http://'\n        url_item = []\n        for url in url_list:\n            url = url.replace(urlSettings.URL_PREFIX, '')\n            url_item.append(urlSettings.URL_PREFIX + url)\n\n        # use set to del repeated urls\n        url_item = list(set(url_item))\n\n        item['url'] = url_item\n\n        yield item\n\n"
  },
  {
    "path": "notusedspiders/PostHandle.py",
    "content": "# -*- coding: utf-8 -*-\r\n\r\nimport json\r\n\r\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_get_content\r\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_update_status\r\nfrom DgSpiderPhantomJS.notusedspiders.uploadUtils import upload_post\r\n\r\n\r\ndef post_handel(url):\r\n    result = dbhandle_get_content(url)\r\n\r\n    title = result[0]\r\n    content = result[1]\r\n    user_id = result[2]\r\n    gid = result[3]\r\n    cs = []\r\n\r\n    text_list = content.split('[dgimg]')\r\n    for text_single in text_list:\r\n        text_single_c = text_single.split('[/dgimg]')\r\n        if len(text_single_c) == 1:\r\n            cs_json = {\"c\": text_single_c[0], \"i\": '', \"w\": '', \"h\": ''}\r\n            cs.append(cs_json)\r\n        else:\r\n            # tmp_img_upload_json = upload_img_result.pop()\r\n            pic_flag = text_single_c[1]\r\n            img_params = text_single_c[0].split(';')\r\n            i = img_params[0]\r\n            w = img_params[1]\r\n            h = img_params[2]\r\n            cs_json = {\"c\": pic_flag, \"i\": i, \"w\": w, \"h\": h}\r\n            cs.append(cs_json)\r\n\r\n    strcs = json.dumps(cs)\r\n    json_data = {\"apisign\": \"99ea3eda4b45549162c4a741d58baa60\",\r\n                 \"user_id\": user_id,\r\n                 \"gid\": gid,\r\n                 \"t\": title,\r\n                 \"cs\": strcs}\r\n    # 上传帖子\r\n    result_uploadpost = upload_post(json_data)\r\n\r\n    # 更新状态2，成功上传帖子\r\n    result_updateresult = dbhandle_update_status(url, 2)\r\n#\r\n# if __name__ == '__main__':\r\n#     post_handel('http://www.mama.cn/baby/art/20140523/773474.html')\r\n"
  },
  {
    "path": "notusedspiders/UrlSpider.py",
    "content": "# -*- coding: utf-8 -*-\r\n\r\nimport scrapy\r\nfrom scrapy.selector import Selector\r\n\r\nfrom DgSpiderPhantomJS import urlSettings\r\nfrom DgSpiderPhantomJS.items import DgspiderUrlItem\r\nfrom DgSpiderPhantomJS.notusedspiders import contentSettings\r\n\r\n\r\nclass DgUrlSpider(scrapy.Spider):\r\n\r\n    print('LOGS: Spider DgUrlPhantomSpider Staring  ...')\r\n\r\n    # set your spider name\r\n    name = contentSettings.SPIDER_NAME\r\n\r\n    # set your allowed domain\r\n    allowed_domains = [urlSettings.DOMAIN]\r\n\r\n    # set spider start url\r\n    start_urls = [urlSettings.URL_START_JFSS]\r\n\r\n    # scrapy crawl\r\n    def parse(self, response):\r\n\r\n        # init the item\r\n        item = DgspiderUrlItem()\r\n\r\n        # get the page source\r\n        sel = Selector(response)\r\n\r\n        # page_source = self.page\r\n        url_list = sel.xpath(urlSettings.POST_URL_PHANTOMJS_XPATH).extract()\r\n\r\n        # if the url you got had some prefix, it will works, such as 'http://'\r\n        url_item = []\r\n        for url in url_list:\r\n            url = url.replace(urlSettings.URL_PREFIX, '')\r\n            url_item.append(urlSettings.URL_PREFIX + url)\r\n\r\n        # use set to del repeated urls\r\n        url_item = list(set(url_item))\r\n\r\n        item['url'] = url_item\r\n\r\n        # transer item to pipeline\r\n        yield item\r\n\r\n        # for i in range(5):\r\n        #     yield Request(self.start_urls[0], callback=self.parse)\r\n"
  },
  {
    "path": "notusedspiders/check_post.py",
    "content": "import requests, re\r\nimport http\r\nimport urllib\r\n\r\n# 圈圈：孕妈育儿 4\r\n# 圈圈：减肥瘦身 33\r\n# 圈圈：情感生活 30\r\n\r\n\r\ndef checkPost():\r\n    # CREATE_POST_URL = \"http://api.qa.douguo.net/robot/handlePost\"\r\n    CREATE_POST_URL = \"http://api.douguo.net/robot/handlePost\"\r\n\r\n    fields={'group_id': '35',\r\n            'type': 1,\r\n            'apisign':'99ea3eda4b45549162c4a741d58baa60'}\r\n\r\n    r = requests.post(CREATE_POST_URL, data=fields)\r\n\r\n    print(r.json())\r\n\r\n\r\nif __name__ == '__main__':\r\n    #for i in range(1,50):\r\n    #checkPost()\r\n    checkPost()\r\n    #    print(i),\r\n    #print(testText('aaaa\\001'))"
  },
  {
    "path": "notusedspiders/contentSettings.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Scrapy settings for DgSpider project\n\n# 图片储存\nIMAGES_STORE = 'D:\\\\pics\\\\jfss\\\\'\n\n# 爬取域名\nDOMAIN = 'toutiao.com'\n\n# 图片域名前缀\nDOMAIN_HTTP = \"http:\"\n\n# 随机发帖用户\nCREATE_POST_USER = '37619,18441390,18441391,18441392,18441393,18441394,18441395,18441396,18441397,18441398,18441399,'\\\n                   '18441400,18441401,18441402,18441403,18441404, 18441405,18441406,18441407,18441408,18441409,' \\\n                   '18441410,18441411,18441412,18441413,18441414,18441415,18441416,18441417,18441418,18441419,' \\\n                   '18441420,18441421,18441422,18441423,18441424,18441425,18441426,18441427,18441428,18441429,' \\\n                   '18441430,18441431,18441432,18441433,18441434,18441435,18441436,18441437,18441438,18441439,' \\\n                   '18441440,18441441,18441442,18441443,18441444,18441445,18441446,18441447,18441448,18441449,' \\\n                   '18441450,18441451,18441452,18441453,18441454,18441455,18441456,18441457,18441458,18441460,' \\\n                   '18441461,18441462,18441463,18441464,18441465,18441466,18441467,18441468,18441469,18441470,' \\\n                   '18441471,18441472,18441473,18441474,18441475,18441476,18441477,18441478,18441479,18441481,' \\\n                   '18441482,18441483,18441484,18441485,18441486,18441487,18441488,18441489,18441490'\n\n# 爬虫名\nSPIDER_NAME = 'DgContentSpider_PhantomJS'\n\n# 文章URL爬取规则XPATH\nPOST_TITLE_XPATH = '//h1[@class=\"article-title\"]'\nPOST_CONTENT_XPATH = '//div[@class=\"article-content\"]'\n\n"
  },
  {
    "path": "notusedspiders/params.js",
    "content": "function getParam(){\r\n    var asas;\r\n    var cpcp;\r\n    var t = Math.floor((new Date).getTime() / 1e3)\r\n      , e = t.toString(16).toUpperCase()\r\n      , i = md5(t).toString().toUpperCase();\r\n    if (8 != e.length){\r\n        asas = \"479BB4B7254C150\";\r\n        cpcp = \"7E0AC8874BB0985\";\r\n    }else{\r\n        for (var n = i.slice(0, 5), o = i.slice(-5), a = \"\", s = 0; 5 > s; s++){\r\n            a += n[s] + e[s];\r\n        }\r\n        for (var r = \"\", c = 0; 5 > c; c++){\r\n            r += e[c + 3] + o[c];\r\n        }\r\n        asas = \"A1\" + a + e.slice(-3);\r\n        cpcp= e.slice(0, 3) + r + \"E1\";\r\n    }\r\n    return '{\"as\":\"'+asas+'\",\"cp\":\"'+cpcp+'\"}';\r\n}\r\n!function(e) {\r\n    \"use strict\";\r\n    function t(e, t) {\r\n        var n = (65535 & e) + (65535 & t)\r\n          , r = (e >> 16) + (t >> 16) + (n >> 16);\r\n        return r << 16 | 65535 & n\r\n    }\r\n    function n(e, t) {\r\n        return e << t | e >>> 32 - t\r\n    }\r\n    function r(e, r, o, i, a, u) {\r\n        return t(n(t(t(r, e), t(i, u)), a), o)\r\n    }\r\n    function o(e, t, n, o, i, a, u) {\r\n        return r(t & n | ~t & o, e, t, i, a, u)\r\n    }\r\n    function i(e, t, n, o, i, a, u) {\r\n        return r(t & o | n & ~o, e, t, i, a, u)\r\n    }\r\n    function a(e, t, n, o, i, a, u) {\r\n        return r(t ^ n ^ o, e, t, i, a, u)\r\n    }\r\n    function u(e, t, n, o, i, a, u) {\r\n        return r(n ^ (t | ~o), e, t, i, a, u)\r\n    }\r\n    function s(e, n) {\r\n        e[n >> 5] |= 128 << n % 32,\r\n        e[(n + 64 >>> 9 << 4) + 14] = n;\r\n        var r, s, c, l, f, p = 1732584193, d = -271733879, h = -1732584194, m = 271733878;\r\n        for (r = 0; r < e.length; r += 16)\r\n            s = p,\r\n            c = d,\r\n            l = h,\r\n            f = m,\r\n            p = o(p, d, h, m, e[r], 7, -680876936),\r\n            m = o(m, p, d, h, e[r + 1], 12, -389564586),\r\n            h = o(h, m, p, d, e[r + 2], 17, 606105819),\r\n            d = o(d, h, m, p, e[r + 3], 22, -1044525330),\r\n            p = o(p, d, h, m, e[r + 4], 7, -176418897),\r\n            m = o(m, p, d, h, e[r + 5], 12, 1200080426),\r\n            h = o(h, m, p, d, e[r + 6], 17, -1473231341),\r\n            d = o(d, h, m, p, e[r + 7], 22, -45705983),\r\n            p = o(p, d, h, m, e[r + 8], 7, 1770035416),\r\n            m = o(m, p, d, h, e[r + 9], 12, -1958414417),\r\n            h = o(h, m, p, d, e[r + 10], 17, -42063),\r\n            d = o(d, h, m, p, e[r + 11], 22, -1990404162),\r\n            p = o(p, d, h, m, e[r + 12], 7, 1804603682),\r\n            m = o(m, p, d, h, e[r + 13], 12, -40341101),\r\n            h = o(h, m, p, d, e[r + 14], 17, -1502002290),\r\n            d = o(d, h, m, p, e[r + 15], 22, 1236535329),\r\n            p = i(p, d, h, m, e[r + 1], 5, -165796510),\r\n            m = i(m, p, d, h, e[r + 6], 9, -1069501632),\r\n            h = i(h, m, p, d, e[r + 11], 14, 643717713),\r\n            d = i(d, h, m, p, e[r], 20, -373897302),\r\n            p = i(p, d, h, m, e[r + 5], 5, -701558691),\r\n            m = i(m, p, d, h, e[r + 10], 9, 38016083),\r\n            h = i(h, m, p, d, e[r + 15], 14, -660478335),\r\n            d = i(d, h, m, p, e[r + 4], 20, -405537848),\r\n            p = i(p, d, h, m, e[r + 9], 5, 568446438),\r\n            m = i(m, p, d, h, e[r + 14], 9, -1019803690),\r\n            h = i(h, m, p, d, e[r + 3], 14, -187363961),\r\n            d = i(d, h, m, p, e[r + 8], 20, 1163531501),\r\n            p = i(p, d, h, m, e[r 
+ 13], 5, -1444681467),\r\n            m = i(m, p, d, h, e[r + 2], 9, -51403784),\r\n            h = i(h, m, p, d, e[r + 7], 14, 1735328473),\r\n            d = i(d, h, m, p, e[r + 12], 20, -1926607734),\r\n            p = a(p, d, h, m, e[r + 5], 4, -378558),\r\n            m = a(m, p, d, h, e[r + 8], 11, -2022574463),\r\n            h = a(h, m, p, d, e[r + 11], 16, 1839030562),\r\n            d = a(d, h, m, p, e[r + 14], 23, -35309556),\r\n            p = a(p, d, h, m, e[r + 1], 4, -1530992060),\r\n            m = a(m, p, d, h, e[r + 4], 11, 1272893353),\r\n            h = a(h, m, p, d, e[r + 7], 16, -155497632),\r\n            d = a(d, h, m, p, e[r + 10], 23, -1094730640),\r\n            p = a(p, d, h, m, e[r + 13], 4, 681279174),\r\n            m = a(m, p, d, h, e[r], 11, -358537222),\r\n            h = a(h, m, p, d, e[r + 3], 16, -722521979),\r\n            d = a(d, h, m, p, e[r + 6], 23, 76029189),\r\n            p = a(p, d, h, m, e[r + 9], 4, -640364487),\r\n            m = a(m, p, d, h, e[r + 12], 11, -421815835),\r\n            h = a(h, m, p, d, e[r + 15], 16, 530742520),\r\n            d = a(d, h, m, p, e[r + 2], 23, -995338651),\r\n            p = u(p, d, h, m, e[r], 6, -198630844),\r\n            m = u(m, p, d, h, e[r + 7], 10, 1126891415),\r\n            h = u(h, m, p, d, e[r + 14], 15, -1416354905),\r\n            d = u(d, h, m, p, e[r + 5], 21, -57434055),\r\n            p = u(p, d, h, m, e[r + 12], 6, 1700485571),\r\n            m = u(m, p, d, h, e[r + 3], 10, -1894986606),\r\n            h = u(h, m, p, d, e[r + 10], 15, -1051523),\r\n            d = u(d, h, m, p, e[r + 1], 21, -2054922799),\r\n            p = u(p, d, h, m, e[r + 8], 6, 1873313359),\r\n            m = u(m, p, d, h, e[r + 15], 10, -30611744),\r\n            h = u(h, m, p, d, e[r + 6], 15, -1560198380),\r\n            d = u(d, h, m, p, e[r + 13], 21, 1309151649),\r\n            p = u(p, d, h, m, e[r + 4], 6, -145523070),\r\n            m = u(m, p, d, h, e[r + 11], 10, -1120210379),\r\n            h = u(h, m, p, d, e[r + 2], 15, 718787259),\r\n            d = u(d, h, m, p, e[r + 9], 21, -343485551),\r\n            p = t(p, s),\r\n            d = t(d, c),\r\n            h = t(h, l),\r\n            m = t(m, f);\r\n        return [p, d, h, m]\r\n    }\r\n    function c(e) {\r\n        var t, n = \"\";\r\n        for (t = 0; t < 32 * e.length; t += 8)\r\n            n += String.fromCharCode(e[t >> 5] >>> t % 32 & 255);\r\n        return n\r\n    }\r\n    function l(e) {\r\n        var t, n = [];\r\n        for (n[(e.length >> 2) - 1] = void 0,\r\n        t = 0; t < n.length; t += 1)\r\n            n[t] = 0;\r\n        for (t = 0; t < 8 * e.length; t += 8)\r\n            n[t >> 5] |= (255 & e.charCodeAt(t / 8)) << t % 32;\r\n        return n\r\n    }\r\n    function f(e) {\r\n        return c(s(l(e), 8 * e.length))\r\n    }\r\n    function p(e, t) {\r\n        var n, r, o = l(e), i = [], a = [];\r\n        for (i[15] = a[15] = void 0,\r\n        o.length > 16 && (o = s(o, 8 * e.length)),\r\n        n = 0; 16 > n; n += 1)\r\n            i[n] = 909522486 ^ o[n],\r\n            a[n] = 1549556828 ^ o[n];\r\n        return r = s(i.concat(l(t)), 512 + 8 * t.length),\r\n        c(s(a.concat(r), 640))\r\n    }\r\n    function d(e) {\r\n        var t, n, r = \"0123456789abcdef\", o = \"\";\r\n        for (n = 0; n < e.length; n += 1)\r\n            t = e.charCodeAt(n),\r\n            o += r.charAt(t >>> 4 & 15) + r.charAt(15 & t);\r\n        return o\r\n    }\r\n    function h(e) {\r\n        return 
unescape(encodeURIComponent(e))\r\n    }\r\n    function m(e) {\r\n        return f(h(e))\r\n    }\r\n    function g(e) {\r\n        return d(m(e))\r\n    }\r\n    function v(e, t) {\r\n        return p(h(e), h(t))\r\n    }\r\n    function y(e, t) {\r\n        return d(v(e, t))\r\n    }\r\n    function b(e, t, n) {\r\n        return t ? n ? v(t, e) : y(t, e) : n ? m(e) : g(e)\r\n    }\r\n    \"function\" == typeof define && define.amd ? define(\"static/js/lib/md5\", [\"require\"], function() {\r\n        return b\r\n    }) : \"object\" == typeof module && module.exports ? module.exports = b : e.md5 = b\r\n}(this)"
  },
  {
    "path": "notusedspiders/uploadUtils.py",
    "content": "import requests\r\nfrom requests_toolbelt.multipart.encoder import MultipartEncoder\r\n\r\n\r\ndef upload_post(json_data):\r\n    # 上传帖子 ，参考：http://192.168.2.25:3000/api/interface/2016\r\n    # create_post_url = \"http://api.qa.douguo.net/robot/uploadimagespost\"\r\n    create_post_url = \"http://api.douguo.net/robot/uploadimagespost\"\r\n\r\n    # 传帖子\r\n    # dataJson = json.dumps({\"user_id\":\"19013245\",\"gid\":30,\"t\":\"2017-03-23\",\"cs\":[{\"c\":\"啦啦啦\",\"i\":\"\",\"w\":0,\"h\":0},\r\n    #                       {\"c\":\"啦啦啦2222\",\"i\":\"http://wwww.douguo.com/abc.jpg\",\"w\":0,\"h\":0}],\"time\":1235235234})\r\n    # jsonData = {\"user_id\":\"19013245\",\"gid\":5,\"t\":\"TEST\",\"cs\":'[{\"c\":\"啊啊啊\",\"i\":\"qqq\",\"w\":12,\"h\":10},\r\n    #               {\"c\":\"这个内容真不错\",\"i\":\"http://wwww.baidu.com\",\"w\":10,\"h\":10}]',\"time\":61411313}\r\n\r\n    # print(jsonData)\r\n    req_post = requests.post(create_post_url, data=json_data)\r\n    print(req_post.json())\r\n    # print(reqPost.text)\r\n\r\n\r\ndef uploadImage(img_path, content_type, user_id):\r\n    # 上传单个图片 ， 参考：http://192.168.2.25:3000/api/interface/2015\r\n    # UPLOAD_IMG_URL = \"http://api.qa.douguo.net/robot/uploadpostimage\"\r\n    UPLOAD_IMG_URL = \"http://api.douguo.net/robot/uploadpostimage\"\r\n    # 传图片\r\n\r\n    m = MultipartEncoder(\r\n        # fields={'user_id': '192323',\r\n        #         'images': ('filename', open(imgPath, 'rb'), 'image/JPEG')}\r\n        fields={'user_id': user_id,\r\n                'apisign': '99ea3eda4b45549162c4a741d58baa60',\r\n                'image': ('filename', open(img_path, 'rb'), 'image/jpeg')}\r\n    )\r\n\r\n    r = requests.post(UPLOAD_IMG_URL, data=m, headers={'Content-Type': m.content_type})\r\n    print(r.json())\r\n    # print(r.text)\r\n    return r.json()\r\n    # return r.text"
  },
  {
    "path": "notusedspiders/utils.py",
    "content": "import time\r\nimport datetime\r\n\r\n\r\n"
  },
  {
    "path": "pipelines.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Define your item pipelines here\n#\n# Don't forget to add your pipeline to the ITEM_PIPELINES setting\n# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html\n\nimport datetime\nfrom DgSpiderPhantomJS import urlSettings\nfrom DgSpiderPhantomJS.mysqlUtils import dbhandle_online\nfrom DgSpiderPhantomJS.commonUtils import get_linkmd5id\n\n\nclass DgspiderphantomjsPipeline(object):\n\n    def __init__(self):\n        pass\n\n    # process the data\n    def process_item(self, item, spider):\n\n        # get mysql connettion\n        db_object = dbhandle_online()\n        cursor = db_object.cursor()\n\n        print(\">>>>> Spider name :\")\n        print(spider.name)\n\n        for url in item['url']:\n            linkmd5id = get_linkmd5id(url)\n\n            if spider.name == urlSettings.SPIDER_JFSS:\n                spider_name = urlSettings.SPIDER_JFSS\n                gid = urlSettings.GROUP_ID_JFSS\n            elif spider.name == urlSettings.SPIDER_MSZT:\n                spider_name = urlSettings.SPIDER_MSZT\n                gid = urlSettings.GROUP_ID_MSZT\n            elif spider.name == urlSettings.SPIDER_SYDW:\n                spider_name = urlSettings.SPIDER_SYDW\n                gid = urlSettings.GROUP_ID_SYDW\n            elif spider.name == urlSettings.SPIDER_YLBG:\n                spider_name = urlSettings.SPIDER_YLBG\n                gid = urlSettings.GROUP_ID_YLBG\n            elif spider.name == urlSettings.SPIDER_YMYE:\n                spider_name = urlSettings.SPIDER_YMYE\n                gid = urlSettings.GROUP_ID_YMYE\n\n            module = urlSettings.MODULE\n            site = urlSettings.DOMAIN\n            create_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n            status = '0'\n            sql_search = 'select md5_url from dg_spider.dg_spider_post where md5_url=\"%s\"' % linkmd5id\n            sql = 'insert into dg_spider.dg_spider_post(md5_url, url, spider_name, site, gid, module, status, ' \\\n                  'create_time) ' \\\n                  'values(\"%s\", \"%s\", \"%s\", \"%s\", \"%s\", \"%s\", \"%s\", \"%s\")' \\\n                  % (linkmd5id, url, spider_name, site, gid, module, status, create_time)\n            try:\n                # if url is not existed, then insert\n                cursor.execute(sql_search)\n                result_search = cursor.fetchone()\n                if result_search is None or result_search[0].strip() == '':\n                    cursor.execute(sql)\n                    result = cursor.fetchone()\n                    db_object.commit()\n            except Exception as e:\n                print(\"Waring!: catch exception !\")\n                print(e)\n                db_object.rollback()\n\n        return item\n\n    # spider开启时被调用\n    def open_spider(self, spider):\n        pass\n\n    # sipder 关闭时被调用\n    def close_spider(self, spider):\n        pass\n"
  },
  {
    "path": "settings.py",
    "content": "# -*- coding: utf-8 -*-\n\n# Scrapy settings for dg-spider-phantomJS project\n#\n# For simplicity, this file contains only settings considered important or\n# commonly used. You can find more settings consulting the documentation:\n#\n#     http://doc.scrapy.org/en/latest/topics/settings.html\n#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html\n#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html\n\nBOT_NAME = 'dg-spider-phantomJS'\n\nSPIDER_MODULES = ['dg-spider-phantomJS.spiders']\nNEWSPIDER_MODULE = 'dg-spider-phantomJS.spiders'\n\n# 注册PIPELINES\nITEM_PIPELINES = {\n    'dg-spider-phantomJS.pipelines.DgspiderphantomjsPipeline': 544\n}\n\nDOWNLOADER_MIDDLEWARES = {\n    'dg-spider-phantomJS.middlewares.middleware.JavaScriptMiddleware': 543,  # 键为中间件类的路径，值为中间件的顺序\n    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,  # 禁止内置的中间件\n}\n\nUSER_AGENTS = [\n    \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)\",\n    \"Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)\",\n    \"Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)\",\n    \"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)\",\n    \"Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6\",\n    \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1\",\n    \"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0\",\n    \"Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5\"\n]\n\nCOMMANDS_MODULE = 'dg-spider-phantomJS.commands'\n#\n\n\n# Crawl responsibly by identifying yourself (and your website) on the user-agent\n#USER_AGENT = 'DgSpiderPhantomJS (+http://www.yourdomain.com)'\n\n# Obey robots.txt rules\n# ROBOTSTXT_OBEY = True\n\n# Configure maximum concurrent requests performed by Scrapy (default: 16)\n#CONCURRENT_REQUESTS = 32\n\n# Configure a delay for requests for the same website (default: 0)\n# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay\n# See also autothrottle settings and docs\n# 设置下载延迟\n# DOWNLOAD_DELAY = 3\n\n# The download delay setting will honor only one of:\n#CONCURRENT_REQUESTS_PER_DOMAIN = 16\n#CONCURRENT_REQUESTS_PER_IP = 16\n\n# Disable cookies (enabled by default)\nCOOKIES_ENABLED = True\n\n# Disable Telnet Console (enabled by default)\n#TELNETCONSOLE_ENABLED = False\n\n# Override the default request headers:\n#DEFAULT_REQUEST_HEADERS = {\n#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n#   'Accept-Language': 'en',\n#}\n\n# Enable or disable spider middlewares\n# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html\n#SPIDER_MIDDLEWARES = {\n#    'dg-spider-phantomJS.middlewares.DgspiderphantomjsSpiderMiddleware': 543,\n#}\n\n# Enable or disable downloader middlewares\n# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html\n#DOWNLOADER_MIDDLEWARES = {\n#    'dg-spider-phantomJS.middlewares.MyCustomDownloaderMiddleware': 543,\n#}\n\n# Enable or disable extensions\n# See 
http://scrapy.readthedocs.org/en/latest/topics/extensions.html\n#EXTENSIONS = {\n#    'scrapy.extensions.telnet.TelnetConsole': None,\n#}\n\n# Configure item pipelines\n# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html\n#ITEM_PIPELINES = {\n#    'dg-spider-phantomJS.pipelines.DgspiderphantomjsPipeline': 300,\n#}\n\n# Enable and configure the AutoThrottle extension (disabled by default)\n# See http://doc.scrapy.org/en/latest/topics/autothrottle.html\n#AUTOTHROTTLE_ENABLED = True\n# The initial download delay\n#AUTOTHROTTLE_START_DELAY = 5\n# The maximum download delay to be set in case of high latencies\n#AUTOTHROTTLE_MAX_DELAY = 60\n# The average number of requests Scrapy should be sending in parallel to\n# each remote server\n#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0\n# Enable showing throttling stats for every response received:\n#AUTOTHROTTLE_DEBUG = False\n\n# Enable and configure HTTP caching (disabled by default)\n# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings\n#HTTPCACHE_ENABLED = True\n#HTTPCACHE_EXPIRATION_SECS = 0\n#HTTPCACHE_DIR = 'httpcache'\n#HTTPCACHE_IGNORE_HTTP_CODES = []\n#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'\n"
  },
  {
    "path": "setup.py",
    "content": "from setuptools import setup, find_packages\r\n\r\nsetup(name='scrapy-mymodule',\r\n    entry_points={\r\n        'scrapy.commands': [\r\n            'crawlall=cnblogs.commands:crawlall',\r\n        ],\r\n    },\r\n)\r\n"
  },
  {
    "path": "spiders/UrlSpider_JFSH.py",
    "content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom DgSpiderPhantomJS.items import DgspiderUrlItem\nfrom scrapy.selector import Selector\nfrom DgSpiderPhantomJS import urlSettings\n\n\nclass UrlspiderJfshSpider(scrapy.Spider):\n\n    name = \"UrlSpider_JFSS\"\n\n    # set your allowed domain\n    allowed_domains = [urlSettings.DOMAIN]\n\n    # set spider start url\n    start_urls = [urlSettings.URL_START_JFSS]\n\n    # scrapy crawl\n    def parse(self, response):\n        print(\"LOGS: Starting spider JFSS ...\")\n\n        # init the item\n        item = DgspiderUrlItem()\n\n        # get the page source\n        sel = Selector(response)\n\n        # page_source = self.page\n        url_list = sel.xpath(urlSettings.POST_URL_PHANTOMJS_XPATH).extract()\n\n        # if the url you got had some prefix, it will works, such as 'http://'\n        url_item = []\n        for url in url_list:\n            url = url.replace(urlSettings.URL_PREFIX, '')\n            url_item.append(urlSettings.URL_PREFIX + url)\n\n        # use set to del repeated urls\n        url_item = list(set(url_item))\n\n        item['url'] = url_item\n\n        # transer item to pipeline\n        yield item\n"
  },
  {
    "path": "spiders/UrlSpider_MSZT.py",
    "content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom scrapy.selector import Selector\n\nfrom DgSpiderPhantomJS import urlSettings\nfrom DgSpiderPhantomJS.items import DgspiderUrlItem\n\n\nclass UrlspiderMsztSpider(scrapy.Spider):\n\n    name = \"UrlSpider_MSZT\"\n\n    # set your allowed domain\n    allowed_domains = [urlSettings.DOMAIN]\n\n    # set spider start url\n    start_urls = [urlSettings.URL_START_MSZT]\n\n    # scrapy crawl\n    def parse(self, response):\n        print(\"LOGS: Starting spider MSZT ...\")\n\n        # init the item\n        item = DgspiderUrlItem()\n\n        # get the page source\n        sel = Selector(response)\n\n        # page_source = self.page\n        url_list = sel.xpath(urlSettings.POST_URL_PHANTOMJS_XPATH).extract()\n\n        # if the url you got had some prefix, it will works, such as 'http://'\n        url_item = []\n        for url in url_list:\n            url = url.replace(urlSettings.URL_PREFIX, '')\n            url_item.append(urlSettings.URL_PREFIX + url)\n\n        # use set to del repeated urls\n        url_item = list(set(url_item))\n\n        item['url'] = url_item\n\n        # transer item to pipeline\n        yield item\n"
  },
  {
    "path": "spiders/UrlSpider_SYDW.py",
    "content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom scrapy.selector import Selector\n\nfrom DgSpiderPhantomJS import urlSettings\nfrom DgSpiderPhantomJS.items import DgspiderUrlItem\n\n\nclass UrlspiderSydwSpider(scrapy.Spider):\n\n    name = \"UrlSpider_SYDW\"\n\n    # set your allowed domain\n    allowed_domains = [urlSettings.DOMAIN]\n\n    # set spider start url\n    start_urls = [urlSettings.URL_START_SYDW]\n\n    # scrapy crawl\n    def parse(self, response):\n        print(\"LOGS: Starting spider SYDW ...\")\n\n        # init the item\n        item = DgspiderUrlItem()\n\n        # get the page source\n        sel = Selector(response)\n\n        # page_source = self.page\n        url_list = sel.xpath(urlSettings.POST_URL_PHANTOMJS_XPATH).extract()\n\n        # if the url you got had some prefix, it will works, such as 'http://'\n        url_item = []\n        for url in url_list:\n            url = url.replace(urlSettings.URL_PREFIX, '')\n            url_item.append(urlSettings.URL_PREFIX + url)\n\n        # use set to del repeated urls\n        url_item = list(set(url_item))\n\n        item['url'] = url_item\n\n        # transer item to pipeline\n        yield item"
  },
  {
    "path": "spiders/UrlSpider_YLBG.py",
    "content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom scrapy.selector import Selector\n\nfrom DgSpiderPhantomJS import urlSettings\nfrom DgSpiderPhantomJS.items import DgspiderUrlItem\n\n\nclass UrlspiderYlbgSpider(scrapy.Spider):\n\n    name = \"UrlSpider_YLBG\"\n\n\n    # set your allowed domain\n    allowed_domains = [urlSettings.DOMAIN]\n\n    # set spider start url\n    start_urls = [urlSettings.URL_START_YLBG]\n\n    # scrapy crawl\n    def parse(self, response):\n        print(\"LOGS: Starting spider YLBG ...\")\n\n        # init the item\n        item = DgspiderUrlItem()\n\n        # get the page source\n        sel = Selector(response)\n\n        # page_source = self.page\n        url_list = sel.xpath(urlSettings.POST_URL_PHANTOMJS_XPATH).extract()\n\n        # if the url you got had some prefix, it will works, such as 'http://'\n        url_item = []\n        for url in url_list:\n            url = url.replace(urlSettings.URL_PREFIX, '')\n            url_item.append(urlSettings.URL_PREFIX + url)\n\n        # use set to del repeated urls\n        url_item = list(set(url_item))\n\n        item['url'] = url_item\n\n        # transer item to pipeline\n        yield item"
  },
  {
    "path": "spiders/UrlSpider_YMYE.py",
    "content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom scrapy.selector import Selector\n\nfrom DgSpiderPhantomJS import urlSettings\nfrom DgSpiderPhantomJS.items import DgspiderUrlItem\n\n\nclass UrlspiderYmyeSpider(scrapy.Spider):\n\n    name = \"UrlSpider_YMYE\"\n\n    # set your allowed domain\n    allowed_domains = [urlSettings.DOMAIN]\n\n    # set spider start url\n    start_urls = [urlSettings.URL_START_YMYE]\n\n    # scrapy crawl\n    def parse(self, response):\n        print(\"LOGS: Starting spider YMYE ...\")\n\n        # init the item\n        item = DgspiderUrlItem()\n\n        # get the page source\n        sel = Selector(response)\n\n        # page_source = self.page\n        url_list = sel.xpath(urlSettings.POST_URL_PHANTOMJS_XPATH).extract()\n\n        # if the url you got had some prefix, it will works, such as 'http://'\n        url_item = []\n        for url in url_list:\n            url = url.replace(urlSettings.URL_PREFIX, '')\n            url_item.append(urlSettings.URL_PREFIX + url)\n\n        # use set to del repeated urls\n        url_item = list(set(url_item))\n\n        item['url'] = url_item\n\n        # transer item to pipeline\n        yield item\n\n        # for i in range(5):\n        #     yield Request(self.start_urls[0], callback=self.parse)"
  },
  {
    "path": "spiders/__init__.py",
    "content": "# This package will contain the spiders of your Scrapy project\n#\n# Please refer to the documentation for information on how to create and manage\n# your spiders.\n"
  },
  {
    "path": "test.py",
    "content": "import datetime\r\nimport sys, shelve, time, execjs\r\n# import PyV8\r\n\r\n# create_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\r\n# print(create_time)\r\n\r\n\r\ndef initDriverPool():\r\n    create_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\r\n    time_array = time.strptime(create_time, \"%Y-%m-%d %H:%M:%S\")\r\n    time_stamp = int(time.mktime(time_array))\r\n\r\n    print(time_stamp)\r\n\r\ndef execjs():\r\n    js_str = open('D:\\Scrapy\\DgSpiderPhantomJS\\DgSpiderPhantomJS\\params.js').read()\r\n    a = execjs.compile(js_str).call('getParam')\r\n    # a = execjs.eval(js_str3)\r\n    print(a)\r\n\r\n# def js(self):\r\n#     ctxt = PyV8.JSContext()\r\n#     ctxt.enter()\r\n#     func = ctxt.eval('''(function(){return '###'})''')\r\n#     print(func)\r\n\r\nif __name__=='__main__':\r\n    execjs()"
  },
  {
    "path": "urlSettings.py",
    "content": "# -*- coding: utf-8 -*-\n\n\"\"\"爬取域名\"\"\"\nDOMAIN = 'toutiao.com'\n\n\"\"\"圈子列表\"\"\"\n# 减肥瘦身\nGROUP_ID_JFSS = '33'\n# 情感生活\nGROUP_ID_QQSH = '30'\n# 营养专家\nGROUP_ID_YYZJ = '35'\n# 孕妈育儿\nGROUP_ID_YMYE = '4'\n# 深夜豆文\nGROUP_ID_SYDW = '37'\n# 美食杂谈\nGROUP_ID_MSZT = '24'\n# 娱乐八卦\nGROUP_ID_YLBG = '38'\n\n\"\"\"爬虫列表\"\"\"\nSPIDER_JFSS = 'UrlSpider_JFSS'\nSPIDER_QQSH = 'UrlSpider_QQSH'\nSPIDER_YYZJ = 'UrlSpider_YYZJ'\nSPIDER_YMYE = 'UrlSpider_YMYE'\nSPIDER_SYDW = 'UrlSpider_SYDW'\nSPIDER_MSZT = 'UrlSpider_MSZT'\nSPIDER_YLBG = 'UrlSpider_YLBG'\n\nMODULE = '999'\n\n# url 前缀\nURL_PREFIX = 'http://www.toutiao.com'\n\n# 爬取起始页\nURL_START_JFSS = 'http://www.toutiao.com/ch/news_regimen/'\nURL_START_YMYE = 'http://www.toutiao.com/ch/news_baby/'\nURL_START_SYDW = 'http://www.toutiao.com/ch/news_essay/'\nURL_START_MSZT = 'http://www.toutiao.com/ch/news_food/'\nURL_START_YLBG = 'http://www.toutiao.com/ch/news_entertainment/'\n\n\"\"\"静态页爬取规则\"\"\"\n# # 文章列表页起始爬取URL\n# START_LIST_URL = 'http://www.eastlady.cn/emotion/pxgx/1.html'\n#\n# # 文章列表循环规则\n# LIST_URL_RULER_PREFIX = 'http://www.eastlady.cn/emotion/pxgx/'\n# LIST_URL_RULER_SUFFIX = '.html'\n# LIST_URL_RULER_LOOP = 30\n#\n# # 文章URL爬取规则XPATH\n# POST_URL_XPATH = '//div[@class=\"article_list\"]/ul/li/span[1]/a[last()]/@href'\n\n\"\"\"今日头条-动态JS/Ajax爬取规则\"\"\"\nPOST_URL_PHANTOMJS_XPATH = '//div[@class=\"title-box\"]/a/@href'\n\n\n"
  },
  {
    "path": "webBrowserPools/ghostdriver.log",
    "content": "[INFO  - 2017-05-08T02:11:33.071Z] GhostDriver - Main - running on port 13763\r\n[INFO  - 2017-05-08T02:11:36.561Z] Session [aa201d90-3393-11e7-8f82-03c3e0612c46] - page.settings - {\"XSSAuditingEnabled\":false,\"javascriptCanCloseWindows\":true,\"javascriptCanOpenWindows\":true,\"javascriptEnabled\":true,\"loadImages\":false,\"localToRemoteUrlAccessEnabled\":false,\"userAgent\":\"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)\",\"webSecurityEnabled\":true}\r\n[INFO  - 2017-05-08T02:11:36.561Z] Session [aa201d90-3393-11e7-8f82-03c3e0612c46] - page.customHeaders:  - {}\r\n[INFO  - 2017-05-08T02:11:36.562Z] Session [aa201d90-3393-11e7-8f82-03c3e0612c46] - Session.negotiatedCapabilities - {\"browserName\":\"phantomjs\",\"version\":\"2.1.1\",\"driverName\":\"ghostdriver\",\"driverVersion\":\"1.2.0\",\"platform\":\"windows-7-32bit\",\"javascriptEnabled\":true,\"takesScreenshot\":true,\"handlesAlerts\":false,\"databaseEnabled\":false,\"locationContextEnabled\":false,\"applicationCacheEnabled\":false,\"browserConnectionEnabled\":false,\"cssSelectorsEnabled\":true,\"webStorageEnabled\":false,\"rotatable\":false,\"acceptSslCerts\":false,\"nativeEvents\":true,\"proxy\":{\"proxyType\":\"direct\"},\"phantomjs.page.settings.userAgent\":\"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)\",\"phantomjs.page.settings.loadImages\":false}\r\n[INFO  - 2017-05-08T02:11:36.562Z] SessionManagerReqHand - _postNewSessionCommand - New Session Created: aa201d90-3393-11e7-8f82-03c3e0612c46\r\n"
  },
  {
    "path": "webBrowserPools/pool.py",
    "content": "# douguo object pool\r\n# for the page which loaded by js/ajax\r\n# ang changes should be recored here:\r\n#\r\n# @author zhangjianfei\r\n# @date 2017/05/08\r\n\r\nfrom selenium import webdriver\r\nfrom scrapy.http import HtmlResponse\r\nfrom selenium.webdriver.common.desired_capabilities import DesiredCapabilities\r\nimport time\r\nimport random\r\nimport os\r\nimport DgSpiderPhantomJS.settings as settings\r\nimport pickle\r\n\r\n\r\ndef save_driver():\r\n    dcap = dict(DesiredCapabilities.PHANTOMJS)\r\n    dcap[\"phantomjs.page.settings.userAgent\"] = (random.choice(settings.USER_AGENTS))\r\n    dcap[\"phantomjs.page.settings.loadImages\"] = False\r\n    driver = webdriver.PhantomJS(executable_path=r\"D:\\phantomjs-2.1.1\\bin\\phantomjs.exe\", desired_capabilities=dcap)\r\n    fn = open('D:\\driver.pkl', 'w')\r\n\r\n    # with open(fn, 'w') as f:\r\n    pickle.dump(driver, fn, 0)\r\n    fn.close()\r\n\r\n\r\ndef get_driver():\r\n    fn = 'D:\\driver.pkl'\r\n    with open(fn, 'r') as f:\r\n        driver = pickle.load(f)\r\n    return driver\r\n\r\n\r\nif __name__ == '__main__':\r\n    save_driver()\r\n"
  }
]