Selenium makes it very convenient to fetch a page's AJAX-rendered content, and it can also simulate user actions such as clicking and typing text, which is extremely useful when crawling pages with scrapy.
There are plenty of articles online about integrating selenium into scrapy, but very few of them keep the crawl asynchronous. The code below rewrites scrapy's download handler, integrating selenium (via PhantomJS) while preserving asynchronous downloading.
To use it, register PhantomJSDownloadHandler for the http and https schemes in the DOWNLOAD_HANDLERS setting of your project configuration.
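A minimal sketch of the corresponding settings, assuming the handler is saved as myproject/handlers.py (the module path and option values here are assumptions, not part of the original post):

```python
# settings.py -- sketch only; adjust the module path to wherever the handler lives
DOWNLOAD_HANDLERS = {
    'http': 'myproject.handlers.PhantomJSDownloadHandler',
    'https': 'myproject.handlers.PhantomJSDownloadHandler',
}

# Options consumed by the handler below
PHANTOMJS_MAXRUN = 10      # size of the PhantomJS driver pool
PHANTOMJS_OPTIONS = {
    # passed straight through to webdriver.PhantomJS(**PHANTOMJS_OPTIONS),
    # e.g. 'executable_path': '/usr/local/bin/phantomjs'
}
```

The handler implementation itself follows.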
```python
# encoding: utf-8
from __future__ import unicode_literals

from scrapy import signals
from scrapy.signalmanager import SignalManager
from scrapy.responsetypes import responsetypes
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver
from six.moves import queue
from twisted.internet import defer, threads
from twisted.python.failure import Failure


class PhantomJSDownloadHandler(object):

    def __init__(self, settings):
        self.options = settings.get('PHANTOMJS_OPTIONS', {})

        max_run = settings.get('PHANTOMJS_MAXRUN', 10)
        self.sem = defer.DeferredSemaphore(max_run)
        self.queue = queue.LifoQueue(max_run)

        SignalManager(dispatcher.Any).connect(self._close, signal=signals.spider_closed)

    def download_request(self, request, spider):
        """use semaphore to guard a phantomjs pool"""
        return self.sem.run(self._wait_request, request, spider)

    def _wait_request(self, request, spider):
        try:
            driver = self.queue.get_nowait()
        except queue.Empty:
            driver = webdriver.PhantomJS(**self.options)

        driver.get(request.url)
        # ghostdriver won't respond to switch_to.window until the page has loaded
        dfd = threads.deferToThread(
            lambda: driver.switch_to.window(driver.current_window_handle))
        dfd.addCallback(self._response, driver, spider)
        return dfd

    def _response(self, _, driver, spider):
        body = driver.execute_script("return document.documentElement.innerHTML")
        if body.startswith("<head></head>"):  # cannot access response header in Selenium
            body = driver.execute_script("return document.documentElement.textContent")
        url = driver.current_url
        respcls = responsetypes.from_args(url=url, body=body[:100].encode('utf8'))
        resp = respcls(url=url, body=body, encoding="utf-8")

        response_failed = getattr(spider, "response_failed", None)
        if response_failed and callable(response_failed) and response_failed(resp, driver):
            driver.close()
            return defer.fail(Failure())
        else:
            self.queue.put(driver)
            return defer.succeed(resp)

    def _close(self):
        while not self.queue.empty():
            driver = self.queue.get_nowait()
            driver.close()
```
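Note that the handler looks for an optional response_failed(response, driver) method on the spider; if it returns True, the driver is closed and the request fails instead of the driver being returned to the pool. A minimal spider sketch using that hook (the spider name, URL, and the error check are illustrative assumptions):

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    # All names and URLs here are placeholders for illustration.
    name = 'example'
    start_urls = ['http://example.com/']

    def response_failed(self, response, driver):
        # Returning True tells PhantomJSDownloadHandler to close this driver
        # and fail the request rather than reuse the driver from the pool.
        return 'service unavailable' in response.text.lower()

    def parse(self, response):
        # The response body is the JS-rendered DOM produced by PhantomJS.
        for title in response.css('h1::text').extract():
            yield {'title': title}
```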
That is all of this piece on using PhantomJS in scrapy for asynchronous crawling. I hope it serves as a useful reference, and please continue to support 服務器之家.
Original article: https://blog.csdn.net/whueratsjtuer/article/details/79198863