Example of using Scrapy to parse JS in Python



from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from selenium import selenium  # old Selenium RC Python client
import time

class MySpider(CrawlSpider):
    name = 'cnbeta'
    allowed_domains = ['cnbeta.com']
    start_urls = ['http://www.cnbeta.com']

    rules = (
        # Extract links matching '/articles/*.htm', hand each one to
        # parse_page, and keep following links from the pages they lead to.
        Rule(SgmlLinkExtractor(allow=(r'/articles/.*\.htm', )),
             callback='parse_page', follow=True),
    )

    def __init__(self):
        CrawlSpider.__init__(self)
        self.verificationErrors = []
        # Connect to a Selenium RC server on localhost:4444 and drive Firefox.
        self.selenium = selenium("localhost", 4444, "*firefox", "http://www.cnbeta.com")
        self.selenium.start()

    def __del__(self):
        # Shut the Selenium browser session down when the spider goes away.
        self.selenium.stop()
        print(self.verificationErrors)


    def parse_page(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        from webproxy.items import WebproxyItem  # project-specific item class

        # Selector over the raw (pre-JavaScript) response, kept for reference.
        raw_sel = Selector(response)

        # Re-open the URL in the Selenium-controlled browser so its JavaScript runs.
        sel = self.selenium
        sel.open(response.url)
        sel.wait_for_page_to_load("30000")

        # Give asynchronous scripts a little extra time to finish rendering.
        time.sleep(2.5)
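The listing stops right after the page has been rendered. A minimal sketch of how parse_page could continue, under the assumption that the item should be filled from the JS-rendered markup: pull the rendered HTML back out of Selenium RC with get_html_source(), wrap it in a Scrapy Selector, and populate a WebproxyItem from that instead of from the raw response. The url and title fields below are illustrative assumptions, not taken from the original article.

        # Continuation of parse_page (sketch only): read back the rendered HTML.
        rendered_html = sel.get_html_source()
        rendered_sel = Selector(text=rendered_html)

        item = WebproxyItem()
        # 'url' and 'title' are assumed field names for illustration; use
        # whatever fields webproxy.items.WebproxyItem actually defines.
        item['url'] = response.url
        item['title'] = rendered_sel.xpath('//title/text()').extract()
        yield item

Running the spider also requires a Selenium RC server listening on port 4444 (typically started with java -jar selenium-server.jar) before launching the crawl with scrapy crawl cnbeta.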

That is the complete example of using Scrapy to parse JS in Python. For more on this topic, see the other related articles on haodaima.com!

Articles you may also be interested in
Python automated operations: monitoring key route changes on Huawei AR routers with a Python script

Python automated operations: automatic device discovery with the netmiko module

Python automated operations: connecting to and configuring Huawei switches with the netmiko module

Python automated operations: backing up device configurations with the Python netmiko module

Python automated operations: hands-on with the Paramiko module and a bastion host