Crawling image resources with scrapy's ImagesPipeline
This article introduces, through a simple and easy-to-follow example, how to use scrapy's ImagesPipeline to crawl and download image resources. The walkthrough is detailed, and interested readers can use it as a reference.

This is an example of using scrapy's ImagesPipeline to crawl and download images; the downloaded images end up in the spider's full folder (ImagesPipeline creates a full/ subdirectory under the configured image store).
scrapy startproject DoubanImgs
cd DoubanImgs
scrapy genspider download_douban douban.com
vim DoubanImgs/spiders/download_douban.py
# coding=utf-8
from scrapy import Request
from scrapy.spiders import Spider

from ..items import DoubanImgsItem


class download_douban(Spider):
    name = 'download_douban'

    default_headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, sdch, br',
        'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Host': 'www.douban.com',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    }

    def __init__(self, url='1638835355', *args, **kwargs):
        # Let the Spider base class handle its own initialisation first.
        super().__init__(*args, **kwargs)
        self.allowed_domains = ['douban.com']
        self.url = url
        # Build one start URL per album page; the album shows 18 photos per page.
        self.start_urls = []
        for i in range(23):
            if i == 0:
                page_url = 'http://www.douban.com/photos/album/' + url
            else:
                page_url = 'http://www.douban.com/photos/album/' + url + '/?start=' + str(i * 18)
            self.start_urls.append(page_url)

    def start_requests(self):
        # Request every album page with the browser-like headers defined above.
        for url in self.start_urls:
            yield Request(url=url, headers=self.default_headers, callback=self.parse)

    def parse(self, response):
        # Collect all image URLs on the page; ImagesPipeline downloads
        # whatever lands in the item's image_urls field.
        list_imgs = response.xpath('//div[@class="photolst clearfix"]//img/@src').extract()
        if list_imgs:
            item = DoubanImgsItem()
            item['image_urls'] = list_imgs
            yield item
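The article does not show items.py, but the spider imports DoubanImgsItem from it. A minimal sketch that works with the spider above, using the two field names ImagesPipeline expects by default:

import scrapy


class DoubanImgsItem(scrapy.Item):
    # ImagesPipeline reads download URLs from `image_urls` and
    # stores the results (path, checksum, url) in `images`.
    image_urls = scrapy.Field()
    images = scrapy.Field()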
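For the downloads to actually happen, ImagesPipeline also has to be enabled in settings.py and pointed at a storage directory; it then writes files into a full/ subdirectory of that store, which is the full folder mentioned at the top. A minimal configuration sketch (the store path here is just an example, and Pillow must be installed for ImagesPipeline to work):

ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}
# Files are saved as <IMAGES_STORE>/full/<SHA1 of the image URL>.jpg
IMAGES_STORE = 'images'

With that in place, run the spider with scrapy crawl download_douban, optionally passing a different album id via -a url=<album_id>.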