What Are Crawling Techniques? Common Anti-Crawling Measures


Introduction: The web crawler is an effective, general-purpose means of collecting data automatically. This article introduces the main types of crawlers.

Authors: Zhao Guosheng, Wang Jian

Source: 华章科技 (Huazhang Technology)


  • A focused web crawler is a crawler program aimed at a specific topic need, whereas a general-purpose web crawler is a key component of a search engine's fetching system (Baidu, Google, Yahoo, etc.); its main purpose is to download web pages from the Internet and form a local mirror of the Internet's content.
  • Incremental crawling means crawling a given site so that, whenever the site adds data or its data changes, the crawler automatically fetches only the new or changed data.
  • By how they exist, Web pages can be divided into surface Web pages and deep Web pages (the latter also called invisible or hidden Web pages).
  • Surface Web pages are those a traditional search engine can index: chiefly static pages reachable through hyperlinks.
  • Deep Web pages are those whose content largely cannot be reached through static links; they hide behind search forms and are obtained only after a user submits keywords.

01 Focused Crawler Technology

A focused web crawler (focused crawler) is also called a topic crawler. Focused crawling adds link-evaluation and content-evaluation modules; the key to its crawl strategy is evaluating the importance of page content and of links.

Crawl strategies based on link evaluation mainly treat a Web page as a semi-structured document containing rich structural information that can be used to judge link importance. Another approach evaluates link value using the Web's own link structure: the HITS method, which decides the order in which links are visited by computing an Authority weight and a Hub weight for each visited page.
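The Authority/Hub iteration mentioned above can be sketched on a toy link graph. This is a minimal illustration, not the book's implementation; the graph, the iteration count, and the normalization choice are all assumptions made for the example:

```python
# Minimal HITS sketch: iteratively update hub and authority scores
# on a small in-memory directed link graph (the graph is illustrative).
def hits(graph, iterations=20):
    # graph: {page: [pages it links to]}
    pages = set(graph) | {p for links in graph.values() for p in links}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority of p: sum of hub scores of pages that link to p
        auth = {p: sum(hub[q] for q in graph if p in graph[q]) for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # hub of p: sum of authority scores of pages p links to
        hub = {p: sum(auth[q] for q in graph.get(p, [])) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return auth, hub

if __name__ == "__main__":
    toy = {"A": ["B", "C"], "B": ["C"], "C": []}
    auth, hub = hits(toy)
    # C is linked to most often, so it ends up with the highest authority;
    # A links to the strongest authorities, so it is the strongest hub
    print(max(auth, key=auth.get), max(hub, key=hub.get))
```

A crawler would then pop links in descending authority (or hub) order instead of plain FIFO order.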

Crawl strategies based on content evaluation apply text-similarity computation. The Fish-Search algorithm treats the user's query terms as the topic; its refinement, the Shark-Search algorithm, uses a vector space model to compute how relevant a page is to the topic.
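The vector-space relevance idea behind Shark-Search can be sketched as cosine similarity between bag-of-words vectors. The whitespace tokenization and the sample strings are deliberately naive assumptions for illustration:

```python
# Sketch of vector-space topic relevance: represent the page and the topic
# as bag-of-words count vectors and score by cosine similarity.
from collections import Counter
from math import sqrt

def cosine_relevance(page_text, topic_text):
    page = Counter(page_text.lower().split())
    topic = Counter(topic_text.lower().split())
    dot = sum(page[w] * topic[w] for w in topic)          # shared-term weight
    norm = (sqrt(sum(c * c for c in page.values())) *
            sqrt(sum(c * c for c in topic.values())))
    return dot / norm if norm else 0.0                    # 0 when no overlap

if __name__ == "__main__":
    topic = "python web crawler"
    relevant = "a python web crawler downloads pages with python"
    irrelevant = "cooking recipes for pasta"
    print(cosine_relevance(relevant, topic) > cosine_relevance(irrelevant, topic))  # True
```

A focused crawler would keep a link only when the score of its surrounding page (or anchor text) exceeds a threshold.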

A topic-oriented, need-oriented crawler fetches content of a specific kind and keeps the collected information as relevant to that need as possible. A simple example of using a focused crawler follows.

  • [Example 1] A simple focused crawler that scrapes images

```python
import urllib.request   # urllib handles the crawling; different Python versions ship it differently
import re               # regular expressions drive the pattern-based extraction

keyname = ""                            # the content you want to crawl for
key = urllib.request.quote(keyname)     # URL-encode the keyword so the server can read it
for i in range(0, 5):                   # (0, 5) is adjustable: the number of result pages to fetch
    url = ("https://s.taobao.com/search?q=" + key +
           "&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20180815"
           "&ie=utf8&bcoffset=0&ntoffset=6&p4ppushleft=1%2C48&s=" + str(i * 44))
    # open several similar result pages by hand first to discover the URL pattern
    # data holds everything the page returned; it must be read and decoded
    data = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    pat = '"pic_url":"//(.*?)"'         # pat is the regular expression that extracts image URLs
    # put everything matched into a list
    picturelist = re.compile(pat).findall(data)
    print(picturelist)                  # optional: print the list to inspect it
    for j in range(0, len(picturelist)):
        picture = picturelist[j]
        pictureurl = "http://" + picture
        # iterate over the list, prepending http:// to reach the full-size image
        file = "E:/pycharm/vscode文件/图片/" + str(i) + str(j) + ".jpg"
        # number each image, otherwise files with the same name overwrite each other
        urllib.request.urlretrieve(pictureurl, filename=file)
        # finally, save each image into the folder
```

02 General Crawler Technology

A general-purpose web crawler (general purpose Web crawler) is also called a whole-Web crawler. It works as follows.

  • First, obtain the initial URLs. The initial URL addresses can be specified manually by the user, or determined by one or more seed pages the user designates.
  • Second, crawl the pages at the initial URLs and obtain new URLs. After obtaining the initial URL addresses, the crawler fetches the corresponding pages and stores them in the raw-page database; while fetching, it discovers new URL addresses and records the already-crawled URLs in a URL list used for de-duplication and for tracking crawl progress.
  • Third, put the new URLs into the URL queue: every new URL address obtained in the second step is added to the URL queue.
  • Fourth, read a new URL from the URL queue, crawl the page it points to, extract further new URLs from that page, and repeat the crawling process above.
  • Fifth, stop crawling when the crawler system's stop condition is met. When writing a crawler one generally sets a stop condition; without one, the crawler keeps going until it can obtain no new URL addresses, while with one it stops as soon as the condition is satisfied. See the lower-right subfigure of Figure 2-5 for details.

Applications of general crawling use different crawl strategies, among which the breadth-first and depth-first strategies are the most important; the depth-first strategy, for instance, visits next-level page links in order of depth from low to high.
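The five steps above, combined with the breadth-first strategy, can be sketched as a queue-driven loop. To keep the sketch self-contained, the network fetch is stubbed out with an in-memory link graph (an assumption made only for illustration); only the control flow — queue, visited set, stop condition — is real:

```python
# Breadth-first sketch of the five-step general crawl loop.
from collections import deque

def crawl(seed_urls, link_graph, max_pages=10):
    # link_graph stands in for "fetch the page and extract its URLs"
    queue = deque(seed_urls)            # step 1: the initial URLs
    seen = set(seed_urls)               # the URL list used for de-duplication
    crawled = []
    while queue and len(crawled) < max_pages:    # step 5: stop condition
        url = queue.popleft()                    # step 4: read the next URL
        crawled.append(url)                      # step 2: "fetch" and store the page
        for new_url in link_graph.get(url, []):  # step 2: discover new URLs
            if new_url not in seen:
                seen.add(new_url)
                queue.append(new_url)            # step 3: enqueue the new URLs
    return crawled

if __name__ == "__main__":
    graph = {"a": ["b", "c"], "b": ["c", "d"], "c": []}
    print(crawl(["a"], graph))   # breadth-first visit order: ['a', 'b', 'c', 'd']
```

Swapping the `deque` for a stack (`pop()` instead of `popleft()`) turns the same loop into the depth-first strategy.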

An example of using a general crawler follows.

  • [Example 2] Crawl JD.com product information

```python
'''Crawl JD.com product information:
    Request URL: https://www.jd.com/
    Extract for each product:
        1. the product detail-page link
        2. the product name
        3. the product price
        4. the number of reviews
        5. the merchant
'''
from selenium import webdriver   # bring in webdriver from selenium
from selenium.webdriver.common.keys import Keys
import time

def get_good(driver):
    try:
        # scroll the window via JavaScript so every product entry loads
        js_code = '''
            window.scrollTo(0, 5000);
        '''
        driver.execute_script(js_code)   # run the JS code
        # wait for the data to load
        time.sleep(2)
        # find every product entry
        # good_figure = driver.find_element_by_id('J_goodsList')
        good_list = driver.find_elements_by_class_name('gl-item')
        n = 1
        for good in good_list:
            # locate sub-elements with attribute/CSS selectors
            # product link
            good_url = good.find_element_by_css_selector(
                '.p-img a').get_attribute('href')
            # product name
            good_name = good.find_element_by_css_selector(
                '.p-name em').text.replace("\n", "--")
            # product price
            good_price = good.find_element_by_class_name(
                'p-price').text.replace("\n", ":")
            # number of reviews
            good_commit = good.find_element_by_class_name(
                'p-commit').text.replace("\n", " ")
            good_content = f'''
                Product link: {good_url}
                Product name: {good_name}
                Product price: {good_price}
                Reviews: {good_commit}
            '''
            print(good_content)
            with open('jd.txt', 'a', encoding='utf-8') as f:
                f.write(good_content)
        # go to the next results page and recurse
        next_tag = driver.find_element_by_class_name('pn-next')
        next_tag.click()
        time.sleep(2)
        get_good(driver)
        time.sleep(10)
    finally:
        driver.close()

if __name__ == '__main__':
    good_name = input('Enter the product to search for: ').strip()
    driver = webdriver.Chrome()
    driver.implicitly_wait(10)
    # send a request to the JD home page
    driver.get('https://www.jd.com/')
    # type the product name and press Enter to search
    input_tag = driver.find_element_by_id('key')
    input_tag.send_keys(good_name)
    input_tag.send_keys(Keys.ENTER)
    time.sleep(2)
    get_good(driver)
```

03 Incremental Crawler Technology

Some sites periodically add new data on top of their existing page data. For example, a movie site may keep publishing the latest popular films, and a fiction site updates chapters as the authors write them. In scenarios like these we can use an incremental crawler.

Incremental crawler technology (incremental Web crawler) uses a crawler program to monitor a site for data updates so that the crawler can fetch the site's newly updated data.

To carry out incremental crawling, here are three ways of detecting duplicate data:

  1. before sending a request, check whether the URL has been crawled before;
  2. after parsing the content, check whether that content has been crawled before;
  3. when writing to the storage medium, check whether the content already exists in the medium.
  • The first approach suits sites that keep producing new pages, such as new chapters of a novel or daily real-time news;
  • the second suits sites whose page content is updated periodically;
  • the third acts as the last line of defense and achieves de-duplication to the greatest possible extent.

It is easy to see that the core of incremental crawling is de-duplication. There are currently two de-duplication methods.

  • First, store the URLs produced during crawling in a Redis set. On the next crawl, check each URL about to be requested against that set: if it is already present, skip the request; otherwise, send it.
  • Second, define a unique identifier (a data fingerprint) for each crawled page's content and store it in a Redis set. The next time page data is crawled, check, before persisting it, whether its unique identifier already exists in the Redis set, and decide accordingly whether to persist it.

An example of using an incremental crawler follows.

  • [Example 3] Crawl all movie-detail data from the 4567tv site

```python
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from redis import Redis
from incrementPro.items import IncrementproItem

class MovieSpider(CrawlSpider):
    name = 'movie'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://www.4567tv.tv/frim/index7-11.html']
    rules = (
        Rule(LinkExtractor(allow=r'/frim/index7-\d+\.html'),
             callback='parse_item', follow=True),
    )
    # create the Redis connection object
    conn = Redis(host='127.0.0.1', port=6379)

    def parse_item(self, response):
        li_list = response.xpath('//li[@class="p1 m1"]')
        for li in li_list:
            # get the detail-page URL
            detail_url = 'http://www.4567tv.tv' + li.xpath('./a/@href').extract_first()
            # add the detail-page URL to the Redis set
            ex = self.conn.sadd('urls', detail_url)
            if ex == 1:
                print('This URL has not been crawled yet; its data can be fetched')
                yield scrapy.Request(url=detail_url, callback=self.parst_detail)
            else:
                print('No update yet; there is no new data to crawl!')

    # parse the movie name and genre on the detail page, for persistence
    def parst_detail(self, response):
        item = IncrementproItem()
        item['name'] = response.xpath('//dt[@class="name"]/text()').extract_first()
        item['kind'] = response.xpath('//figure[@class="ct-c"]/dl/dt[4]//text()').extract()
        item['kind'] = ''.join(item['kind'])
        yield item
```

The pipeline file:

```python
from redis import Redis

class IncrementproPipeline(object):
    conn = None

    def open_spider(self, spider):
        self.conn = Redis(host='127.0.0.1', port=6379)

    def process_item(self, item, spider):
        dic = {
            'name': item['name'],
            'kind': item['kind']
        }
        print(dic)
        self.conn.lpush('movieData', dic)
        # if the push fails, turn dic into str(dic) or change the redis version:
        # pip install -U redis==2.10.6
        return item
```

04 Deep Web Crawler Technology

On the Internet, web pages can be divided by how they exist into two classes: surface Web pages and deep Web pages.

Surface pages are static pages that can be reached through static links without submitting a form, whereas deep pages hide behind forms, cannot be reached directly via static links, and are obtained only after certain keywords are submitted. The most important part of a deep Web crawler (deep Web crawler) is therefore form filling.

Deep pages vastly outnumber surface pages on the Internet, so we need ways to crawl the deep Web.

The basic components of a deep Web crawler are: a URL list, an LVS list (LVS means label/value set, i.e., the data source used to fill forms), a crawl controller, a parser, an LVS controller, a form analyzer, a form processor, and a response analyzer.

A deep Web crawler fills forms in one of two ways:

  • form filling based on domain knowledge (build a keyword library for filling forms and, when needed, pick the appropriate keyword via semantic analysis);
  • form filling based on web-page structure analysis (generally used when domain knowledge is limited; this approach analyzes the page structure and fills the form automatically).

About the authors: Zhao Guosheng is a professor at Harbin Normal University, holds a doctorate in engineering, supervises master's students, and is recognized as a special talent in Heilongjiang Province's network-security field. His teaching and research focus on trusted networks, intrusion tolerance, cognitive computing, and IoT security.

This article is excerpted from《Python网络爬虫技术与实战》(Python Web Crawling: Technology and Practice) and published with the publisher's permission.


Further reading:《Python网络爬虫技术与实战》

Blurb: a systematic, comprehensive, hands-on guide to web crawling in Python. Drawing on rich engineering experience and working closely through demonstration cases, the author covers nearly all the core techniques web crawling involves. The content is arranged to dissect the concepts and principles behind each technique step by step, with plenty of concise code, helping you go from zero to working implementations.
