Web Crawling from Beginner to Expert, Frameworks Part 14 (PySpider Architecture Overview and Usage Guide)


Official documentation
Sample Code:

from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {
    }

    # minutes=24 * 60: re-crawl once a day
    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://scrapy.org/', callback=self.index_page)

    # age: validity period of the task (seconds)
    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }

1 Architecture

[Figure: PySpider architecture diagram]
scheduler: the task scheduler
fetcher: the fetcher, which requests web pages
processor: processes the fetched data; URLs it extracts can be fed back to the scheduler
monitor & webui: the monitor and the web UI

scheduler

Maintains the request queue, which by default is stored in a local database.

fetcher

It supports two modes of fetching: plain HTTP requests and JavaScript-enabled requests (rendered via PhantomJS).

processor

Processes the fetched data; newly extracted URLs are wrapped into tasks and sent back to the scheduler.

2 Command Line

Global Config

Global options work for all subcommands.

Usage: pyspider [OPTIONS] COMMAND [ARGS]...

  A powerful spider system in python.

Options:
  -c, --config FILENAME    a json file with default values for subcommands.
                           {"webui": {"port":5001}}
  --logging-config TEXT    logging config file for built-in python logging
                           module  [default: pyspider/pyspider/logging.conf]
  --debug                  debug mode
  --queue-maxsize INTEGER  maxsize of queue
  --taskdb TEXT            database url for taskdb, default: sqlite
  --projectdb TEXT         database url for projectdb, default: sqlite
  --resultdb TEXT          database url for resultdb, default: sqlite
  --message-queue TEXT     connection url to message queue, default: builtin
                           multiprocessing.Queue
  --amqp-url TEXT          [deprecated] amqp url for rabbitmq. please use
                           --message-queue instead.
  --beanstalk TEXT         [deprecated] beanstalk config for beanstalk queue.
                           please use --message-queue instead.
  --phantomjs-proxy TEXT   phantomjs proxy ip:port
  --data-path TEXT         data dir path
  --version                Show the version and exit.
  --help                   Show this message and exit.

config

Configuration options used when starting pyspider, supplied as a JSON file.

{
  "taskdb": "mysql+taskdb://username:password@host:port/taskdb",
  "projectdb": "mysql+projectdb://username:password@host:port/projectdb",
  "resultdb": "mysql+resultdb://username:password@host:port/resultdb",
  "message_queue": "amqp://username:password@host:port/%2F",
  "webui": {
    "username": "some_name",
    "password": "some_passwd",
    "need-auth": true
  }
}

When the project starts, the three databases above (taskdb, projectdb, resultdb) are created under the data directory of the current project.
The webui block sets the username and password required to access the dashboard; the browser then prompts for these credentials when you open the web UI.
With the options saved in a JSON file, for example config.json, pyspider can be started with them via pyspider -c config.json all.

all

Runs all components.

Usage: pyspider all [OPTIONS]

  Run all the components in subprocess or thread

Options:
  --fetcher-num INTEGER         instance num of fetcher
  --processor-num INTEGER       instance num of processor
  --result-worker-num INTEGER   instance num of result worker
  --run-in [subprocess|thread]  run each components in thread or subprocess.
                                always using thread for windows.
  --help                        Show this message and exit.


one

Runs all components in a single process.

Usage: pyspider one [OPTIONS] [SCRIPTS]...

  One mode not only means all-in-one, it runs every thing in one process
  over tornado.ioloop, for debug purpose

Options:
  -i, --interactive  enable interactive mode, you can choose crawl url.
  --phantomjs        enable phantomjs, will spawn a subprocess for phantomjs
  --help             Show this message and exit.

bench

Runs a benchmark test.

Usage: pyspider bench [OPTIONS]

  Run Benchmark test. In bench mode, in-memory sqlite database is used
  instead of on-disk sqlite database.

Options:
  --fetcher-num INTEGER         instance num of fetcher
  --processor-num INTEGER       instance num of processor
  --result-worker-num INTEGER   instance num of result worker
  --run-in [subprocess|thread]  run each components in thread or subprocess.
                                always using thread for windows.
  --total INTEGER               total url in test page
  --show INTEGER                show how many urls in a page
  --help                        Show this message and exit.

Each of the following components can also be started individually.

scheduler

Usage: pyspider scheduler [OPTIONS]

  Run Scheduler, only one scheduler is allowed.

Options:
  --xmlrpc / --no-xmlrpc
  --xmlrpc-host TEXT
  --xmlrpc-port INTEGER
  --inqueue-limit INTEGER  size limit of task queue for each project, tasks
                           will been ignored when overflow
  --delete-time INTEGER    delete time before marked as delete
  --active-tasks INTEGER   active log size
  --loop-limit INTEGER     maximum number of tasks due with in a loop
  --scheduler-cls TEXT     scheduler class to be used.
  --help                   Show this message and exit.

phantomjs

Usage: run.py phantomjs [OPTIONS] [ARGS]...

  Run phantomjs fetcher if phantomjs is installed.

Options:
  --phantomjs-path TEXT  phantomjs path
  --port INTEGER         phantomjs port
  --auto-restart TEXT    auto restart phantomjs if crashed
  --help                 Show this message and exit.

fetcher

Usage: pyspider fetcher [OPTIONS]

  Run Fetcher.

Options:
  --xmlrpc / --no-xmlrpc
  --xmlrpc-host TEXT
  --xmlrpc-port INTEGER
  --poolsize INTEGER      max simultaneous fetches
  --proxy TEXT            proxy host:port
  --user-agent TEXT       user agent
  --timeout TEXT          default fetch timeout
  --fetcher-cls TEXT      Fetcher class to be used.
  --help                  Show this message and exit.

processor

Usage: pyspider processor [OPTIONS]

  Run Processor.

Options:
  --processor-cls TEXT  Processor class to be used.
  --help                Show this message and exit.

result_worker

Usage: pyspider result_worker [OPTIONS]

  Run result worker.

Options:
  --result-cls TEXT  ResultWorker class to be used.
  --help             Show this message and exit.

webui

Usage: pyspider webui [OPTIONS]

  Run WebUI

Options:
  --host TEXT            webui bind to host
  --port INTEGER         webui bind to port
  --cdn TEXT             js/css cdn server
  --scheduler-rpc TEXT   xmlrpc path of scheduler
  --fetcher-rpc TEXT     xmlrpc path of fetcher
  --max-rate FLOAT       max rate for each project
  --max-burst FLOAT      max burst for each project
  --username TEXT        username of lock -ed projects
  --password TEXT        password of lock -ed projects
  --need-auth            need username and password
  --webui-instance TEXT  webui Flask Application instance to be used.
  --help                 Show this message and exit.


3 API Reference

self.crawl(url, **kwargs)

self.crawl is the main interface to tell pyspider which url(s) should be crawled.
It is the function used to issue requests.

Parameters:

url

the url or url list to be crawled.
The page(s) to request.

callback

the method to parse the response. default: __call__
The callback function.

def on_start(self):
    self.crawl('http://scrapy.org/', callback=self.index_page)

the following parameters are optional

age

the period of validity of the task. The page would be regarded as not modified during the period. default: -1(never recrawl)
The validity period of the task, in seconds; the page is only re-requested after it expires.

@config(age=10 * 24 * 60 * 60)
def index_page(self, response):
    ...

Every page parsed by the callback index_page would be regarded as not changed within 10 days. If you submit the task within 10 days since it was last crawled, it would be discarded.

priority

the priority of task to be scheduled, higher the better. default: 0
Specifies the crawl priority.

def index_page(self):
    self.crawl('http://www.example.org/page2.html', callback=self.index_page)
    self.crawl('http://www.example.org/233.html', callback=self.detail_page,
               priority=1)

The page 233.html would be crawled before page2.html. This parameter can be used to do a BFS and reduce the number of tasks in the queue (which may otherwise cost more memory).

exetime

the executed time of task in unix timestamp. default: 0(immediately)
Specifies the execution time.

import time
def on_start(self):
    self.crawl('http://www.example.org/', callback=self.callback,
               exetime=time.time()+30*60)

The page would be crawled 30 minutes later.

retries

retry times while failed. default: 3
Number of retries after a failure; the default is 3 and the maximum is 10.
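
A minimal sketch inside a handler; the URL and retry count are illustrative:

def on_start(self):
    # retry up to 5 times if the fetch fails (value is illustrative)
    self.crawl('http://www.example.org/', callback=self.index_page, retries=5)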

itag

a marker from frontier page to reveal the potential modification of the task. It will be compared to its last value, recrawl when it’s changed. default: None
An identifier used to detect whether the page has changed.

def index_page(self, response):
    for item in response.doc('.item').items():
        self.crawl(item.find('a').attr.url, callback=self.detail_page,
                   itag=item.find('.update-time').text())

In the sample, .update-time is used as itag. If it’s not changed, the request would be discarded.
If it has not changed, the page is not re-crawled.

Or you can use itag with Handler.crawl_config to specify the script version if you want to restart all of the tasks.

class Handler(BaseHandler):
    crawl_config = {
        'itag': 'v223'
    }

Change the value of itag after you modify the script and click the run button again. It doesn't matter if it was not set before.

auto_recrawl

when enabled, task would be recrawled every age time. default: False
Automatic re-crawl: if enabled, the task is re-crawled automatically each time age expires.

def on_start(self):
    self.crawl('http://www.example.org/', callback=self.callback,
               age=5*60*60, auto_recrawl=True)

The page would be re-crawled every 5 hours, each time its age expires.

method

HTTP method to use. default: GET
The HTTP request method to use.

params

dictionary of URL parameters to append to the URL.
Query-string parameters for a GET request.

def on_start(self):
    self.crawl('http://httpbin.org/get', callback=self.callback,
               params={'a': 123, 'b': 'c'})
    self.crawl('http://httpbin.org/get?a=123&b=c', callback=self.callback)

The two requests are the same.

data

the body to attach to the request. If a dictionary is provided, form-encoding will take place.
Body data for a POST request.

def on_start(self):
    self.crawl('http://httpbin.org/post', callback=self.callback,
               method='POST', data={'a': 123, 'b': 'c'})

files

dictionary of {field: {filename: ‘content’}} files to multipart upload.
Files to upload.

user_agent

the User-Agent of the request

headers

dictionary of headers to send.

cookies

dictionary of cookies to attach to this request.
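
A hedged sketch combining user_agent, headers and cookies in one request; all header and cookie values below are illustrative assumptions:

def on_start(self):
    # send a custom User-Agent, an extra header and a cookie with the request
    self.crawl('http://httpbin.org/get', callback=self.callback,
               user_agent='Mozilla/5.0 (compatible; my-spider)',
               headers={'Accept-Language': 'en-US'},
               cookies={'sessionid': 'abc123'})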

connect_timeout

timeout for initial connection in seconds. default: 20

timeout

maximum time in seconds to fetch the page. default: 120
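
A minimal sketch with both timeouts tightened; the numbers are illustrative:

def on_start(self):
    # give up connecting after 10 s and abort the whole fetch after 60 s
    self.crawl('http://www.example.org/', callback=self.callback,
               connect_timeout=10, timeout=60)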

allow_redirects

follow 30x redirect default: True
Controls whether 30x redirects (e.g. 302) are followed.
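
A sketch that disables redirect following so the 30x response itself reaches the callback; the URL is illustrative:

def on_start(self):
    # keep the 302 response instead of following it
    self.crawl('http://httpbin.org/redirect/1', callback=self.callback,
               allow_redirects=False)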

validate_cert

For HTTPS requests, validate the server’s certificate? default: True
Controls whether the server certificate is validated for HTTPS requests; set it to False to skip validation.
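
A sketch for a host with a self-signed certificate; the URL is an assumption:

def on_start(self):
    # skip certificate validation for this HTTPS request
    self.crawl('https://self-signed.example.org/', callback=self.callback,
               validate_cert=False)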

proxy

proxy server of username:password@hostname:port to use, only http proxy is supported currently.

class Handler(BaseHandler):
    crawl_config = {
        'proxy': 'localhost:8080'
    }

Handler.crawl_config can be used with proxy to set a proxy for the whole project.

etag

use HTTP Etag mechanism to pass the process if the content of the page is not changed. default: True
Used to skip processing when the page content has not changed.

last_modified

use HTTP Last-Modified header mechanism to pass the process if the content of the page is not changed. default: True

fetch_type

set to js to enable JavaScript fetcher. default: None
Set to 'js' to fetch with the JavaScript fetcher, so the page is rendered before it is parsed.

js_script

JavaScript to run before or after the page is loaded; it should be wrapped in a function, e.g. function() { document.write("binux"); }.
A script executed on the fetched page.

def on_start(self):
    self.crawl('http://www.example.org/', callback=self.callback,
               fetch_type='js', js_script='''
               function() {
                   window.scrollTo(0,document.body.scrollHeight);
                   return 123;
               }
               ''')

The script would scroll the page to bottom. The value returned in function could be captured via Response.js_script_result.

js_run_at

run JavaScript specified via js_script at document-start or document-end. default: document-end
Whether the script specified via js_script runs at document-start or document-end.

js_viewport_width/js_viewport_height

set the size of the viewport for the JavaScript fetcher of the layout process.
The viewport size used by the JavaScript fetcher.

load_images

load images when JavaScript fetcher enabled. default: False
Whether images are loaded when rendering with the JavaScript fetcher.
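
A hedged sketch combining the JavaScript fetcher options above; the viewport size and URL are illustrative:

def on_start(self):
    # render the page with the JS fetcher, using a taller viewport and loading images
    self.crawl('http://www.example.org/', callback=self.callback,
               fetch_type='js',
               js_viewport_width=1280, js_viewport_height=2000,
               load_images=True)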

save

An object passed to the callback method; it can be accessed via response.save.
Used to pass values between callbacks.

def on_start(self):
    self.crawl('http://www.example.org/', callback=self.callback,
               save={'a': 123})

def callback(self, response):
    return response.save['a']

123 would be returned in callback

taskid

unique id to identify the task, default is the MD5 check code of the URL, can be overridden by method def get_taskid(self, task)
Specifies the task's unique ID; it is used to deduplicate requests in the queue.

import json
from pyspider.libs.utils import md5string
def get_taskid(self, task):
    return md5string(task['url']+json.dumps(task['fetch'].get('data', '')))

Only the url is md5-ed as the taskid by default; the code above adds the data of a POST request as part of the taskid.

force_update

force update task params even if the task is in ACTIVE status.
Forces the task parameters to be updated.

cancel

cancel a task; it should be used with force_update to cancel an active task. To cancel an auto_recrawl task, you should set auto_recrawl=False as well.
Cancels a pending or active task.
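
A sketch of cancelling a previously scheduled auto_recrawl task, following the rule above; the URL is illustrative:

def on_start(self):
    # cancel the active task for this URL and stop its automatic re-crawl
    self.crawl('http://www.example.org/', callback=self.callback,
               auto_recrawl=False, cancel=True, force_update=True)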

@config(**kwargs)

Default parameters of self.crawl when the decorated method is used as the callback. For example:
Parameters passed to @config become the defaults for that callback.

@config(age=15*60)
def index_page(self, response):
    self.crawl('http://www.example.org/list-1.html', callback=self.index_page)
    self.crawl('http://www.example.org/product-233', callback=self.detail_page)

@config(age=10*24*60*60)
def detail_page(self, response):
    return {...}

The age of list-1.html is 15 minutes, while the age of product-233.html is 10 days. Because the callback of product-233.html is detail_page, it shares the config of detail_page.

Handler.crawl_config = {}

default parameters of self.crawl for the whole project. The parameters in crawl_config for scheduler (priority, retries, exetime, age, itag, force_update, auto_recrawl, cancel) will be joined when the task created, the parameters for fetcher and processor will be joined when executed. You can use this mechanism to change the fetch config (e.g. cookies) afterwards.
Put shared settings here to make them take effect for the whole project.

class Handler(BaseHandler):
    crawl_config = {
        'headers': {
            'User-Agent': 'GoogleBot',
        }
    }

    ...

Response

Response.url

final URL.

Response.text

Content of response, in unicode.
The page source as text.
If Response.encoding is None and the chardet module is available, the encoding of the content will be guessed.

Response.content

Content of response, in bytes.
The page source as raw bytes.

Response.doc

A PyQuery object of the response's content. Links are made absolute by default.
Parses the page using the PyQuery library.
Refer to the documentation of PyQuery: https://pythonhosted.org/pyquery/

It’s important that I will repeat, refer to the documentation of PyQuery: https://pythonhosted.org/pyquery/
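
A minimal sketch of PyQuery-style extraction in a callback; the selectors are illustrative:

def detail_page(self, response):
    # pull the title and all absolute links out of the parsed document
    return {
        'title': response.doc('title').text(),
        'links': [a.attr.href for a in response.doc('a[href^="http"]').items()],
    }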

Response.etree

A lxml object of the response’s content.
Returns an lxml object.

Response.json

The JSON-encoded content of the response, if any.
Parses the response body as JSON.
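
A sketch for a JSON API response; the endpoint and key names are assumptions:

def json_page(self, response):
    # response.json holds the parsed body, e.g. {"args": {...}, "url": "..."}
    data = response.json
    return {'url': data.get('url'), 'args': data.get('args')}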

Response.status_code

The HTTP status code, e.g. 200, 30x, and so on.

Response.orig_url

If there is any redirection during the request, here is the url you just submit via self.crawl.
If a redirect occurred, this is the original URL that was submitted.

Response.headers

A case insensitive dict holds the headers of response.

Response.cookies

Response.error

Error messages when the fetch failed.

Response.time

Time used during fetching.

Response.ok

True if status_code is 200 and no error.
Indicates whether the request succeeded.

Response.encoding

Encoding of Response.content.
Used to get or override the encoding.
If Response.encoding is None, the encoding will be guessed from the header, the content, or chardet (if available).
Setting the encoding manually will override the guessed encoding.
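
A sketch that overrides the guessed encoding before reading the page; the 'gbk' value is an assumption for a legacy Chinese page:

def detail_page(self, response):
    # force GBK decoding, then parse as usual
    response.encoding = 'gbk'
    return {'title': response.doc('title').text()}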

Response.save

The object saved via the self.crawl API.
Used to pass values between different callbacks.

Response.js_script_result

content returned by JS script
Receives the value returned by the JS script.

Response.raise_for_status()

Raise HTTPError if status code is not 200 or Response.error exists.
Raises an error for failed requests.
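
A sketch that fails fast before parsing:

def callback(self, response):
    # raise HTTPError on non-200 status or fetch errors, otherwise continue
    response.raise_for_status()
    return {'url': response.url, 'title': response.doc('title').text()}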

self.send_message

Sends a message to another project; the receiving project handles it in its on_message callback.
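
A hedged sketch based on the upstream API, where self.send_message(project, msg) delivers a message to another project that receives it in on_message; the project name 'other_project' is an assumption:

def detail_page(self, response):
    # hand the parsed result over to another project for further processing
    self.send_message('other_project', {'url': response.url,
                                        'title': response.doc('title').text()})

# in the receiving project's handler:
def on_message(self, project, msg):
    return msg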

@every(minutes=0, seconds=0)

Commonly used for scheduled, periodic crawling.
The decorated method will be called every minutes or seconds.

@every(minutes=24 * 60)
def on_start(self):
    for url in urllist:
        self.crawl(url, callback=self.index_page)

Runs once a day.
The urls would be restarted every 24 hours. Note that if age is also used and the period is longer than @every, the crawl request would be discarded as it's regarded as not changed:

@every(minutes=24 * 60)
def on_start(self):
    self.crawl('http://www.example.org/', callback=self.index_page)

@config(age=10 * 24 * 60 * 60)
def index_page(self):
    ...

Even though the crawl request is triggered every day, it is discarded and the page is only re-crawled every 10 days.

4 Frequently Asked Questions

Unreadable Code (乱码) Returned from Phantomjs

Phantomjs doesn’t support gzip, don’t set Accept-Encoding header with gzip.

How to Delete a Project?

Set group to delete and status to STOP, then wait 24 hours. You can change the time before a project is deleted via
scheduler.DELETE_TIME.

How to Restart a Project?

Why
It happens after you modified a script, and wants to crawl everything again with new strategy. But as the age of urls are not expired. Scheduler will discard all of the new requests.
Solution
1.Create a new project.
2.Using a itag within Handler.crawl_config to specify the version of your script.

What does the progress bar mean on the dashboard?

When the mouse moves onto the progress bar, you can see the explanations.
For 5m, 1h, and 1d, the numbers are the events triggered in the last 5 minutes, 1 hour, and 1 day. For the all progress bar, they are the total number of tasks in the corresponding status.
Only the tasks in DEBUG/RUNNING status will show the progress.
