一、Install the Scrapy library (source: 菜鸟学院, https://www.cainiaoxueyuan.com/bc/46356.html)
pip install scrapy
二、Create a project
1、Run the following in PyCharm's terminal
scrapy startproject <project-name>
2、Change into the newly created directory and run the commands below
cd kjpc
scrapy genspider <spider-name> <any-domain>  (the domain can be changed later)
scrapy genspider pachong www.aiyou.com
3、Files and directories created automatically
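For a project named kjpc with a spider named pachong (as in the commands above), the generated layout typically looks like this; every name other than kjpc and pachong is Scrapy's default:

```
kjpc/
├── scrapy.cfg          # deployment configuration
└── kjpc/               # the project's Python package
    ├── __init__.py
    ├── items.py        # item definitions
    ├── middlewares.py  # spider/downloader middlewares
    ├── pipelines.py    # item pipelines
    ├── settings.py     # project settings
    └── spiders/
        ├── __init__.py
        └── pachong.py  # generated by `scrapy genspider`
```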
4、Parameter reference
name: the spider's unique identifier; this is the name used when running the spider
allowed_domains: the domains the spider is allowed to crawl
start_urls: URLs that Scrapy sends its initial GET requests to; multiple addresses may be listed
response: the response object passed to parse()
5、Tweak settings.py to bypass certain restrictions
1、Disable robots.txt compliance
ROBOTSTXT_OBEY = False
2、Show only error-level log messages
LOG_LEVEL = 'ERROR'
3、Set a browser User-Agent header
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
6、Run the project from PyCharm's terminal
scrapy crawl pachong          (run with log output)
scrapy crawl pachong --nolog  (run without log output)
7、Source code
import scrapy


class PachongSpider(scrapy.Spider):
    name = "pachong"  # unique spider name, used by `scrapy crawl pachong`
    # allowed_domains = ["www.aiyou.com"]  # commented out so off-domain URLs are not filtered
    start_urls = ["https://www.baidu.com/", "https://www.sina.com"]

    def parse(self, response):
        print("Response:", response)