Life is short, I use Python
First off, we're obviously going to a legitimate foot-massage parlor. We spend plenty of the year traveling for work, everyone's worn out, and now that a holiday has finally arrived, it's time to round up some good buddies and go relax~
Required environment
- Python 3.8 interpreter
- PyCharm editor
Required modules
- requests (third-party; install it with `pip install requests`)
Data source analysis
- Decide what we want to scrape — here, basic shop information.
- Use the browser developer tools to capture the network traffic and find which request carries the data. We start by analyzing the first page of results; that alone doesn't give us paging, so we also need to work out which request parameter changes between pages.
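A quick sketch of the paging math, assuming (as the packet capture suggests) that the search API returns 32 shops per request and pages via an `offset` query parameter — the total of 1537 is just the rough result count observed:

```python
# Assumption from the packet capture: 32 results per request,
# paged by an `offset` parameter.
PAGE_SIZE = 32
TOTAL = 1537  # approximate total number of search results

offsets = list(range(0, TOTAL, PAGE_SIZE))
print(offsets[:4])   # [0, 32, 64, 96]
print(len(offsets))  # 49 requests needed to cover all results
```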
Code workflow
1. Send a request: request the URL of the data packet that carries the shop information.
2. Get the data: receive the response returned by the server.
3. Parse the data: extract the fields we want (shop information).
4. Save the data: write the extracted fields into a CSV file.
5. Scrape multiple pages: repeat the above for every page of results.
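Step 3 is the fiddly part: each detail page embeds its data as JSON inside the HTML, so fields can be pulled out with non-greedy regular expressions. A minimal sketch against a made-up fragment (the shop name, address, and phone number below are invented for illustration):

```python
import re

# Invented fragment mimicking the JSON embedded in a shop detail page
html = '{"title":"示例足浴","address":"某某路1号","phone":"0755-12345678","openTime":"10:00-02:00\\n"}'

# Non-greedy (.*?) stops at the first closing quote
phone = re.findall('"phone":"(.*?)"', html)[0]
open_time = re.findall('"openTime":"(.*?)"', html)[0].replace('\\n', '')
address = re.findall('"address":"(.*?)"', html)[0]

print([address, phone, open_time])  # ['某某路1号', '0755-12345678', '10:00-02:00']
```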
Full code
```python
import csv
import re
import time

import requests

# Open the CSV in append mode; newline='' prevents blank rows on Windows
f = open('按摩data.csv', mode='a', encoding='utf-8', newline='')
csv_writer = csv.DictWriter(f, fieldnames=[
    '店铺名称',  # shop name
    '人均消费',  # average spend per person
    '店铺评分',  # shop rating
    '营业时间',  # opening hours
    '详情页',    # detail-page URL
])
csv_writer.writeheader()


def get_shop_info(html_url):
    """Request a shop's detail page and extract the address, phone number
    and opening hours from the JSON embedded in the HTML."""
    headers = {
        'Cookie': '',  # fill in your own cookie
        'Host': 'www.meituan.com',
        'Referer': '',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36',
    }
    response = requests.get(url=html_url, headers=headers)
    phone = re.findall('"phone":"(.*?)"', response.text)[0]
    openTime = re.findall('"openTime":"(.*?)"', response.text)[0].replace('\\n', '')
    address = re.findall('"address":"(.*?)"', response.text)[0]
    shop_info = [address, phone, openTime]
    return shop_info


for page in range(0, 1537, 32):  # the API pages by offset, 32 shops at a time
    time.sleep(2)  # pause between requests so we don't hammer the server
    url = ''  # fill in the search-API URL captured with devtools
    data = {
        'uuid': '',
        'userid': '266252179',
        'limit': '32',
        'offset': page,
        'cateId': '-1',
        'q': '按摩',
        'token': ''
    }
    headers = {
        'Referer': 'https://sz.meituan.com/',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    }
    response = requests.get(url=url, params=data, headers=headers)
    result = response.json()['data']['searchResult']
    for index in result:
        shop_id = index['id']
        index_url = f'https://www.meituan.com/meishi/{shop_id}/'
        shop_info = get_shop_info(index_url)
        dit = {
            '店铺名称': index['title'],
            '人均消费': index['avgprice'],
            '店铺评分': index['avgscore'],
            '营业时间': shop_info[2],  # opening hours from the detail page
            '详情页': index_url,
        }
        csv_writer.writerow(dit)
        print(dit)
```
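The save step is worth understanding on its own: `csv.DictWriter` writes the header once and then takes one dict per row, keyed by the fieldnames. A self-contained sketch using an in-memory buffer instead of a file, with invented shop values:

```python
import csv
import io

# In-memory stand-in for the CSV file
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['店铺名称', '人均消费', '店铺评分'])
writer.writeheader()
writer.writerow({'店铺名称': '示例店', '人均消费': 128, '店铺评分': 4.8})

# Read it back to confirm the row survived the round trip
buf.seek(0)
rows = list(csv.DictReader(buf))
print(rows[0]['店铺名称'])  # 示例店
```

Note that `csv.DictReader` returns every field as a string, so numeric columns come back as `'128'` and `'4.8'`.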