This is a server-side anti-crawler guide: blocking certain User Agents from scraping your site with Apache, Nginx, or PHP. Hopefully it is useful.
1. Apache
① Modify the .htaccess file
Edit the .htaccess file in your site's root directory and add one of the following two snippets (either one works):
Option (1):
- RewriteEngine On
- RewriteCond %{HTTP_USER_AGENT} "(^$|FeedDemon|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReportsBot|YYSpider|DigExt|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms)" [NC]
- RewriteRule ^(.*)$ - [F]
Option (2):
- SetEnvIfNoCase ^User-Agent$ ".*(FeedDemon|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReportsBot|YYSpider|DigExt|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms)" BADBOT
- Order Allow,Deny
- Allow from all
- Deny from env=BADBOT
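To verify that the .htaccess rules are active, you can replay a blocked UA with curl. A minimal check, using a hypothetical example.com in place of your own domain:
- # A blocklisted UA such as AhrefsBot should now receive HTTP 403
- curl -I -A 'AhrefsBot' http://example.com/
- # A regular browser UA should still receive 200
- curl -I -A 'Mozilla/5.0' http://example.com/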
② Modify the httpd.conf configuration file
Locate the section similar to the one below, add or modify it as shown, and then restart Apache:
- DocumentRoot /home/wwwroot/xxx
- <Directory "/home/wwwroot/xxx">
- SetEnvIfNoCase User-Agent ".*(FeedDemon|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReportsBot|YYSpider|DigExt|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms)" BADBOT
- Order allow,deny
- Allow from all
- Deny from env=BADBOT
- </Directory>
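Before restarting, it is worth validating the syntax first. A typical sequence (the control script may be apache2ctl on Debian/Ubuntu systems):
- # Validate the configuration
- apachectl configtest
- # Gracefully restart so existing connections are not dropped
- apachectl graceful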
2. Nginx
Go into the conf directory under the Nginx installation directory and save the following as agent_deny.conf:
- cd /usr/local/nginx/conf
- vim agent_deny.conf
- # Block scraping tools such as Scrapy
- if ($http_user_agent ~* (Scrapy|Curl|HttpClient)) {
- return 403;
- }
- # Block the listed UAs, as well as empty UAs
- if ($http_user_agent ~* "FeedDemon|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReportsBot|YYSpider|DigExt|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms|^$") {
- return 403;
- }
- # Block request methods other than GET|HEAD|POST
- if ($request_method !~ ^(GET|HEAD|POST)$) {
- return 403;
- }
Then, in the relevant site configuration, insert the following line right after location / {:
- include agent_deny.conf;
As in this configuration:
- [marsge@Mars_Server ~]$ cat /usr/local/nginx/conf/zhangge.conf
- location / {
- try_files $uri $uri/ /index.php?$args;
- # Add this one line here:
- include agent_deny.conf;
- rewrite ^/sitemap_360_sp.txt$ /sitemap_360_sp.php last;
- rewrite ^/sitemap_baidu_sp.xml$ /sitemap_baidu_sp.php last;
- rewrite ^/sitemap_m.xml$ /sitemap_m.php last;
After saving, gracefully reload Nginx with a command like the following (assuming the /usr/local/nginx install prefix used above):
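- # Test the config first, then reload without dropping connections
- /usr/local/nginx/sbin/nginx -t
- /usr/local/nginx/sbin/nginx -s reload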
3. PHP
Paste the following snippet right after the first <?php in the site's entry file, index.php:
- // Get the UA string
- $ua = $_SERVER['HTTP_USER_AGENT'];
- // Put known malicious USER_AGENTs into an array
- $now_ua = array('FeedDemon', 'BOT/0.1 (BOT for JCE)', 'CrawlDaddy', 'Java', 'Feedly', 'UniversalFeedParser', 'ApacheBench', 'Swiftbot', 'ZmEu', 'Indy Library', 'oBot', 'jaunty', 'YandexBot', 'AhrefsBot', 'MJ12bot', 'WinHttp', 'EasouSpider', 'HttpClient', 'Microsoft URL Control', 'YYSpider', 'Python-urllib', 'lightDeckReportsBot');
- // Block empty USER_AGENT: mainstream scrapers such as dedecms, and some SQL injection tools, send an empty USER_AGENT
- if (!$ua) {
- header("Content-type: text/html; charset=utf-8");
- die('Please do not scrape this site!');
- } else {
- foreach ($now_ua as $value) {
- // Case-insensitively check whether the UA contains a blocklist entry
- if (stripos($ua, $value) !== false) {
- header("Content-type: text/html; charset=utf-8");
- die('Please do not scrape this site!');
- }
- }
- }
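To watch the PHP guard fire, request the page without -I so the die() message is printed in the response body; again, example.com stands in for your own site:
- # An empty UA should get the refusal message instead of the page
- curl -A '' http://example.com/index.php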
4. Testing the results
If you are on a VPS, testing is simple: use curl -A to simulate a crawl, for example:
Simulate a fetch from YisouSpider:
- curl -I -A 'YisouSpider' bizhi.bcoderss.com
Simulate a fetch with an empty UA:
- curl -I -A '' bizhi.bcoderss.com
Simulate a fetch from Baiduspider:
- curl -I -A 'Baiduspider' bizhi.bcoderss.com
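With the rules above in place, the empty-UA request should be rejected with HTTP 403, while Baiduspider, which appears on none of the blocklists, should receive a normal response; YisouSpider will likewise get a 403 only if you have added it to your UA list.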