A shell script that uses curl to monitor web pages on a Linux system.
The monitoring script is as follows:
#!/bin/bash
# smail() and ssms(): the mail/SMS alarm functions; their bodies were
# truncated in the original source.
smail() { :; }
ssms() { :; }
cd /home/maintain/gaojianwei/Script/
# Read "Ip Port URL" records from ${File}, skipping comments and blank
# lines (the probe loop body is missing from the original).
sed -e '/^#/d;/^$/d' ${File} | while read Ip Port URL
do
    :
done
# Record abnormal results in Curl_Out_1.txt: slower than 8 seconds,
# empty body, or a status code other than 200/301/302/401.
awk -F":" '{if(($1*1000<8000)&&($2>0)&&($3=="200"||$3=="301"||$3=="302"||$3=="401")) {} else {print $0 >> "Curl_Out_1.txt"}}' Curl_Out.txt
if [ -s Curl_Out_1.txt ];then
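The awk filter above splits each record on ":" and treats $1 as the response time in seconds, $2 as the bytes downloaded, and $3 as the HTTP status code. A hedged sketch of a probe that emits records in that shape (the -w format string is my assumption; the original loop body was lost):

```shell
#!/bin/sh
# Emit one "time_total:size_download:http_code" record per URL, the
# colon-separated layout the awk filter appears to expect (assumption).
probe() {
    curl -o /dev/null -s -m 10 --connect-timeout 10 \
         -w '%{time_total}:%{size_download}:%{http_code}\n' "$1"
}

# A closed local port fails fast and yields status code 000:
probe "http://127.0.0.1:1/"
```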
The core check is a single curl command that prints only the HTTP status code:
curl -o /dev/null -s -w %{http_code} http://www.jbxue.com
Testing it from the command line:
[coomix@localhost ~]$ echo `curl -o /dev/null -s -m 10 --connect-timeout 10 -w %{http_code} "http://www.jbxue.com/index.php"`
200
[coomix@localhost ~]$ echo `curl -o /dev/null -s -m 10 --connect-timeout 10 -w %{http_code} "http://www.jbxue.com/index5.php"`
404
The monitoring script (it reads the list of monitored machines from server.list):
#!/bin/sh
monitor_dir=/home/admin/monitor/
if [ ! -d $monitor_dir ]; then
    mkdir $monitor_dir
fi
cd $monitor_dir

web_stat_log=web.status
if [ ! -f $web_stat_log ]; then
    touch $web_stat_log
fi

server_list_file=server.list
if [ ! -f $server_list_file ]; then
    echo "`date '+%Y-%m-%d %H:%M:%S'` ERROR: $server_list_file does NOT exist!" >>$web_stat_log
    exit 1
fi

#total=`wc -l $server_list_file|awk '{print $1}'`
for website in `cat $server_list_file`
do
    url="http://$website/app.htm"
    server_status_code=`curl -o /dev/null -s -m 10 --connect-timeout 10 -w %{http_code} "$url"`
    if [ "$server_status_code" = "200" ]; then
        echo "`date '+%Y-%m-%d %H:%M:%S'` visit $website status code 200 OK" >>$web_stat_log
    else
        echo "`date '+%Y-%m-%d %H:%M:%S'` visit $website error!!! server can't connect at 10s or stopped responding at 10s, send alarm sms ..." >>$web_stat_log
        echo "!app alarm @136xxxxxxxx server:$website can't connect at 10s or stopped responding at 10s ..." | nc smsserver port &
    fi
done
exit 0
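The script expects server.list to contain one hostname or IP per line, each serving http://<host>/app.htm. A hypothetical example of the format (the hosts below are placeholders):

```shell
# Create a sample server.list: one hostname or IP per line.
cat > server.list <<'EOF'
www.jbxue.com
192.168.1.10
EOF
```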
The key is curl -o /dev/null -s -m 10 --connect-timeout 10 -w %{http_code} "$url": it checks whether the returned status code is 200, and if 200 is not returned within 10 seconds, an alarm is sent.
Schedule the script with cron:
crontab -e
*/10 * * * * /home/admin/app/bin/webstatus.sh
This runs the check once every 10 minutes.
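The same entry can also be installed without opening an editor; a sketch, assuming the script lives at /home/admin/app/bin/webstatus.sh as above (the append command is left commented out so the sketch has no side effects):

```shell
#!/bin/sh
# "*/10" in the minute field means: run at every 10th minute.
entry='*/10 * * * * /home/admin/app/bin/webstatus.sh'

# Append to the current user's crontab non-interactively:
# ( crontab -l 2>/dev/null; echo "$entry" ) | crontab -

echo "$entry"
```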
A variant that reads a URL list line by line follows the same pattern:

#!/bin/bash
while read URL; do
    curl -o /dev/null -s -m 10 --connect-timeout 10 -w "%{http_code}\n" "$URL"
done