1. Top 10 client IP addresses by number of requests
cat access.log | awk '{print $1}' | sort | uniq -c | sort -nr | head -10
2. Most-requested files or pages, e.g. the top 10 (field numbers depend on your log format: this command uses $11, while the later examples assume the URL is in $7)
cat access.log | awk '{print $11}' | sort | uniq -c | sort -nr | head -10
An awk-only variant that keeps the counts in an associative array instead of piping through sort and uniq:
cat access.log | awk '{counts[$11]+=1}; END {for (url in counts) print counts[url], url}'
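Once a heavy hitter shows up in the IP list, a natural follow-up is to see what that address was requesting; a minimal sketch, where 1.2.3.4 is a placeholder for the IP you found and the URL is assumed to be in $7:
awk '($1 == "1.2.3.4"){print $7}' access.log | sort | uniq -c | sort -nr | head -10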
3. The largest .exe files transferred, useful when analyzing a download site
cat access.log | awk '($7 ~ /\.exe/){print $10 " " $1 " " $4 " " $7}' | sort -nr | head -20
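To rank the same files by total bytes served rather than by the single largest transfer, an awk array can sum the sizes per URL (a sketch, assuming as above that $10 is the byte count and $7 the request path):
cat access.log | awk '($7 ~ /\.exe/){sum[$7]+=$10} END {for (f in sum) print sum[f], f}' | sort -nr | head -20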
4. .exe files larger than 100000 bytes (about 100 KB) and how many times each was served
cat access.log | awk '($10 > 100000 && $7 ~ /\.exe/){print $7}' | sort | uniq -c | sort -nr | head -50
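A one-pass variant that reports, for each large .exe, both the hit count and the biggest single transfer seen (same field assumptions as above):
cat access.log | awk '($10 > 100000 && $7 ~ /\.exe/){n[$7]++; if ($10 > max[$7]) max[$7]=$10} END {for (f in n) print n[f], max[f], f}' | sort -nr | head -20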
5. If the last field of the log line records the page transfer time, list the pages that are slowest to reach the client
cat access.log | awk '($7 ~ /\.php/){print $NF " " $1 " " $4 " " $7}' | sort -nr | head -50
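Single slow requests can be outliers; a sketch that instead averages the time per page, under the same assumption that $NF is the transfer time:
cat access.log | awk '($7 ~ /\.php/){t[$7]+=$NF; n[$7]++} END {for (p in n) printf "%.2f %d %s\n", t[p]/n[p], n[p], p}' | sort -nr | head -20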
6. List the slowest pages (taking more than 60 seconds) and how many times each occurred
cat access.log | awk '($NF > 60 && $7 ~ /\.php/){print $7}' | sort | uniq -c | sort -nr | head -100
7. List files whose transfer time exceeded 30 seconds
cat access.log | awk '($NF > 30){print $7}' | sort | uniq -c | sort -nr | head -20
8. Total site traffic in GB
cat access.log | awk '{sum+=$10} END {print sum/1024/1024/1024}'
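To break the traffic down by day, the date can be cut out of the timestamp field; a sketch assuming $4 looks like [10/Oct/2000:13:55:36, so characters 2 through 12 are the date:
cat access.log | awk '{day = substr($4, 2, 11); sum[day]+=$10} END {for (d in sum) printf "%s %.3f GB\n", d, sum[d]/1024/1024/1024}' | sort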
9. List 404 responses
awk '($9 == 404){print $9, $7}' access.log | sort
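Often more useful is ranking which URLs produce the most 404s, using the same $9/$7 fields:
awk '($9 == 404){print $7}' access.log | sort | uniq -c | sort -nr | head -20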
10. Tally HTTP status codes
cat access.log | awk '{counts[$9]+=1}; END {for (code in counts) print code, counts[code]}'
cat access.log | awk '{print $9}' | sort | uniq -c | sort -rn
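To see each status code as a share of all requests, a small extension of the array version above:
cat access.log | awk '{counts[$9]++; total++} END {for (code in counts) printf "%s %d %.2f%%\n", code, counts[code], 100*counts[code]/total}' | sort -k2 -nr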
11. Crawler analysis: see which spiders are fetching your content
/usr/sbin/tcpdump -i eth0 -l -s 0 -w - dst port 80 | strings | grep -i user-agent | grep -i -E 'bot|crawler|slurp|spider'
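The tcpdump approach needs root and only sees live traffic; if your log uses the combined format, the user agent is the sixth double-quoted field, so the same spiders can be pulled from access.log after the fact (a sketch assuming that format):
awk -F'"' '{print $6}' access.log | grep -i -E 'bot|crawler|slurp|spider' | sort | uniq -c | sort -nr | head -20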