Linux: Group and Count
The one-liner below is useful when you need to quickly group something and count the matches, for example in Apache logs.
For the examples I will use log entries in the following format, stored in access.log:
xxx.xxx.xxx.xxx - - [02/Oct/2015:12:11:07 +0300] "GET /program-weather.html HTTP/1.1" 200 108208 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0"
xxx.xxx.xxx.xxx - - [02/Oct/2015:20:25:57 +0300] "GET /img/favicon.ico HTTP/1.1" 200 894 "-" "Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 Firefox/40.0"
xxx.xxx.xxx.xxx - - [02/Oct/2015:20:31:58 +0300] "GET / HTTP/1.1" 200 22857 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:41.0) Gecko/20100101 Firefox/41.0"
- Task 1 - count the number of requests for favicon.ico
# grep favicon.ico access.log | wc -l
result:
779
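As a small variation, grep's own `-c` option counts matching lines itself, which makes the `wc -l` stage unnecessary. A minimal sketch (the sample log is created inline so the snippet runs standalone; the entries are shortened, hypothetical ones):

```shell
# Build a tiny sample access.log (hypothetical, simplified entries).
printf '%s\n' \
  'x - - [d] "GET /img/favicon.ico HTTP/1.1" 200 894 "-" "UA"' \
  'x - - [d] "GET / HTTP/1.1" 200 22857 "-" "UA"' \
  'x - - [d] "GET /img/favicon.ico HTTP/1.1" 200 894 "-" "UA"' > access.log

# -c prints the number of matching lines; same result as grep ... | wc -l.
grep -c favicon.ico access.log
# -> 2
```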
- Task 2 - count the requests for favicon.ico, grouped by browser version
# grep favicon.ico access.log | awk -F\" '{print $6}' | sed 's/(\([^;]\+\)[^)]*)//' | awk '{print $3}' | sort | uniq -c | sort -rn
result:
377 Firefox/40.0
361 Firefox/41.0
35 Firefox/38.0
4 YaBrowser/15.7.2357.2877
1 IceDragon/38.0.5
1 YaBrowser/14.5.1847.18274

- Task 3 - count all requests, grouped by browser version
# awk -F\" '{print $6}' access.log | sed 's/(\([^;]\+\)[^)]*)//' | awk '{print $3}' | sort | uniq -c | sort -rnрезультат:
6901 Firefox/40.0
5197 Firefox/41.0
541 Firefox/38.0
259 YaBrowser/15.7.2357.2877
54 IceDragon/38.0.5
52 YaBrowser/14.5.1847.18274
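To see why the pipeline in Tasks 2 and 3 ends up with just the browser token, it can help to run each stage separately on a single sample line. A sketch (the log line is a hypothetical one in the same format as above; the sed pattern uses `\+`, as in the original, which is a GNU sed extension):

```shell
# One sample access.log line (hypothetical).
line='x - - [02/Oct/2015:20:25:57 +0300] "GET /img/favicon.ico HTTP/1.1" 200 894 "-" "Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 Firefox/40.0"'

# Stage 1: split on double quotes; field 6 is the user agent string.
echo "$line" | awk -F\" '{print $6}'
# -> Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 Firefox/40.0

# Stage 2: sed removes the parenthesised platform details.
echo "$line" | awk -F\" '{print $6}' | sed 's/(\([^;]\+\)[^)]*)//'
# -> Mozilla/5.0  Gecko/20100101 Firefox/40.0

# Stage 3: of what remains, field 3 is the browser/version token.
echo "$line" | awk -F\" '{print $6}' | sed 's/(\([^;]\+\)[^)]*)//' | awk '{print $3}'
# -> Firefox/40.0
```

From there, `sort | uniq -c | sort -rn` just counts the identical tokens and orders the counts descending.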