Sum the numbers in the first column of a file:
$ awk 'BEGIN { tot=0 } { tot+=$1 } END { print tot }' filename
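A quick sanity check with a hypothetical three-line input:

```shell
# Sum column 1 of a small sample fed in on stdin
printf '10 a\n20 b\n12 c\n' | awk 'BEGIN { tot=0 } { tot+=$1 } END { print tot }'
# prints 42
```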
Only print certain lines of a file, e.g. lines 20-30:
$ sed -n -e 20,30p filename
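To see it work without a real file, generate 50 numbered lines with `seq` and pick out the range:

```shell
# Same sed syntax applied to a generated stream
seq 1 50 | sed -n -e '20,30p'
# prints the numbers 20 through 30, one per line (11 lines)
```

The `-n` suppresses sed's default printing, so only the addressed range reaches the output.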
Search all files in and below the current directory for a string xyz:
$ find . -type f -exec egrep xyz {} \; -print
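A minimal sketch of the same idea in a scratch directory (file names here are hypothetical), using `grep -l` to print only the names of matching files:

```shell
# Set up two throwaway files, one containing the string
dir=$(mktemp -d)
echo 'hello xyz' > "$dir/a.txt"
echo 'nothing here' > "$dir/b.txt"

# Only a.txt is listed, since only it contains xyz
find "$dir" -type f -exec grep -l xyz {} \;

rm -r "$dir"
```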
Remove all files named ".log.tmp":
$ find . -name .log.tmp -exec rm {} \;
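A safe way to try this is in a temporary directory (the file names below are just for illustration):

```shell
dir=$(mktemp -d)
touch "$dir/.log.tmp" "$dir/keep.txt"

# Delete only the .log.tmp file; keep.txt is untouched
find "$dir" -name .log.tmp -exec rm {} \;

ls -A "$dir"   # only keep.txt remains
rm -r "$dir"
```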
Search a file for any of three strings:
$ egrep 'abc|xyz|or' filename
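The `|` is alternation in an extended regular expression, so a line matches if it contains any of the three strings. A quick check on made-up input (`grep -E` is the modern spelling of `egrep`):

```shell
printf 'abc line\nno match\nxyz line\n' | grep -E 'abc|xyz|or'
# prints:
# abc line
# xyz line
```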
A unique blog for Unix, Linux based Tips, tricks and Shell Scripts. This is intended to be one-stop Information Center for all your Unix, Linux needs.
Showing posts with label awk. Show all posts
Wednesday, June 9, 2010
Wednesday, July 2, 2008
Removing non-consecutive duplicate lines from a file
The uniq command will "discard all but one of successive identical lines" from a file or input stream -- that is, it only removes duplicates that are adjacent.
To remove non-consecutive duplicate lines as well, use awk:
awk '!x[$0]++' FILE
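Here `x[$0]++` counts how many times each whole line has been seen; the expression is true (and awk prints the line) only on a line's first appearance. For example:

```shell
# Second occurrences of apple and banana are dropped,
# even though they are not adjacent
printf 'apple\nbanana\napple\ncherry\nbanana\n' | awk '!x[$0]++'
# prints:
# apple
# banana
# cherry
```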