Many oneliners rely on the magic of Perl's -i switch, which means that files supplied on the command line are edited in place as they are opened; if an extension is passed along with the switch, a backup copy with that extension is made first. -e specifies the Perl code to run. See perlrun(1) for any other switches used here - in particular, -n and -p make powerful allies for -i.
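For instance, the following (with a merely illustrative file name) deletes all comment lines from a file in place, while keeping a pristine copy under the added extension:

perl -ni.orig -e 'print unless /^#/' config.txt

Afterwards, config.txt holds the filtered content, while config.txt.orig retains the original.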
The unsurpassed power of Perl's RegularExpression flavour contributes a great deal to the usefulness of nearly every oneliner, so you will also want to read the perlretut(1) and perlre(1) manpages to learn about it.
perl -pi -e 's/foo/bar/' file
Does an in-place SED on the file. GNU sed(1) v4 also supports this with -i and will probably be quicker if all you need is a simple search and replace. However, Perl's RegularExpressions are more powerful and easier on the hands than the POSIX variety offered by SED. With GNU sed(1), you can use the -r switch to get an extended RegularExpression syntax, which also requires fewer backslashes than the POSIX flavour.
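For comparison, the same edit with GNU sed(1), including a backup copy:

sed -i.bak 's/foo/bar/' file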
perl -ni.bak -e'/\S/ && print' file1 file2
This deletes all lines that consist of nothing but whitespace from the given files, keeping a backup copy of each under an added .bak extension. In Shell:
for FILE in file1 file2 ; do mv "$FILE"{,.bak} ; grep '[^[:space:]]' "$FILE.bak" > "$FILE" ; done
perl -00 -pi.bak -e1 file1 file2
Note the use of 1 as a no-op piece of Perl code. In this case, the -00 and -p switches already do all the work (reading the input in paragraph mode collapses any run of consecutive blank lines into a single one), so only a dummy program needs to be supplied.
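The -00 is simply shorthand for turning on paragraph mode by setting the $/ input record separator to the empty string, so a longhand spelling of the same program would be:

perl -pi.bak -e 'BEGIN { $/ = "" }' file1 file2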
perl -e 'printf "%08b\n", $_ for unpack "C*", shift' 'My String'
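This prints the binary representation of each byte of the string given on the command line, one octet per line. For 'My String' the output begins

01001101
01111001
00100000

for the "M", the "y" and the space, and so on through the remaining bytes.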
perl -pe 's!\\n!\n!g; s!\\t!\t!g' $file
Note that you can use any punctuation as the separator in an s/// command; if your pattern contains backslashes, or would otherwise need literal slashes escaped, picking a different separator can increase clarity considerably.
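For example, a substitution on slash-heavy paths (the paths here are merely illustrative) is far easier to read with braces as delimiters:

perl -pe 's{/usr/local/bin}{/opt/bin}g' file

compared to s/\/usr\/local\/bin/\/opt\/bin/g with the default slashes.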
This assumes that each of the input rows is exactly the same length (in terms of number of items) and that the items are separated by whitespace. It is useful if you have data in tabular form, but need it to be in columns instead (e.g. you want to use it as input to GnuPlot).
perl -e '@rows = (); while ($l = <>) { @line = split(/\s+/, $l); push @rows, [ @line ] } for $i (0 .. $#{ $rows[0] }) { for $row (@rows) { print $row->[$i] . "\t" } print "\n" }'
Alternatively you can let Perl do the drudge work for you. In the following, -n implies the while (<>) { } loop, and -a together with -Fregex implies the split (the result is stored in the predefined @F array). Anyone who is at all familiar with AWK should follow along easily.
perl -aF'\s+' -ne'push @rows, [ @F ]; END { for $i ( 0 .. $#{ $rows[0] } ) { for $cols ( @rows ) { print $cols->[ $i ] . "\t" } print "\n" } }'
Both of these will read whitespace-separated tabular data from stdin(3) or from the files passed, and will write tab-separated tabular data to stdout(3).
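As a quick illustration with made-up data:

printf '1 2 3\n4 5 6\n' | perl -aF'\s+' -ne'push @rows, [ @F ]; END { for $i ( 0 .. $#{ $rows[0] } ) { for $cols ( @rows ) { print $cols->[ $i ] . "\t" } print "\n" } }'

prints the transposed table: 1 and 4 on the first line, 2 and 5 on the second, 3 and 6 on the third (with each item, including the last on each line, followed by a tab).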
This is useful if you suspect that ps(1) is not reliable, whether due to a RootKit or some other cause. It prints the process ID and command line of every running process on the system (except for some "special" kernel processes, which lie about or simply lack command lines).
perl -0777 -pe 'BEGIN { chdir "/proc"; @ARGV = sort { $a <=> $b } glob("*/cmdline") } $ARGV =~ m!^(\d+)/!; print "$1\t"; s/\0/ /g; $_ .= "\n";'
It runs an implicit loop over the /proc/*/cmdline files, by priming @ARGV with a list of files sorted numerically (which needs to be done explicitly using <=> -- the default sort is ASCIIbetical) and then employing the -p switch. -0777 forces files to be slurped wholesale. Per file, the digits that lead the filename are printed, followed by a tab. Since a null separates the arguments in these files, all of them are replaced by spaces to make the output printable. Finally, a newline is appended. The print call implicit in the -p switch then takes care of outputting the massaged command line.
See above for what this does. The how is different, though.
perl -MFile::Slurp -0le 'for(sort { $a <=> $b } grep !/\D/, read_dir "/proc") { @ARGV = "/proc/$_/cmdline"; printf " %6d %s\n", $_, join(" ", <>); }'
This time the loop is explicit. Again, there are two parts to the program -- selecting files and doing I/O on them.
To read the directory, a convenience function is pulled in from the File::Slurp module, loaded using the -M switch. The module is not part of the core distribution, but is readily available from CPAN. Reading a directory manually is straightforward, but the code would be longer and clumsier, as the sketch below shows.
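For comparison, a merely illustrative longhand version of the same program that uses the core opendir and readdir builtins instead of File::Slurp:

perl -0le 'opendir my $dh, "/proc" or die "/proc: $!"; for (sort { $a <=> $b } grep !/\D/, readdir $dh) { @ARGV = "/proc/$_/cmdline"; printf " %6d %s\n", $_, join(" ", <>) }'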
Selecting the files is pretty simple, if a little obtuse: it's done by reading the contents of /proc, then using grep !/\D/ to reject from the list any entries that contain non-digit characters. The results are then sorted numerically, which needs to be done explicitly using the <=> operator because the default sort is ASCIIbetical. Each entry is then interpolated into a full path and stuck into @ARGV one by one, from where the <> "diamond operator" will pick it up, auto-open it and read it for us, even autoreporting any errors in a nicely verbose format.
Producing human-readable output is a little more involved, using switches to abbreviate a bit of magic. The -0 switch sets the $/ variable: here, because the switch is not followed by a digit, it sets the variable to a null character. This means that null characters will be regarded as line separators on input. The -l switch has two effects, of which only one is relevant to us: it automatically removes line terminators from lines read using the diamond operator. (The other is to set the $\ variable, which we aren't interested in or affected by.)
Note that the -0 and -l switches are order sensitive, both in syntax and in semantics. We order them for syntax here: both can accept an octal number as an argument, but we don't want to pass one to either of them (in particular, -0 would be mistaken for a digit argument to -l if we turned them around).
Together, these switches effectively mean that we get null-terminated lines from files, with the nulls removed on input. So we get the command line arguments listed in a /proc/*/cmdline file as a nice list of separate strings. And because join() puts the <> in list context, it returns all "lines" (i.e. command line arguments) at once, which join() then dutifully puts together with spaces between them.
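Spelled out longhand, without the switches, the input processing amounts to something like the following sketch (which, merely for illustration, reads the script's own command line; Linux only):

perl -e '$/ = "\0"; open my $fh, "<", "/proc/$$/cmdline" or die "cmdline: $!"; my @args = <$fh>; chomp @args; print join(" ", @args), "\n"'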
The printf(3) is straightforward.