cool sed/grep magic to convert output to CSV format

I keep doing this a lot, so I thought I'd share it. Let's assume we are capturing free output every minute/hour/whatever. The output looks like this:

Time: Mon Jan 21 23:59:10 AEDT 2019
-------------------

              total        used        free      shared  buff/cache   available
Mem:          32014        8656        1735        1697       21621       21308
Swap:         51195          75       51120
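
(For context, the kind of capture loop that produces a log like this might look something like the sketch below; the interval, free flags and log name are placeholders, not necessarily the exact setup used here.)

# hypothetical capture loop: timestamp + free output every minute
while true
do
    echo "Time: $(date)"
    echo "-------------------"
    free -m
    echo
    sleep 60
done >> free.log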

Then we can use some grep and sed to convert that output into something like this:

Mon Jan 21 23:59:10 AEDT 2019,32014,8656,1735,1697,21621,21308

This is the command I used:

zgrep -E '^(Time|Mem):' free.20190121.gz | sed -E '/Mem/ s/\s+/,/g' | sed -E 's/^(Time|Mem):\s*//' | sed '$!N;s/\n//'

Explanation:

use zgrep to pull out only the lines starting with Time: or Mem: from the gzipped log
use the first sed to turn every run of whitespace on the Mem: line into a comma
use the second sed to strip the leading Time: / Mem: label
use the last sed to join each Time/Mem pair of lines into a single CSV row
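
If you prefer a single pass, the same transformation can also be written in awk; this is a rough equivalent I'd reach for, not the original command:

zcat free.20190121.gz | awk '
    /^Time:/ { sub(/^Time: */, ""); t = $0 }                          # remember the timestamp
    /^Mem:/  { print t "," $2 "," $3 "," $4 "," $5 "," $6 "," $7 }    # one CSV row per sample
'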

bash function for rpm whatprovides

Sometimes a simple one-liner function can save you a lot of time, like this one:

wps ()
{
    # which installed package provides this command on my PATH?
    rpm -q --whatprovides "$(which "$1")"
}   # ----------  end of function wps  ----------
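
For example:

wps rsync      # prints the package that owns the rsync binary on your PATH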

Directories with maximum number of files

A lot of the time I want to find the directories with the largest number of files in them, so I wrote this quick function to do exactly that:

 

function count_lines ()
{
    local oldIFS=$IFS
    local count=0
    IFS=$'\n'
    local dir=${1:-.}
    cd "$dir" || return
    # for every directory under $dir, print "<number of files> <directory>"
    find . -type d | while read -r line
    do
        echo "$(find "$line" -type f | wc -l) $line"
        # progress indicator goes to stderr so it does not pollute the sorted output
        printf "Directories :: %8d\r" "$count" >&2
        ((count++))
    done | sort -n
    IFS=$oldIFS
}   # ----------  end of function count_lines  ----------
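
Since the output is sorted by file count in ascending order, the heaviest directories come last, for example:

count_lines /var/log | tail -5    # the five directories with the most files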