Get disk usage for all the containers with python script

With my growing love for Python, here is my attempt to get the disk usage of all the containers on a host. Since requirements vary for everyone, this script is far from complete.

import docker

# Connect to the docker daemon. If yours listens on a different
# address (or on the local unix socket), change the URL below.
client = docker.DockerClient(base_url="tcp://")

# Get the list of all running containers.
cls = client.containers.list()

# And now we will iterate over that list to get stats for all the containers.
stats = {}
for val in cls:
    print(val.name)
    stats[val.name] = val.stats(stream=False)
    # Get the disk usage for / and /tmp from inside the container with
    # docker exec. The first line of df's output is the header, so keep
    # only the data line and split it into columns.
    df_cmd = ['df', '-kh', '--output=size,used,avail,pcent']
    stats[val.name]['df-root'] = val.exec_run(df_cmd + ['/']).output.decode().splitlines()[1].split()
    # df may produce no data line if /tmp is missing; fall back to an empty row.
    tmp_out = val.exec_run(df_cmd + ['/tmp']).output.decode().splitlines()
    stats[val.name]['df-tmp'] = tmp_out[1].split() if len(tmp_out) > 1 else []
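To eyeball what the loop collected, the whole dict can be dumped as JSON. A minimal sketch with a mocked entry (the real values come from the loop above; "web1" is just an assumed container name):

```python
import json

# Mocked example of what one entry in the stats dict may look like
# after the collection loop ("web1" is an assumed container name).
stats = {
    "web1": {
        "df-root": ["50G", "20G", "30G", "40%"],
        "df-tmp": ["1.0G", "12M", "1.0G", "2%"],
    }
}

# Pretty-print the collected data for a quick sanity check.
print(json.dumps(stats, indent=2))
```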

# Now that we have a dict of all the data, we can process it any way
# we like, for example build an HTML table for disk usage only.
print('<table>')
for st in stats:
    print('<tr>')
    print('<td>Root-%s</td>' % st)
    for i in stats[st]['df-root']:
        print('<td>%s</td>' % i)
    print('</tr>')
    print('<tr>')
    print('<td>tmp-%s</td>' % st)
    for i in stats[st]['df-tmp']:
        print('<td>%s</td>' % i)
    print('</tr>')

print('</table>')
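If the report should end up on a web server rather than on stdout, the same loop can build the table into a string and write it to a file. A sketch, where the output path and the sample data are made-up placeholders:

```python
# Build the table into a string instead of printing it, then write it
# to a file (path and sample data are made up for illustration).
stats = {
    "web1": {
        "df-root": ["50G", "20G", "30G", "40%"],
        "df-tmp": ["1.0G", "12M", "1.0G", "2%"],
    }
}

rows = []
for st in stats:
    for label, key in (("Root", "df-root"), ("tmp", "df-tmp")):
        cells = "".join("<td>%s</td>" % i for i in stats[st][key])
        rows.append("<tr><td>%s-%s</td>%s</tr>" % (label, st, cells))

html = "<table>%s</table>" % "".join(rows)

with open("/tmp/disk-usage.html", "w") as f:
    f.write(html)
```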

Easily monitor and archive your system log reports.

If you want to monitor your server logs and also have the reports emailed, Logwatch alone may not be sufficient: it sends you a mail but does not archive the reports. So head over to epylog.

Name        : epylog
Arch        : noarch
Epoch       : 0
Version     : 1.0.7
Release     : 9.fc22
Size        : 151 k
Repo        : fedora
Summary     : New logs analyzer and parser
URL         :
License     : GPLv2+
Description : Epylog is a new log notifier and parser which runs periodically out of
            : cron, looks at your logs, processes the entries in order to present
            : them in a more comprehensive format, and then provides you with the
            : output. It is written specifically with large network clusters in mind
            : where a lot of machines (around 50 and upwards) log to the same
            : loghost using syslog or syslog-ng.

To install:

dnf install epylog

After this you need to configure the directory for archiving and also the means of transport, which can be just file, or file plus email. In the second case, the reports are archived and an email is sent with a link to the report.


Here is the sample configuration that I am using:

[main]
cfgdir = /etc/epylog
tmpdir = /var/tmp
vardir = /var/lib/epylog

[report]
title = [Cron] ubu  @@HOSTNAME@@ system events: @@LOCALTIME@@
template = /etc/epylog/report_template.html
include_unparsed = yes
publishers = file

[mail]
method = mail
smtpserv = /usr/sbin/sendmail -t
mailto = root
format = html
lynx = /usr/bin/lynx
include_rawlogs = no
rawlogs_limit = 200
# GPG encryption requires pygpgme installed
gpg_encrypt = no
# If gpg_keyringdir is omitted, we’ll use the default ~/.gnupg for the
# user running epylog (/root/.gnupg, usually).
#gpg_keyringdir = /etc/epylog/gpg/
# List key ids, can be emails or fingerprints. If omitted, we’ll
# encrypt to all keys found in the pubring.
#gpg_recipients = [email protected], [email protected]
# List key ids that we should use to sign the report.
# If omitted, the report will not be signed, only encrypted.
#gpg_signers = [email protected]

[file]
method = file
path = /var/www/epylog
dirmask = %Y-%b-%d_%a
filemask = %H%M
save_rawlogs = no
expire_in = 700
notify = [email protected]
smtpserv = /usr/sbin/sendmail -t
pubroot =
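Epylog is meant to run periodically out of cron. A typical entry to generate a report every hour could look like the line below; the hourly schedule, the binary path, and the `--last hour` argument are assumptions here, and the packaged cron job may differ:

```
# Run epylog every hour over the logs of the last hour
0 * * * * root /usr/sbin/epylog --last hour
```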

After you are done, you might want to head over to the Fedora Wiki for Epylog and download the weed_local file and the pager file. The weed_local file contains the regexes for common messages that you do not want to see in the reports, so feel free to add yours. The pager file sets up a pager: download it, put it in the cgi-bin directory, configure the epylog data directory in it, and you are done.