get the contents of a whole site like some wiki or wikia

For wikis and Wikia sites, if you are trying to mirror some URL, then websucker.py is an excellent option. This script ships with the Python sources, so to get the tool, first download the source package:

yumdownloader --source python

Install the source rpm that gets downloaded into the current directory and then go to ~/rpmbuild/SOURCES. You should find a Python-*.tar.xz file there; just extract it with

tar xvf Python*.tar.xz

and there you go, you should find the tool in Tools/webchecker/websucker.py.
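
Once extracted, a minimal run might look like the sketch below. The wiki URL is only a placeholder and the exact options differ between Python versions, so check the usage message the script itself prints:

cd Python-*/Tools/webchecker
python websucker.py http://somewiki.example.com/

The script crawls the site from that root URL and saves the pages locally, giving you an offline mirror to browse.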


get all the URLs in an HTML file (local or on a server).

To use this, you will need the lynx tool, so install that first.

sudo yum install lynx

Now, to get a list of all the URLs in a local HTML file or at some URL, just execute this:

lynx -dump -listonly <file-or-URL>
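
For example, to pull just the link targets out of the listing, something like this works; the URL is a placeholder, and the awk step assumes lynx's usual numbered "References" output:

lynx -dump -listonly http://example.com/ | awk '/^ *[0-9]+\./ {print $2}'

The same command with a local path such as page.html in place of the URL lists the links in that file.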


nautilus fork (file manager) with tree view in sidebar.

Was searching for this for some time now; finally found it.

sudo yum install nemo
sudo yum list nemo*

First just install nemo. Then configure nemo not to interfere with the default desktop and make it the default handler. Here are the settings that will do it:

gconftool-2 --set  /desktop/gnome/applications/component_viewer/exec --type 'string' 'nemo "%s"'
gconftool-2 --set  /desktop/gnome/url-handlers/trash/command --type 'string' 'nemo "%s"'
gsettings set org.nemo.desktop show-desktop-icons false
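
If you want to confirm that the desktop-icons setting took effect, you can simply read the key back:

gsettings get org.nemo.desktop show-desktop-icons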

Now, if you need more functionality in the file manager, check the list of nemo packages in the output of the second yum command above; there are plugins for file preview and so on, as in the sketch below. Install and enjoy.
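
The exact plugin package names vary by release, so treat these as examples and adjust them to whatever yum list nemo* actually shows on your system:

sudo yum install nemo-preview nemo-fileroller nemo-image-converter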
