Solaris dump analysis

I had to debug a Solaris crash dump and had no idea where to start. Google searches were not much help until I finally found this article:

http://cuddletech.com/blog/?p=448

Have a look at it; it explains how to analyze the dump, find the offending process, and get to the root cause in case of a kernel panic.
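Solaris crash dumps are normally examined with mdb on the files that savecore leaves behind. A rough sketch of such a session, assuming the dump was saved as unix.0 and vmcore.0 under /var/crash/<hostname> (names and paths will differ on your system):

cd /var/crash/`hostname`
mdb -k unix.0 vmcore.0
> ::status      - panic string and dump summary
> ::msgbuf      - kernel messages leading up to the panic
> ::stack       - stack trace of the panicking thread
> ::ps          - processes that were running at the time of the crash

The "> " is just the mdb prompt, and the text after the dash describes what each dcmd shows; the article goes into much more detail.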


Unix shell script for removing duplicate files

The following shell script finds duplicate (two or more identical) files and writes a new shell script containing rm statements for deleting them; review the generated script before running it:

::: Updated on 02 May 2012: WordPress did not render the code well, so I have reformatted it. :::

#!/bin/bash -
#===============================================================================
#
#          FILE:  a.sh
#
#         USAGE:  ./a.sh
#
#   DESCRIPTION:
#
#       OPTIONS:  ---
#  REQUIREMENTS:  ---
#          BUGS:  ---
#         NOTES:  ---
#        AUTHOR: Amit Agarwal (aka), amit.agarwal@roamware.com
#       COMPANY: blog.amit-agarwal.co.in
#       CREATED: 02/05/12 06:52:08 IST
# Last modified: Wed May 02, 2012  07:03AM
#      REVISION:  ---
#===============================================================================

OUTF=rem-duplicates.sh;
echo "#!/bin/sh" >$OUTF;
# md5sum every file, sort so identical hashes sit together, emit one line per
# duplicate group (uniq -w 32 -d), strip the 32-char hash and prepend "rm -f".
# Only one copy per group is listed; review $OUTF before running it.
find "$@" -type f -exec md5sum {} \; 2>/dev/null | sort --key=1,32 | uniq -w 32 -d | cut -b 1-32 --complement | sed 's/^/rm -f/' >>$OUTF

Pretty good one-liner, I must say 🙂
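To use it, something along these lines should work (the directory names are only examples):

chmod +x a.sh
./a.sh ~/Downloads ~/Pictures    # scan these directories for duplicates
less rem-duplicates.sh           # check which files would be deleted
sh rem-duplicates.sh             # delete them once you are satisfied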


Terminating an SSH session after starting a background process

This is a good one. If you start a background process from a bash script over SSH and expect the session to close when the script finishes, it will not…….

You need to close (or redirect) stdin, stdout, and stderr before the SSH session can terminate on its own. Here is some more light on the topic:

http://lists.debian.org/debian-user/2005/09/msg00254.html
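In short, the fix is to detach the remote command from the terminal by redirecting its file descriptors, optionally wrapping it in nohup. A minimal sketch, with a hypothetical host, script, and log file:

ssh user@remotehost 'nohup /path/to/long_job.sh </dev/null >/tmp/long_job.log 2>&1 &'

With stdin, stdout, and stderr all pointed away from the tty, the ssh session returns as soon as the remote shell exits, while long_job.sh keeps running. The relevant messages from the thread follow.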

On Thu, Sep 01, 2005 at 05:33:28PM -0400, Roberto C. Sanchez wrote:
> I occasionally log into a machine remotely and start a process in the
> background:
>
> command &
>
> However, when I log out of the machine, the ssh process on my local
> machine blocks. I guess that it is because the remote still has jobs
> running. Is there a way to get it to start the process in the background
> and then detach from the shell? I have already tried this:

This is often caused because the process still has a file descriptor (FD) referencing the tty. Ssh doesn’t like to terminate when this occurs, because there’s a chance that the FD could still be required. If this is the case, then you probably just need to redirect the usual suspects… stdin, stdout, and/or stderr. Something like:

$ command </dev/null >&0 2>&0 &

It may not be necessary to redirect all of them, but that will probably require some experimentation to determine. You may want to consider running your command under “nohup” as well, to protect it from the loss of your session.

Greg Norris wrote:
> Roberto C. Sanchez wrote:
> > I occasionally log into a machine remotely and start a process in the
> > background:
> >
> > command &
> >
> > However, when I log out of the machine, the ssh process on my local
> > machine blocks.
>
> This is often caused because the process still has a file descriptor (FD)
> referencing the tty. Ssh doesn’t like to terminate when this occurs,
> because there’s a chance that the FD could still be required.

Agreed. SSH must keep the connection open in that case. This commonly trips people up. You can use the ~# command to see what connections are still open. You can forcibly terminate the connection with ~. (~? gives help.)

> If this is the case, then you probably just need to redirect the
> usual suspects… stdin, stdout, and/or stderr. Something like:
>
> $ command </dev/null >&0 2>&0 &

With BSD job control it is also necessary to disassociate from the controlling terminal. Otherwise the command will still be attached to the tty device, as you can see from a ps listing. The ‘at now’ suggestion previously posted is a common but clever way to launch a command as a daemon without associating it with a tty, which avoids the problem.

> It may not be necessary to redirect all of them, but that will probably
> require some experimentation to determine. You may want to consider
> running your command under “nohup” as well, to protect it from the loss
> of your session.

Before BSD job control the nohup command was sufficient. Just ignore the SIGHUP produced when you logged out and the job would keep running. But with job control kernels nohup is not sufficient. It does redirect input and output and so helps there. But it does not disassociate from the tty.

Bob
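For completeness, the ‘at now’ trick Bob mentions would look something like this (hypothetical script path and log file):

echo '/path/to/long_job.sh > /tmp/long_job.log 2>&1' | at now

Because the job is executed by the at/cron daemon rather than by your login shell, it never has a controlling tty in the first place, so the SSH session can be closed immediately.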
