cvs add files recursively – not already in repository

When you have a lot of files in a repository and have added a couple of new ones to your working copy, CVS has no single command to add just the new files to the repository, so here is a workaround for that.

cvs status 2>/dev/null | awk '{if ($1=="?")print "cvs add -kb " $2}'

Well, if you are adding text files then you will want to drop the “-kb” (which marks files as binary) from the generated cvs add commands above.
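If you would rather run the generated commands in one go instead of copy-pasting them, you can pipe the output straight to a shell. A minimal sketch of that, assuming you run it from the top of your checked-out working copy (keep or drop -kb as discussed above):

cvs status 2>/dev/null | awk '{if ($1=="?") print "cvs add -kb " $2}' | sh

It is worth looking at the generated list once without the final | sh, since this will add every unknown file it finds, including any local scratch files.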


create text tables from delimited files.

To create simple text tables to paste into emails or into any other document where you want to show a table, there is a Perl module, Text::RecordParser, which provides a tool called “tablify”. Here is how to install it (the package name below is for yum-based distributions):

sudo yum install perl-Text-RecordParser

This installs a command called “tablify” that you can use in a number of ways; read its man page for the full set of options. Here is a simple example:

: tmp ; cat > b.tab << EOF
1	2001
2	3001
3	5001
4	1001
EOF

: tmp ; tablify --no-headers b.tab 
+--------+--------+
| Field1 | Field2 |
+--------+--------+
| 1      | 2001   |
| 2      | 3001   |
| 3      | 5001   |
| 4      | 1001   |
+--------+--------+
4 records returned
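
By default tablify treats the first line of the file as column headers, so if your delimited file already carries column names you can simply drop the --no-headers flag. A small sketch of that, using a hypothetical file c.tab with a header line (the output shown is roughly what you should get):

: tmp ; cat > c.tab << EOF
id	year
1	2001
2	3001
EOF

: tmp ; tablify c.tab
+----+------+
| id | year |
+----+------+
| 1  | 2001 |
| 2  | 3001 |
+----+------+
2 records returned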


analyze debug queries output for wordpress

Some time back, my website became too slow and I started getting timeout responses for quite a lot of my pages. When I analyzed things, I found that the issue was DB queries taking a lot of time. So I thought of getting my hands dirty and started by installing the plugin “Debug Queries”. Just in case you don’t know about the plugin, it lists all the queries to the DB, along with the time taken by each query, whenever an admin user visits any page.

The plugin prints its output at the bottom of the page, and each entry looks something like this:

45. Time: 0.0030910968780518
Query: SELECT * FROM <>  WHERE <>
Call from: require

Note: the list contains the actual complete query and also all the calls leading to it. But as I had more than 40-odd queries, scanning all those 0.00-something values to find the highest time was tiresome. So I copied this text into a text file called “test” and wrote this one-liner to sort the times for me:

sed -n '/.*Time:/ s/.*://p' test | sort -n
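
If you only care about the few slowest queries, the same one-liner can be capped with tail; a small variation on the above:

sed -n '/.*Time:/ s/.*://p' test | sort -n | tail -5

which prints just the five largest times.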

Once you have the highest time, simply search or grep for that value in the file and you know which query is taking the longest.
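
A minimal sketch of that lookup, assuming the plugin output is still in the file “test” and taking the 0.0030910968780518 entry above as the slowest one found:

grep -A2 '0.0030910968780518' test

The -A2 makes grep print the matching Time: line together with the two lines that follow it, i.e. the query itself and the Call from line.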
