Thursday, October 15, 2009
Run hddtemp as user
# hddtemp /dev/sda
This method has one disadvantage - hddtemp must be executed as root, otherwise you will get
/dev/sda: open: Permission denied
But there is a quick solution for getting the temperature as a normal user. Most Linux distributions ship an hddtemp daemon; after you start it, you should be able to use plain telnet or netcat as a normal user to get your hard drive temperature:
$ telnet localhost 7634
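The same query works with netcat. The daemon answers with one |-separated line per drive, so the temperature can be extracted directly; a rough sketch, assuming the usual |/dev/sda|MODEL|36|C| reply format (the exact fields may differ on your system):
$ nc localhost 7634 | cut -d'|' -f4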
Wednesday, October 14, 2009
Google Custom Search Results per Page
<script type="text/javascript" src="http://www.google.com/afsonline/show_afs_search.js"></script>
Now open this script in a browser and look at the code itself. This line does the trick:
h=(h=b.googleNumSearchResults)?Math.min(h,20):10
It checks whether the variable googleNumSearchResults is defined; if it is, the number of search results per page is set to the minimum of googleNumSearchResults and 20, otherwise it defaults to 10. So you can easily customize the results count by adding this line to your custom search form code (remember you cannot get more than 20 results per page):
var googleNumSearchResults = 15;
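The variable just has to be defined before show_afs_search.js runs. A rough sketch of how it fits into the custom search form code (the other Google-generated variables are omitted here):
<script type="text/javascript">
var googleNumSearchResults = 15;
// ... the rest of the variables generated for your search element ...
</script>
<script type="text/javascript" src="http://www.google.com/afsonline/show_afs_search.js"></script>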
Tuesday, October 13, 2009
Mythfrontend for Windows

MythTv Player latest version
Saturday, October 10, 2009
Increase USB Flash Drive Write Speed
Okay, so I bought a Transcend 8GB USB flash stick. It came formatted with a FAT32 filesystem, so I decided to run a read/write speed test first. Mount the filesystem and execute the following:
# hdparm -t /dev/sdb
/dev/sdb:
Timing buffered disk reads: 102 MB in 3.05 seconds = 33.43 MB/sec
$ dd count=100 bs=1M if=/dev/urandom of=/media/disk/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 29.5112 s, 3.6 MB/s
The disk read speed is good enough, but the write speed is not so good. That's because most NAND flash drives (the most commonly used flash sticks) have a 128k erase block size, while filesystems usually use a 4k (4096 bytes) block size. And here we run into a problem: if the filesystem blocks are not aligned to the flash drive's erase blocks, every write pays an extra overhead. So what we can do is align the filesystem properly. The best way to do this is to use 224 (32*7) heads and 56 (8*7) sectors/track. This produces 12544 (256*49) sectors/cylinder, so with 512-byte sectors every cylinder is exactly 49*128k and partitions created on cylinder boundaries stay 128k-aligned.
# fdisk -H 224 -S 56 /dev/sdb
Now turn on expert mode in fdisk and force the partition to begin on a 128k boundary. In my case I set the new beginning of data to sector 256 (256 * 512 bytes = 128k). Create as many partitions as you need (I created only one - /dev/sdb1).
Do not forget to save the changes and write the new layout to the flash drive (all data on the flash disk will be lost).
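For reference, the expert-mode dialogue goes roughly like this (a sketch, not a full transcript; prompts vary between fdisk versions):
Command (m for help): n          (create the partition)
Command (m for help): x          (switch to expert mode)
Expert command (m for help): b   (move beginning of data in a partition)
Partition number (1-4): 1
New beginning of data: 256       (256 sectors * 512 bytes = 128k)
Expert command (m for help): r   (return to the main menu)
Command (m for help): w          (write the new layout and exit)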
Now it's time to create the filesystem. I used ext4 because it lets you specify a stripe width to keep the filesystem aligned: with 4k blocks, a stripe width of 32 blocks matches the 128k erase block size.
# mke2fs -t ext4 -E stripe-width=32 -m 0 /dev/sdb1
Now let's mount the filesystem and test the overall performance again.
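Assuming the same mount point as in the first test, the mount step is simply:
# mount /dev/sdb1 /media/disk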
# hdparm -t /dev/sdb
/dev/sdb:
Timing buffered disk reads: 102 MB in 3.01 seconds = 33.94 MB/sec
$ dd count=100 bs=1M if=/dev/urandom of=/media/disk/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 17.0403 s, 6.2 MB/s
As we can see, the data read performance is almost the same while the write speed is considerably faster.
Thursday, October 1, 2009
Directory Checksum
Recently I found suspicious files on some of my websites. After some research I discovered that most of my own HTML and PHP files had also been modified and now contained base64-encoded code. So I decided to write a universal script that lets me take a fingerprint of a site and then manually check it for changes every week. This PHP script takes the md5sum of every file in the specified directory (including subdirectories) and saves the result in a data file. The next time you run it, it shows new files, files that were not changed, and files that WERE CHANGED. The output and some other options can be customized inside the code itself. Anyway, if you have ssh access to your webserver, you can do almost the same by running:
find test5 -type f | xargs md5sum
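To do the weekly check by hand over ssh, it is enough to save that output once and diff against it later; a rough sketch (the file names are just examples):
find test5 -type f | xargs md5sum > fingerprints.new
diff fingerprints.old fingerprints.new
mv fingerprints.new fingerprints.old
And here is the script itself: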
<?php
# comment this line out if you want to debug the script
error_reporting(0);

function lookDir($path) {
    $handle = @opendir($path);
    if (!$handle)
        return false;
    while (($item = readdir($handle)) !== false) {
        if ($item != "." && $item != "..") {
            if (is_dir($path."/".$item))
                lookDir($path."/".$item);
            else
                checkFile($path."/".$item);
        }
    }
    closedir($handle);
    return true;
}

function checkFile($file) {
    global $hashes;
    global $output;
    global $force_update;
    if (!is_readable($file))
        return;
    if (!isset($hashes[$file])) {
        # file seen for the first time
        $hashes[$file] = md5_file($file);
        if ($output["new"])
            echo $file."\t\tNew\n";
    } elseif ($hashes[$file] == md5_file($file)) {
        # checksum matches the stored fingerprint
        if ($output["success"])
            echo $file."\t\tSuccess\n";
    } else {
        # checksum differs from the stored fingerprint
        if ($output["failed"]) {
            if ($force_update) {
                $hashes[$file] = md5_file($file);
                echo $file."\t\tUpdate forced\n";
            } else {
                echo $file."\t\tFailed!\n";
            }
        }
    }
}

# directory for checking integrity
$dir = "./test5";
# file for storing fingerprints, should be writable in case of fingerprint updates
$file = "./fingerprints";
# set this value to false if you do not want to update fingerprints
$can_update = true;
# set this value to true if you want to update fingerprints of modified files
# you should do this only if you have modified the files yourself
$force_update = false;
# the output parameters
$output["new"] = true;
$output["success"] = true;
$output["failed"] = true;

header("Content-Type: text/plain");
$hashes = unserialize(file_get_contents($file));
if (!$hashes || !is_array($hashes))
    $hashes = array();
if (!lookDir($dir))
    echo "Could not open the directory ".$dir."\n";
if ($can_update) {
    if (file_put_contents($file, serialize($hashes)))
        echo "Fingerprints were updated\n";
    else
        echo "The file cannot be opened for writing! Fingerprints were not updated\n";
} else {
    echo "Fingerprints were not updated\n";
}
?>
.htaccess remove authentication from child directory
Satisfy Any
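With Satisfy Any, Apache grants access when either the host-based rules or the authentication requirements are met, so a permissive Allow rule in the child directory effectively turns the parent's password protection off there. A rough sketch for Apache 2.2-style configuration (the directory is just an example):
# .htaccess in the child directory that should stay public
Satisfy Any
Order allow,deny
Allow from all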
Mysqld-bin logs problem
mysql> flush logs;
mysql> reset master;
That's it! After that all the mysqld-bin log files should be removed. You can also disable binary logging completely by commenting out the log-bin line in my.cnf and restarting the MySQL server daemon.
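For reference, the relevant my.cnf fragment would look roughly like this (the exact file path and option value depend on your distribution):
[mysqld]
# comment out (or remove) this line to disable binary logging
#log-bin = mysql-bin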