Thursday, October 15, 2009

Run hddtemp as user

If you want to get the temperature of your hard drive, you can simply run

# hddtemp /dev/sda

This method has one disadvantage: hddtemp must be executed as root, otherwise you will get

/dev/sda: open: Permission denied

But there is a quick way to run hddtemp as a normal user. Most Linux distributions ship hddtemp with a daemon mode. Once the daemon is running, you should be able to use plain telnet or netcat as a normal user to get your hard drive temperature:

$ telnet localhost 7634
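The daemon's reply is a single '|'-separated record per drive, so netcat plus awk can pull out just the temperature. The sample reply in the comment below is illustrative, not output from a real drive:

```shell
# hddtemp's daemon replies with one '|'-separated record per drive, e.g.:
#   |/dev/sda|SAMSUNG HD501LJ|38|C|
# The temperature is the fourth field, so netcat + awk extracts it:
nc localhost 7634 | awk -F'|' '{ print $4 }'
```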

Wednesday, October 14, 2009

Google Custom Search Results per Page

If you are using Google Custom Search on your site, you may wonder how to change the number of search results shown per page. If you take a look at the custom search form HTML code, you will find this line

<script type="text/javascript" src=""></script>

Now open this script in a browser and look into the code itself. This line does the trick


It checks whether the variable googleNumSearchResults exists; if it does, the number of search results per page is set to the minimum of googleNumSearchResults and 20. If googleNumSearchResults is not set, the count defaults to 10. So you can easily customize the result count by adding this line to your custom search form code (remember, you can't get more than 20 results per page):

var googleNumSearchResults = 15;

Tuesday, October 13, 2009

Mythfrontend for Windows

I've been looking for a while for an application similar to mythfrontend that can play Live TV from a MythTV server (mythbackend) under Windows. There is a Windows port of mythfrontend, but it needs to be built from source with MinGW/MSys. So I searched for an easier way and found a pretty good application called MythTV Player. It doesn't require any additional software or multimedia codecs and simply works out of the box. It is still in development, so you may experience some issues during video playback, but after almost a week of heavy use there were only a few player hangs and no other noticeable bugs. Here is the screenshot and the download link

MythTV Player latest version

Saturday, October 10, 2009

Increase USB Flash Drive Write Speed

One of the biggest problems with USB flash drives is slow write speed. This article will guide you through a process that can possibly increase your flash stick's write speed.

Okay, first I bought a Transcend 8GB USB flash stick. It had been formatted with a FAT32 filesystem initially, so I decided to run a read/write speed test. Mount the filesystem and execute the following

# hdparm -t /dev/sdb

Timing buffered disk reads: 102 MB in 3.05 seconds = 33.43 MB/sec

$ dd count=100 bs=1M if=/dev/urandom of=/media/disk/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 29.5112 s, 3.6 MB/s

The disk read speed is good enough, but the write speed is not. That's because most NAND flash drives (the kind used in common flash sticks) have a 128k erase block size, while filesystems usually have a 4k (4096 bytes) block size. And here we run into the problem: if the filesystem blocks are not aligned to the flash drive's erase blocks, every write carries extra overhead. So what we can do is align the filesystem properly. A good way to do this is to use 224 (32*7) heads and 56 (8*7) sectors/track. This produces 12544 (256*49) sectors per cylinder, so every cylinder is exactly 49*128k.
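The arithmetic behind that geometry can be checked right in the shell (pure arithmetic, nothing drive-specific):

```shell
# 224 heads * 56 sectors/track = 12544 sectors per cylinder
echo $((224 * 56))
# with 512-byte sectors that is 6422528 bytes per cylinder...
echo $((224 * 56 * 512))
# ...which is exactly 49 erase blocks of 128 KiB (131072 bytes)
echo $((224 * 56 * 512 / 131072))
```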

# fdisk -H 224 -S 56 /dev/sdb

Now turn on expert mode in fdisk and force the partition to begin on a 128k boundary; in my case I set the new beginning of data to sector 256. Create as many partitions as you need (I created only one, /dev/sdb1).
Do not forget to save the changes and write the new layout to the flash drive (all data on the drive will be lost).
Now it's time to create the filesystem. I used ext4 because it lets you specify a stripe width to keep the filesystem aligned:

# mke2fs -t ext4 -E stripe-width=32 -m 0 /dev/sdb1
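The stripe-width value comes straight from the block sizes: it is measured in filesystem blocks, so a 128 KiB erase block divided by the 4 KiB ext4 block gives 32:

```shell
# stripe-width is expressed in filesystem blocks:
# 128 KiB erase block / 4 KiB filesystem block = 32
echo $((131072 / 4096))
```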

Now let's mount the filesystem and test the overall performance

# hdparm -t /dev/sdb

Timing buffered disk reads: 102 MB in 3.01 seconds = 33.94 MB/sec

$ dd count=100 bs=1M if=/dev/urandom of=/media/disk/test
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 17.0403 s, 6.2 MB/s

As we can see, the read performance is almost the same, while the write speed is considerably higher.

Thursday, October 1, 2009

Directory Checksum

Recently I found suspicious files on some of my websites. After some research I discovered that most of my custom HTML and PHP files had also been modified and contained base64-encoded code. So I decided to make a universal script that allows me to take a site fingerprint and then manually check it weekly for any changes in my files. This PHP script takes md5sums of all files in the specified directory (including subdirectories) and saves the result in a custom data file. The next time you run it, it will show you new files, files that were not changed, and files that WERE CHANGED. The output and some other options can be customized inside the code itself. Anyway, if you have SSH access to your webserver, you can do almost the same thing by running

find test5 -type f | xargs md5sum
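If that one-liner is enough for you, the whole weekly check can stay in the shell: save a baseline and diff a fresh run against it later (the fingerprints.txt name here is just an example):

```shell
# take the baseline fingerprint of every file under test5
# (sort keeps the list order stable between runs)
find test5 -type f | sort | xargs md5sum > fingerprints.txt
# later: compare a fresh run against the baseline;
# diff stays silent while nothing has changed and lists modified files otherwise
find test5 -type f | sort | xargs md5sum | diff fingerprints.txt -
```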

And here is the script itself:

<?php
error_reporting(0); # comment this line out if you want to debug the script

# recursively walk $path and check every regular file found
function lookDir($path) {
  $handle = @opendir($path);
  if (!$handle)
    return false;
  while (($item = readdir($handle)) !== false) {
    if ($item != "." && $item != "..") {
      if (is_dir($path."/".$item))
        lookDir($path."/".$item);
      else
        checkFile($path."/".$item);
    }
  }
  closedir($handle);
  return true;
}

# compare a file's current md5sum against the stored fingerprint
function checkFile($file) {
  global $hashes;
  global $output;
  global $force_update;
  if (!is_readable($file))
    return;
  if (!isset($hashes[$file])) {
    $hashes[$file] = md5_file($file);
    if ($output["new"])
      echo $file."\t\tNew\n";
  } elseif ($hashes[$file] == md5_file($file)) {
    if ($output["success"])
      echo $file."\t\tSuccess\n";
  } else {
    if ($force_update) {
      $hashes[$file] = md5_file($file);
      echo $file."\t\tUpdate forced\n";
    } elseif ($output["failed"])
      echo $file."\t\tFailed!\n";
  }
}

# directory to check for integrity
$dir = "./test5";

# file for storing fingerprints, should be writable in case of fingerprints update
$file = "./fingerprints";

# set this value to false if you do not want to update fingerprints
$can_update = true;

# set this value to true if you want to update fingerprints of modified files
# you should do this only if you have modified the files yourself
$force_update = false;

# the output parameters
$output["new"] = true;
$output["success"] = true;
$output["failed"] = true;

header("Content-Type: text/plain");
$hashes = unserialize(@file_get_contents($file));
if (!$hashes || !is_array($hashes))
  $hashes = array();
if (!lookDir($dir))
  echo "Could not open the directory ".$dir."\n";
if ($can_update) {
  if (file_put_contents($file, serialize($hashes)) !== false)
    echo "Fingerprints were updated\n";
  else
    echo "The file cannot be opened for writing! Fingerprints were not updated\n";
} else {
  echo "Fingerprints were not updated\n";
}
?>


.htaccess: remove authentication from a child directory

Suppose you have a public directory on an Apache webserver protected with AuthType Basic via an .htaccess file, and you want to remove the authentication from some child directories. There is no AuthType None option for .htaccess. The solution is to place an .htaccess file into the desired child directory with the following option

Satisfy Any
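As a sketch of the full setup (the AuthName and AuthUserFile values below are placeholders, and this is Apache 2.2-style access control): the parent directory keeps its Basic auth, while the child directory's .htaccess combines Satisfy Any with Allow from all so that host-based access alone is enough:

```apache
# parent/.htaccess - Basic auth stays in place
AuthType Basic
AuthName "Members only"
AuthUserFile /path/to/.htpasswd
Require valid-user

# parent/public/.htaccess - no password asked here
Satisfy Any
Allow from all
```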

Mysqld-bin logs problem

After the MySQL server had been running continuously for a while, I noticed that the /var/lib/mysql directory was using too much disk space. The cause of the problem was a set of mysqld-bin.xxxxxx files, each of them 1GB in size. At first I thought I could stop the MySQL server and remove those files, but I didn't want to act this way because the databases contained sensitive data that I didn't want to lose. So I found a better way. Connect to the MySQL server and perform the following

mysql> flush logs;

mysql> reset master;

That's it! After that, all binlog files should be removed. You can also disable mysqld-bin logging completely by commenting out the log-bin line in my.cnf and restarting the MySQL server daemon.
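Commenting out the log-bin line can be done with sed before restarting the daemon; the my.cnf path and the init script below are assumptions that vary by distribution:

```shell
# comment out any line that starts with "log-bin" in my.cnf
sed -i 's/^log-bin/#log-bin/' /etc/mysql/my.cnf
# restart the MySQL daemon so the change takes effect
/etc/init.d/mysql restart
```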