Thursday, March 10, 2016

How to check disk fragmentation without using fsck

http://superuser.com/questions/474536/in-linux-how-do-you-check-if-a-disk-is-fragmented
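The Perl script from that thread walks the given filesystem with find, runs filefrag on every regular file, and reports what share of files are split into more than one extent, plus the average extent count per file.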
#!/usr/bin/perl -w

# This script reports how fragmented the files on a filesystem are.
use strict;

# number of files successfully examined
my $files = 0;
# total number of extents (fragments) seen
my $fragments = 0;
# number of files with more than one extent
my $fragfiles = 0;

# walk the filesystem for regular files, staying on one device;
# the list form of open avoids passing the path through a shell
open (FILES, '-|', 'find', $ARGV[0], '-xdev', '-type', 'f', '-print0');

# read find's NUL-separated output one record at a time
$/ = "\0";

while (defined (my $file = <FILES>)) {
        chomp $file;    # strip the trailing NUL record separator
        open (FRAG, '-|', 'filefrag', $file);
        my $res = <FRAG>;
        if ($res =~ m/.*:\s+(\d+) extents? found/) {
                my $fragment = $1;
                $fragments += $fragment;
                if ($fragment > 1) {
                        $fragfiles++;
                }
                $files++;
        } else {
                chomp (my $msg = defined $res ? $res : '(no output)');
                print ("could not parse filefrag output for $file: $msg\n");
        }
        close (FRAG);
}
close (FILES);

# guard against an empty result set before computing the averages
if ($files > 0) {
        printf ("%.2f%% non-contiguous files, %.2f average fragments.\n",
                $fragfiles / $files * 100, $fragments / $files);
} else {
        print ("no files examined.\n");
}
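To run it, save the script under any name (fragcheck.pl below is just an example) and point it at a mount point; filefrag generally needs root to open other users' files, so something like:

sudo perl fragcheck.pl /home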

Tuesday, March 8, 2016

How to benchmark Java 8 parallel stream

http://stackoverflow.com/questions/23170832/java-8s-streams-why-parallel-stream-is-slower


There are several issues going on here in parallel, as it were.
The first is that solving a problem in parallel always involves performing more actual work than doing it sequentially. Overhead is involved in splitting the work among several threads and joining or merging the results. Problems like converting short strings to lower-case are small enough that they are in danger of being swamped by the parallel splitting overhead.
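To make that overhead concrete, here is a minimal sketch of the kind of workload under discussion (the word list is invented for illustration):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ParallelOverheadDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("Apple", "Banana", "Cherry");

        // sequential: one thread maps every element in order
        List<String> seq = words.stream()
                .map(String::toLowerCase)
                .collect(Collectors.toList());

        // parallel: the list is split across the common fork-join pool,
        // mapped in chunks, and the partial results merged back together;
        // splitting and merging are extra work the sequential path never pays
        List<String> par = words.parallelStream()
                .map(String::toLowerCase)
                .collect(Collectors.toList());

        System.out.println(seq.equals(par)); // same result, different cost profile
    }
}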
The second issue is that benchmarking a Java program is very subtle, and it is very easy to get confusing results. Two common issues are JIT compilation and dead-code elimination. Short benchmarks often finish before or during JIT compilation, so they're not measuring peak throughput; indeed, they might be measuring the JIT itself. When compilation occurs is somewhat non-deterministic, so it may cause results to vary wildly as well.
For small, synthetic benchmarks, the workload often computes results that are thrown away. JIT compilers are quite good at detecting this and eliminating code that doesn't produce results that are used anywhere. This probably isn't happening in this case, but if you tinker around with other synthetic workloads, it can certainly happen. Of course, if the JIT eliminates the benchmark workload, it renders the benchmark useless.
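As a sketch of that pitfall, consider a hand-rolled timing loop (names invented):

public class DeadCodeDemo {
    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            "HELLO".toLowerCase();   // result is computed and immediately discarded
        }
        long elapsed = System.nanoTime() - start;
        // because nothing uses the loop's results, the JIT may prove the body
        // has no observable effect and delete it, so 'elapsed' can end up
        // timing an empty loop rather than a million toLowerCase() calls
        System.out.println(elapsed + " ns");
    }
}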
I strongly recommend using a well-developed benchmarking framework such as JMH instead of hand-rolling one of your own. JMH has facilities to help avoid common benchmarking pitfalls, including these, and it's pretty easy to set up and run.
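For reference, a minimal JMH benchmark of this exact workload might look like the sketch below (class, method, and data-set names are invented). Returning the computed list from each @Benchmark method hands the result to JMH's internal blackhole, which keeps the JIT from eliminating the work, and JMH takes care of warm-up iterations and forked JVMs on its own:

import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
public class LowerCaseBenchmark {

    private List<String> strings;

    @Setup
    public void setup() {
        // 1,000 short mixed-case strings; the size is an arbitrary choice
        strings = IntStream.range(0, 1_000)
                .mapToObj(i -> "Word" + i)
                .collect(Collectors.toList());
    }

    @Benchmark
    public List<String> sequential() {
        return strings.stream()
                .map(String::toLowerCase)
                .collect(Collectors.toList());
    }

    @Benchmark
    public List<String> parallel() {
        return strings.parallelStream()
                .map(String::toLowerCase)
                .collect(Collectors.toList());
    }
}

Built with JMH's standard Maven archetype, this typically produces a target/benchmarks.jar that you run with java -jar.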