Saturday, September 25, 2010
Geek Saturday (Rants)
1) When the screensaver starts fading, mouse/keyboard input doesn't stop the fade, and you have to enter your password to unlock the screen.
https://bugzilla.redhat.com/show_bug.cgi?id=612620
The latest xorg-x11-server update fixed the problem. Simple enough!
2) Plugging headphones into the audio jack doesn't mute the speakers.
https://bugzilla.redhat.com/show_bug.cgi?id=623632
After some grepping through /var/log/messages and some Googling, I found that my kernel couldn't detect the model of my audio card. The simple fix was to add
<code>
options snd_hda_intel model=quanta
</code>
to /etc/modprobe.d/dist-alsa.conf to force the snd_hda_intel module to use the given model.
3) Flash videos don't work in fullscreen.
https://bugzilla.redhat.com/show_bug.cgi?id=608957
The simple fix was this:
<code>
mkdir -p /etc/adobe
echo "OverrideGPUValidation=1" > /etc/adobe/mms.cfg
</code>
There is just one irritating problem left. When I shut down Fedora with windows open on various desktops, Fedora should remember which desktop each window was on. However, when I boot back up, all windows come back on desktop 1. Hmm, couldn't figure out a solution to that yet :-(. But it was low priority anyway. All the major problems are now resolved :-). Peace!
Friday, September 24, 2010
On paranoia
After my MS, I will probably be so paranoid about security that I'll even block ssh connections to my machine, as Linus does :-(. Ignorance is such bliss!
Wednesday, September 15, 2010
Writing parallel programs with C and gcc
Consider the classic triple-loop matrix multiplication in C:
<code>
for (i = 0; i < MAX_NUM; i++) {
    for (j = 0; j < MAX_NUM; j++) {
        for (k = 0; k < MAX_NUM; k++)
            Y[i][j] += A[i][k] * B[k][j];
    }
}
</code>
The i and j loops are parallel. This means that for a given (i, j) pair, the computations in the loop body do not depend on values computed for any other (i, j) pair, i.e. each element of the result matrix can be computed independently.
Let's get our hands dirty and make the i loop parallel. I have chosen the matrix size to be 1000x1000 so that we can see the speed-up clearly.
Now, all we have to do is add one #pragma line before the i loop. Here is what the parallel code looks like:
<code>
#pragma omp parallel for default(shared) private(i, j, k)
for (i = 0; i < MAX_NUM; i++) {
    for (j = 0; j < MAX_NUM; j++) {
        for (k = 0; k < MAX_NUM; k++)
            Y[i][j] += A[i][k] * B[k][j];
    }
}
</code>
Don't worry about the #pragma directive. OpenMP pragmas are actually quite simple to figure out and are a 5-minute job to learn from the spec: http://www.openmp.org/mp-documents/cspec20.pdf
Note that we haven't specified how many parallel threads to fork. That is controlled by the OMP_NUM_THREADS environment variable, which gives great flexibility: you can change the number of threads without recompiling the program.
To enable OpenMP, use the "-fopenmp" switch with gcc:
<code>
$ gcc -o parallel -fopenmp parallel.c
</code>
Here are some test-runs which clearly show the speed-up:
Only one thread (serial program):
<code>
$ export OMP_NUM_THREADS=1
$ time ./parallel

real    0m13.633s
user    0m13.558s
sys     0m0.028s
</code>
Two threads:
<code>
$ export OMP_NUM_THREADS=2
$ time ./parallel

real    0m8.144s
user    0m13.963s
sys     0m0.035s
</code>
Four threads:
<code>
$ export OMP_NUM_THREADS=4
$ time ./parallel

real    0m7.960s
user    0m13.815s
sys     0m0.037s
</code>
Notice the speed-up?
With one thread, we have an execution time of 13.6s. With two threads it becomes 8.1s, which is close to half (a speed-up of 13.63/8.14 ≈ 1.7x). And with four threads it is still 7.9s. Now what happened in the case of 4 threads? Didn't we expect one-fourth the time? Well, mine is only a dual-core PC, so for a CPU-bound task the maximum speed-up is reached when the number of parallel threads equals the number of cores, i.e. 2.
You can see the running parallel threads with
<code>
$ ps -Haux | grep parallel
jitesh 7240 26.0 0.0 44536 1164 pts/0 Rl+ 20:10 0:00 ./parallel
jitesh 7240 25.5 0.0 44536 1164 pts/0 Rl+ 20:10 0:00 ./parallel
jitesh 7240 27.5 0.0 44536 1164 pts/0 Rl+ 20:10 0:00 ./parallel
jitesh 7240 27.5 0.0 44536 1164 pts/0 Rl+ 20:10 0:00 ./parallel
</code>
The "-H" switch displays the threads.
Cool, isn't it?
Thus, a parallel program with minimal effort and LOTS of time saved. We cut the execution time of matrix multiplication in half with one well-placed pragma. Imagine how well parallel programs will perform on a 16-core machine with native support for threads!
gcc and OpenMP FTW!
Note: If you are interested in learning OpenMP, see the spec link I have posted in the body of this post.
Monday, September 6, 2010
A note on email address obfuscation
When you put your email address on the internet (e.g.
jitesh@example.com), there is a chance that it will be harvested for
spamming. Spam-bots crawl the internet day and night mining email
addresses. Now, you can do something to prevent a spam-bot from
reading your address. The key factor here is that an automated BOT is
going to scan the pages, so it is easy to fool. Or is it?
A very common form of obfuscation is to spell out the special
characters, e.g. jitesh AT example DOT com. Note that this is in fact
a very weak form of obfuscation, because you are only changing the
syntax of writing your email address: the bot-writer just has to add
one more rule to his grammar and he is done. Of course, bot-writers
are not so stupid as to miss this simple change.
What you need is a semantic change, one that is impossible (or too
hard to be practical) for a bot to infer, but easy for a human. E.g. I
might obfuscate my address as: my-first-name AT example DOT com.
Note that "my-first-name" is a semantic change: only a human can
infer that it should be replaced with "jitesh". This is REAL
obfuscation.
So, if you have a non-Gmail account with a weak spam filter, make
your obfuscation stronger and do NOT underestimate spammers :-)
Wednesday, September 1, 2010
Loving LaTeX
\m/
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Free minix-like kernel sources for 386-AT
Message-ID: <1991Oct5.054106.4647@klaava.Helsinki.FI>
Date: 5 Oct 91 05:41:06 GMT
Organization: University of Helsinki
Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on a OS you can try to modify for your needs? Are you finding it frustrating when everything works on minix? No more all-nighters to get a nifty program working? Then this post might be just for you
As I mentioned a month(?) ago, I'm working on a free version of a minix-lookalike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be depending on what you want), and I am willing to put out the sources for wider distribution.
It is just version 0.02 (+1 (very small) patch already), but I've successfully run bash/gcc/gnu-make/gnu-sed/compress etc under it.