Thursday, October 30, 2008

Apache2 PHP config

I am installing a couple of shopping cart online stores, AKA commerce packages, in order to evaluate them for use in an online store I'm setting up. I did a little research and, based on features and the one deal-breaker required feature, Google Checkout, decided on two commerce packages. The first is Zen Cart and the second is Magento.

One of the commerce packages noted that one of its extension modules required enabling PHP short tags (the short_open_tag directive). I had previously disabled short_open_tag because it was preventing the use of the XML declaration at the top of my HTML files.

I use XHTML 1.1 for all my HTML work, converting every document I do significant work on to XHTML 1.1. So turning PHP short tags back on in my HTML files was not an option.

Luckily that was not necessary. I have enabled the PHP handler for .html and .xhtml files in my php5.conf, located in /etc/apache2/conf.d/. I have a few other PHP-related configuration statements there that are wrapped in an <IfModule mod_php5.c> block.

I created a Files section within that IfModule block that disables short tags only within .html and .xhtml files. Like so:

<IfModule mod_php5.c>
    AddHandler application/x-httpd-php .html .xhtml
    <Files ~ "\.x?html$">
        php_flag short_open_tag off
    </Files>
    Alias /phpinfo /var/www/phpinfo.php
    Alias /apcinfo /usr/share/php/apc.php
</IfModule>

I also have a couple of aliases in there that make useful system info URLs available automatically on all web hostnames. The Alternative PHP Cache, APC, has built-in PHP login security. I'm going to add PHP authentication to the phpinfo and post the code.

Yikes backslash zero on

I started a new blog, One Liner Code. I was reviewing my posts when I realized that my sed example didn't make sense to me. I didn't understand how I could have messed up such a simple example in the language that most easily lends itself to one liners. I thought I had double checked all of my examples and made sure they were running right. Turns out the HTML code looked fine. There was a preprocessor stepping in between me and my sed Hello World post, turning my backslash zero into a null.

I whipped out my handy dandy PHP evaluator and encountered failure using the htmlentities() string function: PHP wouldn't give me the HTML entity code for characters that didn't need one. I decided to switch it up and get the character code using a Ruby one-liner:

ruby -e '"0".each_byte {|i| p i}'

I could have also used a php one liner like this:

php -r 'echo ord("0")."\n";'

The quick fix, \&#48;, allowed the post preprocessor to accept the backslash zero and turn it into the correct character representation.
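Incidentally, the character code can also be had in pure shell, with no php or ruby needed: POSIX printf treats an argument with a leading single quote as the numeric code of the character that follows it.

```shell
# POSIX printf numeric conversion: a leading ' makes %d print
# the following character's code point
printf '%d\n' "'0"    # prints 48
```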

Tuesday, September 9, 2008

iPhone Gurgling Water Noises and Dropped Calls

Updated 2008-10-08:
I traced the source of the problem back to smsnotify. The daemon shell script runs through a tight loop, repeatedly checking the voicemail and SMS sqlite databases using the sqlite3 command. I edited the script and put some bigger time delays between queries to the databases and a growing delay between vibrating alerts. The smsnotify default time delays can be a little disconcerting when you get a voicemail or SMS while on a call. They also create a problem when you put your phone down and leave it for an hour: a single voicemail or SMS will run down your battery and wake you from your sleep with incessant vibration alerts every 10 seconds.

Here is my version of the script. The original post follows it:


#!/bin/sh
#set -Tvx

SMSNOTIFY_COUNT=0
VMNOTIFY_COUNT=0

while test 1
do
    if [ `echo 'select count(*) from message where \
    flags=0;' | sqlite3 \
    /var/mobile/Library/SMS/sms.db` -gt 0 ]
    then
        #Pick a growing delay between alerts (delay bodies go here)
        if [ $SMSNOTIFY_COUNT -lt 2 ]; then
            :
        elif [ $SMSNOTIFY_COUNT -lt 4 ]; then
            :
        elif [ $SMSNOTIFY_COUNT -lt 5 ]; then
            :
        elif [ $SMSNOTIFY_COUNT -lt 6 ]; then
            :
        fi
        #Each parameter to vibrate is 1 more
        #vibration, so vibrate 3 times
        $SMSNOTIFY_PATH/vibrate 1 1
        SMSNOTIFY_COUNT=$((SMSNOTIFY_COUNT+1))
    fi

    if [ `echo 'select count(*) from voicemail \
    where flags & 1 == 0;' | sqlite3 \
    /var/mobile/Library/Voicemail/voicemail.db` -gt 0 ]
    then
        if [ $VMNOTIFY_COUNT -lt 2 ]; then
            :
        elif [ $VMNOTIFY_COUNT -lt 4 ]; then
            :
        elif [ $VMNOTIFY_COUNT -lt 5 ]; then
            :
        elif [ $VMNOTIFY_COUNT -lt 6 ]; then
            :
        fi
        #vibrate 2 times
        $SMSNOTIFY_PATH/vibrate 1
        VMNOTIFY_COUNT=$((VMNOTIFY_COUNT+1))
    fi

    #This program sleeps for 5 seconds then checks
    #how long it's been sleeping. If it's been more
    #than 10 seconds the phone has slept and it exits,
    #allowing the loop to continue
    /bin/sleep 5    # stand-in for the sleep-checking helper described above
done

# vim:ai:et:sw=4:sts=4:

I've been having all kinds of iPhone trouble acquiring signal, dropping calls, and the signal going in and out. My voice sounded incoherent and was always accompanied by the gurgling sound of electronic water.

The problem was of my own creation. I had jailbroken my phone and installed all kinds of apps that launched many server processes. Most of the server processes failed to stop with a simple:

su -c 'launchctl stop com.sourceforge.netatalk.afpd'

I ended up having to do a:

su -c 'for i in /Library/LaunchDaemons/*;
do launchctl unload $i;done'

I did the same for the ones installed in /System/Library/LaunchDaemons. I just limited the file glob expression to the services I wanted to unload, as a plain * would probably have forced me to restore my iPhone; i.e. I used more specific file-globbing expressions.

Since all of my calls had had garbled audio quality, my first phone call verified that I had fixed the problem. A crystal clear conversation with only one bar was the proof I needed, as I hadn't been able to get my iPhone to dial most of the time with 3 bars.

The only caveat is that in order to re-enable these services you have to load the profiles again rather than doing a simple stop and then start. I'll have to look into how those .plist files should be written so that start and stop work properly, and maybe even have launchd do the listening for some of the services rather than having all of them running unnecessarily.
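The re-enable step, then, is loading the .plist again. A sketch of what that looks like (the exact plist filename is my assumption, based on the daemon label used earlier in this post):

```shell
# Re-enable a previously unloaded daemon by loading its .plist again;
# a plain "launchctl start" won't work after an unload.
su -c 'launchctl load /Library/LaunchDaemons/com.sourceforge.netatalk.afpd.plist'
```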

Hope this helps. It saved me from carrying a separate phone just for conversations.

Friday, August 1, 2008

Automate ssh-agent loading and sharing across logins

When the keychain package is not available on a platform I usually use a simple script in my .profile or .bash_profile or .bashrc that loads or reuses an existing ssh-agent.  This allows me to load an ssh key once and use it in any terminal without any additional effort. Like so:

# ssh-agent sharing
if [ -e ~/.ssh-agent ]; then
    . ~/.ssh-agent
else
    eval $(ssh-agent|tee ~/.ssh-agent)
fi

I tried sprucing it up for Terminal on my iPhone because it starts 4 terminals at the same time. My first attempt was a dismal failure. I misguidedly added the following above the previous script:

# ssh-agent sharing - failed multiple concurrent launch attempt
if ps awx | grep ssh-agent >/dev/null; then
    if [ -e ~/.ssh-agent ]; then
        . ~/.ssh-agent
    fi
else
    killall ssh-agent 2>/dev/null
    rm -f ~/.ssh-agent
fi

I ended up finding a relatively simple file locking script:

function my_lockfile ()
{
    TEMPFILE="$1.$$"
    LOCKFILE="$1.lock"
    { echo $$ > $TEMPFILE ; } &> /dev/null || {
        echo "You don't have permission to access `dirname $TEMPFILE`"
        return 1
    }
    ln $TEMPFILE $LOCKFILE >& /dev/null && return 0
    kill -0 `cat $LOCKFILE` >& /dev/null && return 1
    echo "Removing stale lock file"
    rm -f $LOCKFILE
    ln $TEMPFILE $LOCKFILE >& /dev/null && return 0
    return 1
}
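The key to that function is that ln(1) hard-link creation is atomic: when several shells race for the lock, exactly one ln succeeds. A quick demonstration of the idea, using a throwaway /tmp path:

```shell
# ln to an already-existing target fails, so only one contender
# can ever "create" the lock file
rm -f /tmp/demo.lock
TEMP=$(mktemp)
echo $$ > "$TEMP"
ln "$TEMP" /tmp/demo.lock && echo "got lock"
ln "$TEMP" /tmp/demo.lock 2>/dev/null || echo "already locked"
rm -f "$TEMP" /tmp/demo.lock
```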

Which makes it relatively easy to rewrite the script like so:

until my_lockfile ~/.ssh-agent; do
    sleep 1
done

if [ -z "$SSH_AGENT_PID" ]; then
    if [ -e ~/.ssh-agent ]; then
        . ~/.ssh-agent >& /dev/null
    else
        eval $(ssh-agent|tee ~/.ssh-agent) >& /dev/null
    fi
fi

if ! ps -p "$SSH_AGENT_PID" >& /dev/null; then
    eval $(ssh-agent|tee ~/.ssh-agent) >& /dev/null
fi

rm -f ~/.ssh-agent.lock

The important part is the loop at the beginning of the script and the rm at the end.
I could add a timeout, but the lockfile function is pretty good at cleaning up stale lock files. Worst case scenario, you can Ctrl-C to break out of the startup script.
If you need help with the basics of using ssh keys and the ssh-agent follow this link:
Automate a Remote Login Using SSH - Webmonkey
Blogged with the Flock Browser

Tuesday, June 17, 2008

How to make history appendable across multiple concurrent bash shells

I've been meaning to change the default bash setting in Debian/Ubuntu for saving the command history across multiple concurrent shells.

Thought I'd document what I ended up doing.

I added the following to the end of my ~/.bashrc:

# my settings
shopt -s histappend
shopt -s histverify
set -b

I got a little carried away and added a couple of extra tweaks to suit my preferences. histverify lets you edit commands recalled by history expansion before executing them. set -b makes background process notification happen in real time rather than waiting for the next shell prompt.
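Worth noting: histappend only appends a shell's history to the file when that shell exits. If you also want each command visible to other running shells right away, a common additional tweak (not one of the settings above) is:

```
# Flush every command to the history file as it is entered
PROMPT_COMMAND='history -a'
```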


ffmpeg with h.264 3gp/amr xvid lame/mp3 a52/ac3 dts aac

I was going through the source for mythweb to find a good spot to insert code to set up and return VLC VOD links instead of simple HTTP streams. I followed the code for on-the-fly Flash video (.flv) conversion and streaming with HTTP range-based seeking. I was surprised to find a very simple and elegant ffmpeg-based CGI script doing all the work. In /var/www/mythweb/modules/stream/ I found the following:

elsif ($ENV{'REQUEST_URI'} =~ /\.flv$/i) {
    # Print the movie
    $ffmpeg_pid = open(DATA,
        "$ffmpeg -y -i ".shell_escape($filename)
        .' -s '.shell_escape("${width}x$height")
        .' -r 24 -f flv -ac 2 -ar 11025'
        .' -ab '.shell_escape("${abitrate}k")
        .' -b '.shell_escape("${vbitrate}k")
        .' /dev/stdout 2>/dev/null |');
    unless ($ffmpeg_pid) {
        print header(),
              "Can't do ffmpeg: $!";
        exit;
    }
    print header(-type => 'video/x-flv');
    my $buffer;
    while (read DATA, $buffer, 262144) {
        print $buffer;
    }
    close DATA;
}

Pretty cool. I'm going to use this as a template for generating VLC VOD media on the fly using netcat. More to follow.

Anyway, what happened is that I realized the Flash video was working great but there was no audio. Playing with ffmpeg on the command line using the parameters above clued me in: ffmpeg didn't know how to decode AC3, and it also didn't know how to encode MP3.

I was surprised to find that the Medibuntu ffmpeg was missing some codecs that I know ffmpeg supports. I looked around the web for a way to compile ffmpeg with the missing codecs.

I found a bunch of apt-get source ffmpeg style how-tos and figured out the DEBIAN_BUILD_OPTIONS="risky" part.

I had to install fakeroot to build the source by hand.

Another site had the details for getting AMR/3GP/3GPP/3GP2, AKA AMR narrow band and wide band. I found the how-to for ffmpeg for Debian at this page, and the HOWTO with AMR at this page.

I performed the following:

sudo apt-get build-dep ffmpeg
sudo aptitude install fakeroot liblame-dev liba52-0.7.4 liba52-dev libdts-dev libfaac0 libfaac0-dev libfaad0 libfaad-dev libxvidcore4 libxvidcore4-dev libx264-57 libx264-dev
apt-get source ffmpeg
unzip -q
unzip -q
cd ffmpeg-0.cvs20070307
mkdir libavcodec/amr_float
mkdir libavcodec/amrwb_float
cd libavcodec/amr_float
unzip -q ../../../
cd ../../libavcodec/amrwb_float
unzip -q ../../../
cd ../..

Ended up changing the following section in ffmpeg-0.cvs20070307/debian/rules in the source directory from this:

confflags += --enable-gpl --enable-pp --enable-swscaler --enable-pthreads
confflags += --enable-libvorbis --enable-libtheora --enable-libogg --enable-libgsm

to this:

confflags += --enable-gpl --enable-pp --enable-swscaler --enable-pthreads
confflags += --enable-libvorbis --enable-libtheora --enable-libogg --enable-libgsm
confflags += --enable-x264 --enable-liba52 --enable-libdts --enable-amr_nb
confflags += --enable-amr_wb

And finally I performed the following in the extracted source directory:

DEBIAN_BUILD_OPTIONS="risky" fakeroot debian/rules binary
cd ..
sudo dpkg -i *.deb


Friday, June 6, 2008

My Working MythTV Firewire Fix

Updated post 6/6/2008 - And now it is tested and working

I finally found a working failsafe fix for my MythTV FireWire setup that doesn't require manual intervention to keep MythTV up and running. In the MythTV startup script I repeatedly unload and reload the 1394 modules and then test whether the firewire is working by checking for errors from my external channel changing program.

I changed the following block in /etc/init.d/mythtv-backend

su - $USER -c "/usr/bin/mythprime"

to this

#until { su - $USER -c "/usr/bin/mythprime"|grep '1\ stbs\ primed'; };do
while sa3250ch 1 2>&1 | grep oops; do
    modprobe -r dv1394
    modprobe -r video1394
    modprobe -r raw1394
    modprobe -r ohci1394
    modprobe -r ieee1394
    modprobe raw1394
    /bin/sleep 1
done

I had to less +F /var/log/mythtv/mythbackend.log so I could watch the log while turning the cable box off and then on with its power button while the MythTV backend was recording a show. Nothing else seemed to get the video data flowing. After getting the initial recording going it seems to work fine with later recordings, without the need for manual intervention. I commented out my cron scripts, as the latest version of MythTV seems much more robust and the cron jobs only created confusion when there was a problem requiring intervention.

In my search for a completely automated solution I ended up changing the driver for my Time Warner supplied SA3250HD firewire to use a 100Mbps connection.

This is the fifth update to the details of my working MythTV firewire setup. If I have any more updates I'll continue to update this post. It's working again.

Latest change (changed again):
I have changed my channel changing binary to a scripted frontend. I renamed the binary to sa3250ch.bin and made a /usr/local/bin/sa3250ch script, like so, with what it used to do commented out. Just changing the channel, and letting the occasional channel change fail and miss a recording, is fine. When I find the time to find a channel changing script for the SA3250 using p2p I will post it here:


#!/bin/sh
#/usr/local/bin/sa3250ch.bin 1
#/usr/local/bin/sa3250ch.bin 1
#/bin/sleep 1
/usr/local/bin/sa3250ch.bin $1
#/bin/sleep 2

Saturday, May 17, 2008

Automounting Kubuntu NFS Shares in OS X Leopard

I discovered this feature in OS X Leopard by accident: I thought I was sshed into my Kubuntu home server. I was having some problems with my NFS sharing setup across my MacBook Pro and Kubuntu boxes. I resolved the problem very easily by editing the NFS defaults file and making sure statd and idmapd were enabled.

I was still having problems with a Seagate external USB drive. I had previously dealt with the problem of the drive going to sleep forever by using sdparm to change the sleep timeout to a day, and had a cron script tap the drive twice a day. But the drive had been getting noticeably louder from constantly spinning. A little research turned up an elegant fix: using udev to set the drive's allow_restart attribute in the /sys filesystem to true when the device node for the drive is created.

Here's the trick for reference:

$ cat /etc/udev/rules.d/local.rules
# Seagate FreeAgent allow_restart fix (i/o errors)
SUBSYSTEMS=="scsi",DRIVERS=="sd",ATTRS{vendor}=="Seagate*", ATTRS{model}=="FreeAgent*",RUN+="/bin/sh -c 'echo 1 > /sys/class/scsi_disk/%k/allow_restart'"

Now back to the problem of the drive not mounting on boot. I decided to install the autofs and automount tools, so that whether or not the drive showed up at a different SCSI device address it would mount at the same place. I also wanted any access to the Seagate external drive to kick off a mount if it wasn't already mounted, and for the filesystem to be unmounted when idle.

Using automount/autofs would kill two problems I was having. The first problem was that the drive wasn't unmounting before it went to sleep, leaving the filesystem locked and mounted to a ghost device. The second problem was that I couldn't count on the drive being at the same device name. I'm using the drive as the location of my MythTV recording directory and for my shared network file storage.

I enabled auto.misc in /etc/auto.master and added these lines to /etc/auto.misc:

mythtv -fstype=reiserfs :LABEL=500gusb
ubuntu -fstype=iso9660,loop,ro :/mythtv/downloads/ubuntu-8.04-desktop-i386.iso

I tried to get them to mount off the root filesystem, but that is apparently a no-no for automounting. At least it didn't work for me; a little filesystem linking covered that up. Notice the fine LABEL= device syntax. :) I added the Ubuntu ISO automounting and sharing so my other Kubuntu boxes could upgrade using the CD over NFS. I left the autofs loop in there since it is pretty lightweight: it only mounts the image on access and unmounts it when idle. It inspired me to fiddle with the /etc/auto.master on all the Kubuntu boxes and enable the /net mounts, allowing me to access the shared automounted filesystems with a simple reference to /net/<hostname>/<sharename>.
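For reference, the /etc/auto.master entries behind all of this look something like the following (a sketch; the exact map names and paths can vary by release):

```
/misc  /etc/auto.misc
/net   /etc/auto.net
```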

The /etc/exports (I decided to export each automount individually because NFS complained in the logs about exporting /misc, even though it worked with the crossmnt option):

/misc/mythtv *.local(rw,async,nohide,no_root_squash,no_subtree_check,insecure)
/misc/ubuntu *.local(rw,async,nohide,no_root_squash,no_subtree_check,insecure)

Which brings me back to my original point: accidentally finding that Mac OS X Leopard uses the proper script and that it is enabled by default, meaning ls /net/<hostname>/<sharename> automounts NFS shares - no configuration needed. That allowed me to get rid of my custom automounts located under /mounts in the local netinfo directory. I then turned my /mythtv directory into a link using sudo -s and then ln -s /net/<hostname>/misc/mythtv /mythtv (I don't give my everyday account blanket sudo permissions, ergo sudo -s first in order to execute a command without explicit sudo rights being granted).

One final note: don't let the Finder make your links for you. The aliases Finder creates are not soft links and are invisible to the POSIX subsystem and even some Cocoa apps. Found that out the hard way. It even scared me for a bit, after all that fine setup work. Nearly wiped the smug look off my face.

Saturday, May 3, 2008

Ubuntu or Kubuntu on a IBM Thinkpad T21 or T22 Video Driver Fix

I recently upgraded to Ubuntu 8.04 Hardy Heron and experienced problems with Xorg and my Savage video card. I traced it back to the AGPSize and AGPMode options for the savage driver. I now have working 3D acceleration, aka DRI and AIGLX. I've been missing that since the upgrade to version 7.xx. Turns out AGPSize defaults to a 16MB window, which causes a lockup or failure on my IBM Thinkpad T21 and T22 with acceleration turned on. The Thinkpads have only 8MB of video RAM.

FYI I actually use Kubuntu but I upgraded all my Kubuntu systems just fine using the Ubuntu CD.

The working, 3D accelerated xorg.conf from my IBM Thinkpad T22:

# xorg.conf (X.Org X Window System server configuration file)
# This file was generated by dexconf, the Debian X Configuration tool, using
# values from the debconf database.
# Edit this file with caution, and see the xorg.conf manual page.
# (Type "man xorg.conf" at the shell prompt.)
# This file is automatically updated on xserver-xorg package upgrades *only*
# if it has not been modified since the last upgrade of the xserver-xorg
# package.
# If you have edited this file but would like it to be automatically updated
# again, run the following command:
#   sudo dpkg-reconfigure -phigh xserver-xorg

Section "InputDevice"
    Identifier    "Generic Keyboard"
    Driver        "kbd"
    Option        "XkbRules"    "xorg"
    Option        "XkbModel"    "pc101"
    Option        "XkbLayout"    "us"
    Option        "XkbOptions"    "lv3:ralt_switch"
EndSection

Section "InputDevice"
    Identifier    "Configured Mouse"
    Driver        "mouse"
    Option        "CorePointer"
    Option        "EmulateWheel"        "true"
    Option        "EmulateWheelButton"    "2"
    Option        "ZAxisMapping"        "4 5 8 9"
EndSection

Section "InputDevice"
    Identifier    "Synaptics Touchpad"
    Driver        "synaptics"
    Option        "SendCoreEvents"    "true"
    Option        "Device"        "/dev/psaux"
    Option        "Protocol"        "auto-dev"
    Option        "HorizEdgeScroll"    "0"
EndSection

Section "Device"
    Identifier    "Configured Video Device"
    Option        "AGPMode"        "2"
    Option        "AGPSize"        "8"
    #Option        "ShadowStatus"        "true"
    #Option        "ShadowFB"        "true"
    #Option        "NoAccel"        "true"
EndSection

Section "Monitor"
    Identifier    "Configured Monitor"
EndSection

Section "Screen"
    Identifier    "Default Screen"
    Monitor        "Configured Monitor"
    Device        "Configured Video Device"
    DefaultDepth    16
EndSection

Section "ServerLayout"
    Identifier    "Default Layout"
    Screen        "Default Screen"
    #InputDevice    "Synaptics Touchpad"
EndSection


Tuesday, April 29, 2008

Johnny Chung Lee

I'm already using my wiimote to control my MythTV. But I want to try out some of the projects Johnny Lee has done with the wiimote.

Videos of Johnny Chung Lee demoing innovative and useful applications of the Wii Remote at very low cost are available at his website.

and Johnny Lee's TED Talk:

Monday, April 28, 2008

A Briefer History of Time

I am amazed at how many of the practical implications of a correct understanding of the universe and the implications of the Theory of Relativity are completely lost on most science fiction authors. Either that or most authors are so concerned with the wrong headed idea that space travel and intergalactic communication are necessary that they gloss over the simple facts of physics.

Time travel to the future is a practical reality. I've read debates between characters in science fiction books about the effect of the constant acceleration of a gravity well like the Earth on stationary objects on its surface that seemed to be inconclusive. Well, physics defines time as relative and gravity wells as warped space time, so guess what: living on the Earth's surface is flinging us into the future, by the very fact that a gravity well slows down time. The reach and power of the Earth's warp on space and time is revealed in the orbit of the moon. The moon is traveling in a straight line. Space time is so warped by the density of matter at the center of the Earth that space time around the Earth is curved right around into a sphere! The Earth is flat! It's a space time plane warped into a sphere.

The effect is very dramatic when you consider the fact that one second on the surface of the Sun is equivalent to one year on Earth. Just think of the implications: the Mars rovers have lasted not just a lot longer than they were designed to last, but they are also on a smaller planet with a smaller space time warp and a smaller slowing of time, meaning that even more time has passed for the rovers on Mars than for us on Earth. Or consider that Voyager 1 has been travelling for over a decade without the benefit of any space time warp to slow down the passage of time for it, except for the Sun's. It would be interesting to calculate an estimate of the real length of Voyager 1's journey as experienced by Voyager 1 and then translated back into Earth years.

I'm only halfway through the book and am heartened to read facts about our Universe that support my theory that space travel by humans is a waste of energy. Any intelligent entity with a basic understanding of physics will know that in order to know anything about the Universe you only need to look inward to the quantum for understanding and outward to the universe for information.

Makes me wonder why the author encourages space travel and colonization. When it seems that sustainability and peaceful coexistence is a more reasonable approach to avoiding extinction. I know there are extinction level events that will require preparation through technological advancement in order to avoid catastrophe. Like supervolcanoes, super tsunamis, and ice ages. But in my opinion a more concerted effort with global cooperation is required. Instead of waiting for Ragnarok or Judgement Day and trying to start the whole process over again on another planet.

Sunday, April 27, 2008

I joined Twitter

That is the new stuff in the upper right corner of my blog. After submitting to Friendster and resisting the urge, or just not feeling the urge, to join facebook, myspace, etc., I finally gave in to Twitter. Twitter is just a chat space that can be real time but is more like a personal log. If you want to remember what you did yesterday you can just look at your twitter page and, hopefully, if it was important you wrote something about it. Since it works from SMS it is also a perfect way to remember things you need to check out when you get back on-line.

What I've learned since then is that if you follow tweets from your favorite bloggers that are on twitter you can quickly find a group of like minded or topic specific individuals to chat at and with. It also adds an ability to my blog to micro-post by having the Twitter module embedded in the side.

Al Gore 2008 TED Talk Presentation

Here is a link to the latest from Al Gore. It's his TED talk. It's considerably cut down and he's got a disclaimer saying he is invested in green technology.

Monday, April 21, 2008

Solar Concentrators vs. Photovoltaic

I did a little research into a pet peeve of mine, solar power. I always say that it is a joke to say that solar can't replace oil. Everybody who has played with a magnifying glass in the sunlight knows what a powerful energy source the sun is. I recently stumbled across my steam powered turbine idea as someone else's old news. Like those Tivo DVR commercials where the guy says he came up with the idea for DVR first.

Turns out there are already power plants in use around the world that use solar concentrators rather than overpriced and inefficient solar panels, AKA photovoltaics. To my delight I also googled up practical home applications that you can have installed and, more importantly, schematics and instructions on how to build some of these things yourself.

I'm going to read through the projects on these pages and pick one as my first solar project.

Concentrating Solar Collectors

Solar Cooking and Food Drying and Solar Stills and Root Cellars

Build a Solar Cooker

Looks like I can get away with a discreet solar cooker project without attracting anyone's attention. I think I've seen people doing solar barbeque and I remember thinking to myself either they are nuts or they just love putting aluminum foil over everything. I guess it's going to be my turn to wear the tin foil hat.

Sunday, April 20, 2008

Migration to MythTV 0.21

I decided to upgrade my MythTV setup to version 0.21, the new release. I had taken my MythTV setup off-line last September to wait out the free scheduling shakeout. Turns out the MythTV guys at Schedules Direct have stayed true to their word: the price of a yearly subscription has dropped to $20 a year, and it can support many other DVR platforms, allowing for an easy exit strategy if I decide there is a better free DVR. Which I haven't. Even though I took a good look at what the competition had to offer before I performed the 0.21 upgrade.

I had a little trouble getting MythTV working with my Time Warner digital cable box using the firewire connection. I was a little skeptical of the advice given in the wiki about MythTV being incompatible with the firewire modules of kernel 2.6.21 or later because of changes to the modules. The proposed solution to this "problem" was to change the makefile for the 2.6.21 or greater source and recompile the modules. After playing around with the firewire off and on over a period of two weeks I found the real source of the problem: the raw1394 and video1394 kernel modules were not being loaded automatically on my Ubuntu box. A quick couple of modprobes later and everything was working fine. Although I find I have to change to channel 1 after changing channels with the cable box remote to get video recording again. I also had to use an external channel changing program instead of the built-in one.

I like the new robustness of the firewire connection. I was previously unable to change channels with the cable remote while they were being recorded without crashing MythTV. Now I can check the mythbackend.log file, and if I see repeated unable-to-get-video errors I simply change the channel to 1 then back to the previous channel to fix the problem. No stopping and starting MythTV over and over. And no firewire priming scripts! :) I still have my originals but they are definitely unnecessary. In fact I replaced the mythprime program that comes with 0.21 with a simple script that resets the firewire by first unloading and then reloading the firewire module stack.
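That replacement script isn't shown in the post; a minimal sketch of the idea, using the same module stack as the init script change described earlier, would be:

```shell
#!/bin/sh
# Reset the firewire stack: unload everything, then let modprobe
# pull the stack back in via raw1394's dependencies
modprobe -r dv1394 video1394 raw1394 ohci1394 ieee1394
modprobe raw1394
```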

I had bought a FreeAgent USB 500GB external drive a while back and had resorted to the never-spin-down method of keeping the drive from going off-line forever after spinning down. I had noticed recently that the drive was starting to get louder. But it is not as loud as some other drives I have set to never spin down. I decided to look for a solution to the FreeAgent USB spin-down that didn't involve burning up the drive and making it unreasonably loud. Luckily it had been months since I had looked for a solution and things had improved. I found a perfect and elegant solution here. It reduces the problem to a udev local rule file that sets the allow_restart attribute through the sys file system. Excellent solution! :) I'm impressed! ;)

That completes the upgrade on my MythTV backend. My MythTV backend is not powerful enough to record and display and play at a reasonable frame rate. At least not with the video card on the motherboard the last time I checked. I need to revisit using the backend as an all in one box after performing some tests and potentially adding a more powerful video card or switching to a more powerful machine and giving up on the PCI only small form factor box I'm using. I have a dual processor Pentium 3 motherboard with AGP and a Radeon 9700 that shouldn't have any problems delivering better performance than the Pentium 3 MythTV backend I'm currently using.

I've become extremely sensitive to the volume of noise produced by my computers and DVR. I originally assumed most of the noise was coming from the fans. Turns out the most egregious sounds were being generated by drives that had become loud from never spinning down. My MythTV frontend is a Thinkpad notebook with a slightly loud hard drive.

I replaced the hard drive and use laptop_mode and a 30 second spin-down timer to keep the replacement notebook drive, currently tested as quiet, from getting as loud as the notebook drive it replaced. Note to anybody reading this: notebook drives are not meant to run continuously. They are designed to be spun down many more times than desktop drives. Notebook drives also have a much shorter spun-up life span. I wish I had done my research before I destroyed all those drives over the years! ;)

More to come.


I'm using one of my Wiimotes as a bluetooth remote and loving it. The Wiimote has easily replaced the completely lame Sony T610 bluetooth phone hack that I was using as a remote setup before the rebuild.

Tuesday, April 8, 2008

Pipes: Rewire the web

Pipes: Rewire the web

I bet this is why Microsoft wants to buy Yahoo so badly.  Yahoo's web based visual designer is the next step in programming.

Why invent the future using in house "geniuses" when you can just buy the next gen tech from someone who actually knows what they are doing.

I'm sure the first change they will make will be to charge for it and add proprietary extensions, destroying its JavaScript-only, cross-browser, cross-platform capabilities.


Monday, April 7, 2008

Solar Power

We are told so much junk about everything we are and how things work and our place in the universe. I found these video podcasts about our stuff and consumerism here:

The Story of Stuff Part 1

The Story of Stuff Part 2

It pretty much lays out the lies about so many things we are told and how most of it ties together. I say buy a farm and be self sufficient.

We are constantly being sold the idea that solar power will never be able to provide the energy needed to replace oil. What a racket! Supposedly solar power just doesn't work unless you are Las Vegas, with a desert full of sun and space. But there, solar power is a very practical way to generate electricity. Look ma, no solar cells. Just mirrors used like magnifying glasses to heat water to run a turbine. You know, like you used to use to burn ants and start fires when you were a silly little kid! What a joke our lives have become that special interests come before common sense.

Check out the solar power here:

They call it concentrating solar power.

Here are the results of using solar panels:

Solar panels are much more expensive than simply converting concentrated heat into energy. The thermal approach also appears to produce more power, use less space, and avoid expensive equipment.

Let's see.

Solar concentration:
$3 million to generate 64 megawatts, at a cost of pennies per kilowatt-hour.

Solar panels:
$100 million to generate 15 megawatts, at a cost of 2 dollars and change per kilowatt-hour.

Guess what: the Department of Defense created the solar concentration power plant as research, while the solar panel power plant was built for a military base.

I like this article:
177 megawatts in one square mile. They call the technology solar thermal.

This is just crazy talk:
Jimmy Stewart & Jean Arthur on Solar Power in You Can't Take it With You

End Diatribe.

Here's some solar resources I found while researching this post. I might play around with one of the kits.

Friday, March 14, 2008

Nice Work Mr President

Bush Intervened to Weaken EPA Smog Rules

Thanks for the smog. What a creep! He didn't just allow car emission standards to be lowered; in order to do it, he had the scientifically and Supreme Court-mandated acceptable levels of car emission toxins increased.

What a cretin! I hope he ends up spending his final days in LA or some suburb of it, so he can enjoy the benefits of all the exhaust fumes he is single-handedly creating by getting smog-laden lungs.

What a jerk! When you only crave money, you only get money. And if you think that is a good thing, then enjoy your money. Life has a way of teaching the greedy to find a more worthwhile pursuit.

Jet Contrails being used for Global Cooling

I located the PBS report linked in the title of this post after I read a posting on one of my favorite sci-fi authors' blogs. He doesn't believe in global warming, or at least not in the science used so far to connect it to CO2.

Although I believe the shrinking of the glaciers is a clear sign the end of the water is nigh upon us, I tend to believe the science hasn't quite modeled the problem in terms of the glaciers.

If the glaciers once covered most of the world during the ice ages, then the melt process is just a continuation along those lines. I take it a step further and say the real problem isn't the temperature increase but the disappearing glaciers. It looks like the short-term problem of creating drinking water may already be solved by desalination plants. But the solution to the longer-term problem of a coming ice age isn't being addressed, and it may end up being the easier problem to deal with if our technology keeps advancing at the exponential pace it's on.

Personally, I feel we are all being held back by not being nomadic and by being told the solution to our energy crises is to live in close proximity in cities. How blind do you have to be not to see all the free energy given out by the sun? Don't be fooled. We all know it only takes a magnifying glass or a mirror to release that heat at a level usable for generating energy.

Updated: 2008-03-14

I found some links to the science about jet contrails:

How Do Jet Contrails Affect The Weather

Look Up They Are Spraying Us

Rocket Exhaust Leaves Mark Above Earth

Sunday, March 9, 2008

sudoers on Mac OS X

In my last post I refer to a lot of commands that require superuser privileges without prefacing them with sudo. That is how I type the commands in.

So how do I do it? I don't log in as a user with administrator privileges and I don't use su. Instead I keep a separate account with administrator privileges that I never log in to; when an authentication prompt asks for a username and password, I supply that account's credentials to perform activities that require superuser privileges.

I used to su to the account with administrative privileges to run sudo commands, until I educated myself a little bit with "man sudoers".

You should always use visudo to edit the /etc/sudoers file, because it checks the syntax of the file before committing your changes. That saves you from yourself: it prevents you from thinking you have correctly changed /etc/sudoers when all you have actually done is lock every account with administrative privileges out of performing administrative tasks. When you exit your editor, visudo will notify you of any error and give you a chance to fix the problem or abandon your changes before it commits them. If you decide to fix the problem, visudo relaunches your editor with your changes intact; it is up to you whether to keep trying to get the syntax right or abandon the edit. I locked myself out of sudo on my Mac years ago by directly updating /etc/sudoers. I think I recovered from it by rebooting or fixing file permissions. Save yourself some time by always using visudo.
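As a sketch of what such an edit might look like: run visudo, then add lines like these inside the editor it launches. The group name and command paths here are assumptions for illustration (check the real paths with "which port" and "which gem" on your system):

```shell
sudo visudo
```

```
# Inside the editor visudo opens: group the allowed commands
# under an alias, then grant them to the admin group.
Cmnd_Alias PKGTOOLS = /opt/local/bin/port, /opt/local/bin/gem
%admin ALL = PKGTOOLS
```

If the syntax is wrong, visudo reports the error on exit and offers to re-edit instead of installing the broken file.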

There are a couple of command line options to sudo that are very su-like but stick to sudo's use-your-own-password metaphor.

The "-s" command line option launches the user's own shell with superuser privileges. There are many options that control whether and which environment variables are carried over from the user's environment; the environment can also be stripped of certain variables, and forced settings of variables can be specified. In order to use the "-s" option, you have to add the user's shell, which will usually be /bin/bash, to the list of executable programs allowed in sudoers.

The "-i" command line option launches the root user's login environment. (Another option to sudo lets you switch to a different user instead.) For this you have to add the root user's shell to the allowed executable programs, with its full path, /bin/sh.
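A sketch of the sudoers entries that make "-s" and "-i" work for this setup. The account name "myadmin" is hypothetical, and the shell paths assume bash as the user's shell and /bin/sh as root's shell; adjust both to match your system:

```
# Allow the user's own shell for "sudo -s"
# and root's login shell for "sudo -i"
myadmin ALL = /bin/bash, /bin/sh
```

With that in place, "sudo -s" and "sudo -i" both prompt for the user's own password, in keeping with the sudo metaphor.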

By using either of these command line options to sudo, you avoid having to repeatedly type sudo before every command. I also avoid having to add my everyday user to the administration group, by allowing just the commands I need for one-liners, like port and gem, while still having a backdoor to any command through the shell. There are a couple of installation programs that refuse to work with my setup, but they are easily handled by logging in to my administration account to install them. The need to do that has become rarer over time.

Friday, March 7, 2008

Rails Requires RubyGems >= 0.9.4 Error on Mac OS X

Updated 2008-10-08:
Please note that I forgot to mention in this post that Ruby on Rails is installed by default with OS X Leopard. This post is most helpful to those who had previously installed Rails using MacPorts and now want to move to the Apple-supported version that comes with OS X Leopard.

I was following along with the Apple Developer Connection article that shows how to update and use the version of Ruby on Rails that comes with Mac OS X Leopard. I got through the article without a hitch, successfully creating the Events Rails site and viewing it at localhost:3000. The only change I made outside the article's instructions was to keep the script/generate and script/console commands in the same place the article recommends for script/server. I figured keeping the original data model definition commands with the Rails project would help if I wanted to change the data model later, so I moved them into a shell script in the top directory of the project. That makes a nice portable design convention; I have seen the method mentioned in a blog post before.

While I was browsing around the running instance of the Events Rails site, I decided to do a full update of all the gem packages on my system via "gem update" without the "--system" flag. Big mistake, or so I thought, when I tried to start my own Rails project a day later. I couldn't even get the generate commands to run. The response from every script/ command was the same: "Rails requires RubyGems >= 0.9.4". I googled around for the solution, which always seemed to be: remove the Leopard version and install the MacPorts version. I didn't want to do that, since I was trying to get rid of the duplicate MacPorts packages in the first place. In fact I thought I had removed all the Ruby packages from MacPorts, but on reinspection I found 4 versions of Ruby installed and one active, a la "port search ruby and installed". First I did a "port uninstall ruby and inactive" to get rid of the old versions. Then I found a Ruby gem I had somehow missed when I had done a "port uninstall rb-* and installed". After removing the lone gem, I got rid of the installed and active version of Ruby via "port uninstall ruby". A quick look at /opt/local/bin verified that there was no longer a ruby, gem, or update_rubygems command left.
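Recapped as a sequence, the cleanup above looks like this. MacPorts pseudo-portnames like "installed" and "inactive" do the filtering; the glob is quoted so the shell doesn't expand it, and you may need sudo depending on how MacPorts is installed:

```shell
# See which ruby ports are present
port search ruby and installed
# Remove the old, inactive versions first
port uninstall ruby and inactive
# Remove any ruby gems installed through MacPorts
port uninstall 'rb-*' and installed
# Finally, remove the active ruby itself
port uninstall ruby
```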

Of course I left out the part where I removed all of the manually installed gems under /opt/local/lib/ruby/1.8/rubygems with "rm -fr /opt/local/lib/ruby/1.8/rubygems". Don't quote me on that path, as I am writing this post on my iPhone and recalling it from memory. I only know the path because I also reinstalled RubyGems manually before I figured out what the real problem was. I would probably have missed the real problem if I hadn't installed RubyGems with "ruby setup.rb" after downloading it. The install commands showed RubyGems being installed to /opt/local/... I had to remove that copy of RubyGems manually, which wasn't so hard: all the install commands, with the full paths of the installed files, were still in my scrollback buffer.

I also didn't mention that I had uninstalled all my gem packages from the Leopard side of things, including the original packages under /System. Luckily one of the articles I'd googled had the complete list. It was easy enough to do a "gem install rails sqlite-ruby capistrano mongrel libxml-ruby ruby-openid ruby-yadis rubynode RedCloth sources ferret acts_as_ferret fcgi termios cgi_multipart_eof_fix daemons dnssd gem_plugin hpricot needle actionwebservice activesupport". The full list of preinstalled gem packages is available here. I abbreviated my gem install command to only include what is necessary, since installing rails pulls in some of the other packages in the list, like activerecord and rake.
Everything worked like a charm after that. Now on to writing that SFA app. After that I'll have to get back to my holy war to remove redundant MacPorts packages. I think I was up to removing those two versions of Python, 2.4 and 2.5, seeing as how Leopard ships with 2.5, and 2.4 was a buggy beast at best compared to the speed and bug fixes of version 2.5.

Monday, February 18, 2008

Fix for iPhone not syncing some video

I use the iPhone. Before upgrading iTunes and the iPhone firmware I was able to load video on the iPhone using iTunes. Now I have problems getting the same video to sync to the iPhone. I also have problems getting some video podcasts to sync to the iPhone that used to sync just fine.

I did some research using ffmpeg -i on some of the files that synced and some of the ones that wouldn't. I found that the videos that wouldn't sync all had something in common: the aspect ratio was showing up as 0:1 (for both DAR and PAR).
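A quick way to check a file, assuming a hypothetical input.mp4 (ffmpeg prints its stream info on stderr, hence the redirect):

```shell
# The Video: line includes the PAR and DAR fields;
# a broken file shows an aspect ratio of 0:1 there.
ffmpeg -i input.mp4 2>&1 | grep 'Video:'
```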

I googled some iPhone video conversion parameters and found an ffmpeg parameter that gets any video to sync: -aspect 16:9. Now I just need to come up with a set of ffmpeg conversion parameters that play for more than a few seconds before the audio drops out and the video stops playing.
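A sketch of the fix with a hypothetical input file. The -aspect flag is the point here; the frame size and codec choices are assumptions for an iPhone-sized re-encode, not a tested recipe:

```shell
# Re-encode, forcing a sane display aspect ratio
ffmpeg -i input.mp4 -aspect 16:9 -s 480x320 \
       -vcodec mpeg4 -acodec copy output.mp4
```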

Gonna try doing an ffmpeg copy and report back the results.

Update 2008-03-08

I found out a lot. The 0.49 version of ffmpeg that comes with ffmpegX works fine; it was either patched to deal with the QuickTime MPEG-4 issues or uses default settings that work with QuickTime. It turns out QuickTime needs square pixels for MPEG-4, plus the harddup video filter to avoid a frame copy problem: the duplicate-frame flag gets lost in the translation to the MPEG-4 format. I haven't figured out how to do this with ffmpeg, but all the info on this issue is dealt with in a section of the MEncoder documentation under the heading QuickTime. MEncoder uses ffmpeg's codecs via its "-lavc" options.
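Based on that, a sketch of the kind of MEncoder command the QuickTime section of its documentation points toward, with hypothetical filenames. The scale filter resamples to square pixels at the assumed iPhone frame size, and harddup comes last in the filter chain so duplicated frames are written out rather than flagged:

```shell
mencoder input.avi -o output.avi \
    -ovc lavc -lavcopts vcodec=mpeg4 \
    -vf scale=480:320,harddup \
    -oac copy
```

The exact scale target and whether the audio can be stream-copied depend on the source file, so treat this as a starting point for testing.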

I'll update with the details after I perform some tests.