Saturday, February 21, 2009

Fixing VOB files with messed-up time sync

I have a pile of home video discs recorded on a Sony mini-DVD camcorder that have been giving me hell when I try to import them into iMovie. Even straight playback would only run the first 20 seconds of any VOB file and then stop.

Resolved my issues with a nifty little Mac app called MPEG Streamclip.

It opened up the streams and identified and corrected the timecode issues right away! I could then export directly to a QuickTime H.26x format and get my movies into iMovie!

I love free software...

K

Thursday, February 19, 2009

Still bitching about /dev/random's slowness

Pondering,

So I believe I have the solution to the /dev/random-is-so-goddamn-slow issue. It's all still totally theoretical of course, but I reckon a few days of thinking about it intensely will firm up the details.

The Problem (reiterated, maybe)
/dev/random = dog slow. In fact it's so slow that it's not usable at all, and I need a source of truly random data. No mathematical algorithm can create that; it has to be gathered.

The Proposed Solution
Quite simply put, I'm going to run a sniffer on my wireless NIC and dump all the data as-is to a file, maybe even use a few wireless cards and some physical NICs and mux the data all together. Then I'll write a very simple C++ program which grabs an integer block-size value from /dev/urandom (not true randomness, but good enough for this purpose), reads that many bytes from the sniffed wireless data, and hashes the block into a SHA-1 fingerprint. The fingerprint should be effectively unique, and once you write the SHA-1 out as raw binary you should be pretty good to go.
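To make that concrete, here is a rough, untested sketch of the sort of program I have in mind. It leans on OpenSSL for the SHA1 call, and the block-size range is an arbitrary choice, so treat it as the shape of the idea rather than the finished tool:

// Sketch: hash variable-sized chunks of a sniffed-payload dump into a
// stream of raw SHA-1 digests. Assumes OpenSSL is installed for SHA1().
// Build with something like: g++ entropy.cpp -lcrypto -o entropy
#include <openssl/sha.h>

#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

int main(int argc, char* argv[])
{
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s <payload-dump> <output-file>\n", argv[0]);
        return 1;
    }

    std::ifstream dump(argv[1], std::ios::binary);
    std::ofstream out(argv[2], std::ios::binary | std::ios::app);
    std::ifstream urandom("/dev/urandom", std::ios::binary);
    if (!dump || !out || !urandom) {
        std::fprintf(stderr, "failed to open files\n");
        return 1;
    }

    while (dump) {
        // pull one byte from /dev/urandom and turn it into a block size
        // between 64 and 319 bytes (the exact range doesn't matter much)
        unsigned char sizeByte = 0;
        urandom.read(reinterpret_cast<char*>(&sizeByte), 1);
        std::size_t blockSize = 64 + sizeByte;

        // read that many bytes of sniffed payload data
        std::vector<unsigned char> block(blockSize);
        dump.read(reinterpret_cast<char*>(&block[0]), blockSize);
        std::streamsize got = dump.gcount();
        if (got <= 0)
            break;

        // fingerprint the chunk and append the raw 20-byte digest
        unsigned char digest[SHA_DIGEST_LENGTH];
        SHA1(&block[0], static_cast<std::size_t>(got), digest);
        out.write(reinterpret_cast<const char*>(digest), SHA_DIGEST_LENGTH);
    }

    return 0;
}

Run against a payload-only dump it just appends 20 bytes of digest for every chunk it eats, so the output grows a lot slower than the input, which is rather the point.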

This would be a fast way to get very good random data. A few things spring to mind that need to be taken into account though.

Firstly, tcpdump must strip the link layer headers and other common elements out of the data feed, so essentially only the packet payloads get dumped. There would still be repeated structure in this data, but I think the SHA-1 step would take care of that.

Secondly, you need a noisy wireless environment to feed this, which is okay for me since I have at least 17 networks in listening range, but most home networks are not going to be that busy. I guess you could use essentially any data source provided it's not something common: maybe that family video you put on DVD, or your iPhoto library!

I'll spend some time tinkering in C++ and see if I can come up with a working model and post it here.

Wednesday, February 18, 2009

A Thousand Monkeys (Virtual Servers)

So I spent a few days pondering the issue of /dev/random under Linux not being able to generate enough data to fill my hard disk in a timely fashion. To backtrack: I want to prepare a new encrypted volume, a small 100GB disk.

As many people know, before you depend on an encrypted disk you need to fill it with very high quality random data, true cryptographic-strength randomness to be exact.

By my calculations, on an average Linux server you only get about 3 to 4 bytes per second out of /dev/random, which works out to somewhere between 800 and 1,000 years to fill a 100GB disk with high quality random data. I am in more of a hurry than that.
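If you want to check that figure on your own machine, a throwaway C++ loop like the one below will do it. Nothing fancy, it just reads /dev/random for half a minute and prints the average rate (fair warning: on an idle box a single read can stall for quite a while):

// Quick /dev/random throughput check: read for roughly 30 seconds and
// report the average bytes per second. Plain POSIX calls, no libraries.
#include <cstdio>
#include <ctime>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0) {
        std::perror("open /dev/random");
        return 1;
    }

    const std::time_t window = 30;   // how long to sample, in seconds
    std::time_t start = std::time(0);
    long total = 0;
    char buf[64];

    // keep reading until the window has elapsed; reads block whenever
    // the kernel's entropy pool runs dry, which is exactly the problem
    while (std::time(0) - start < window) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            total += n;
    }

    close(fd);
    double elapsed = std::difftime(std::time(0), start);
    std::printf("%ld bytes in %.0f seconds = %.2f bytes/sec\n",
                total, elapsed, total / elapsed);
    return 0;
}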

Mac OS X Leopard's /dev/random seems to generate almost 10MB per second, but the quality of its entropy is questionable, so that is not an option either.

I then theorized that if I spawned many virtual hosts and piped each one's /dev/random through netcat to a listening netcat on one of my real servers, I could cat all the /dev/random streams together onto the disk.

Unfortunately, as expected, it seems that /dev/random on virtual machines is pretty quiet. The machine's physical events are, well, artificial in nature, so there is barely any keyboard, mouse or disk interrupt timing for the kernel to gather from.

The shell script I wrote to spawn a large number of virtual hosts could come in useful though.

Here is the script.