Friday, October 2, 2009

Hudson servers FAIL

I noticed the worst of all coding bugs! As is the case in many Java development houses, we use Hudson for all our building. And I discovered today that when hudson.dev.java.net is down, the web interface of your own Hudson server starts to break..

For some fucked-up reason, the Hudson dev team thought it would be shit cool to make calls to their website from every Hudson server's project-specific configuration pages..

See the requests I intercepted here (the \x16\x03\x01 bytes are TLS handshakes landing on a plain HTTP port):
hudson.yourdomain.com:80 127.0.0.1 - - [02/Oct/2009:17:30:54 +0200] "\x16\x03\x01" 501 292 "-" "-"
hudson.yourdomain.com:80 127.0.0.1 - - [02/Oct/2009:17:30:54 +0200] "\x16\x03\x01" 501 292 "-" "-"
hudson.yourdomain.com:80 127.0.0.1 - - [02/Oct/2009:17:30:54 +0200] "\x16\x03\x01" 501 292 "-" "-"
hudson.yourdomain.com:80 127.0.0.1 - - [02/Oct/2009:17:30:54 +0200] "\x16\x03\x01" 501 292 "-" "-"

WTF were you thinking!!!

Anyway, if you have the missing batch-task buttons bug, or general Hudson weirdness like a submitForm error when you try to save settings, add an override for

127.0.0.1 hudson.dev.java.net

to the hosts file of the PC you are accessing your Hudson from. You can point it at any Apache server anywhere!
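If you want to script a sanity check that the override is in place, a tiny helper along these lines works (the function and its behaviour are my own sketch, not anything Hudson ships):

```python
def hosts_override(hosts_text, hostname="hudson.dev.java.net"):
    # Return the address the hosts file maps `hostname` to, or None.
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()   # drop comments
        if len(fields) >= 2 and hostname in fields[1:]:
            return fields[0]
    return None

print(hosts_override("127.0.0.1 localhost\n127.0.0.1 hudson.dev.java.net"))
# prints 127.0.0.1
```

Feed it the contents of /etc/hosts on the machine in question.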

Please go to the Hudson forums and WTF in there!

Kegan

Wednesday, September 23, 2009

Using Jython to query JMX Objects / Attributes

I was tinkering today with re-implementing some Nagios and Cacti checks in Jython to query JMX objects and attributes, since I am getting tired of recompiling my Java JMX query tool.

Anyway, here is the basic code in a nutshell. You will need to adapt this to your needs, of course.

#START
# Jython: query a JMX attribute over RMI
from java.lang import String
from java.util import HashMap
from jarray import array

from javax.management import ObjectName
from javax.management.remote import JMXConnector, JMXConnectorFactory, JMXServiceURL

# put correct auth info in here
credentials = array(["username", "password"], String)
env = HashMap()
env.put(JMXConnector.CREDENTIALS, credentials)

# this is an example; adapt the host/port to your JMX agent
jmxurl = JMXServiceURL("service:jmx:rmi:///jndi/rmi://10.0.0.233:12086/jmxrmi")
connector = JMXConnectorFactory.connect(jmxurl, env)
connection = connector.getMBeanServerConnection()

# The actual query
objectname = "java.lang:type=Memory"
attribute = "HeapMemoryUsage"

# Execute; HeapMemoryUsage comes back as CompositeData
attr = connection.getAttribute(ObjectName(objectname), attribute)
print attr
print attr.get("used")  # pick out a single field of the composite value

# Close the connection
connector.close()
#END
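Since the end goal is Nagios and Cacti checks, the composite value's used and max fields can be mapped to a Nagios exit code. A minimal sketch in plain Python (the thresholds and function name are my own, not from any standard plugin):

```python
def nagios_status(used, maximum, warn=0.8, crit=0.9):
    # Nagios exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL
    ratio = float(used) / maximum
    if ratio >= crit:
        return 2
    if ratio >= warn:
        return 1
    return 0

# e.g. with values pulled out of the HeapMemoryUsage composite
print(nagios_status(512, 1024))   # prints 0
```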

Wednesday, July 29, 2009

Importing CVS into SVN with history

Converting old CVS repos and their history is a question that always comes up. So here's a mini-howto on converting CVS to SVN while preserving the history.

Firstly install cvs2svn; on Debian-based distros you can grab it with apt:
sudo apt-get install cvs2svn

cvs2svn can also be downloaded from http://cvs2svn.tigris.org/

Next we need to set up a place to work, and create a CVSROOT dir or else cvs2svn won't be happy.
mkdir -p ~/oldcvs/CVSROOT
mkdir ~/newsvn/

Now let's copy the CVS repo's data into ~/oldcvs/modulename and run the conversion:
cp -r /path/to/cvs/modulename /home/user/oldcvs/modulename
cvs2svn --encoding=iso8859_10 --dumpfile=/home/user/newsvn/modulename.SVN ~/oldcvs/modulename
For encodings, check http://docs.python.org/library/codecs.html#standard-encodings
Also, --dumpfile didn't like the ~ in the path, so a full path is needed there!

If all goes well you should end up with a nice report of all the revisions and their mother.
Now we need to import our newly created .SVN dump file into Subversion; in my case I need to create a new project for it as well, like so:
svnadmin create /path/to/svn/repos/modulename

Then just load the dump into the SVN repo we created earlier:
svnadmin load /path/to/svn/repos/modulename <~/newsvn/modulename.SVN
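If you have a pile of modules to convert, the three commands can be generated per module. A hypothetical Python wrapper (paths are the examples from above, adjust to taste; note the svnadmin load step expects the dump file on stdin):

```python
def conversion_commands(module,
                        cvsroot="/home/user/oldcvs",
                        svnroot="/path/to/svn/repos",
                        dumpdir="/home/user/newsvn"):
    # Build the cvs2svn / svnadmin command lines for one CVS module.
    dump = "%s/%s.SVN" % (dumpdir, module)
    return [
        ["cvs2svn", "--encoding=iso8859_10", "--dumpfile=%s" % dump,
         "%s/%s" % (cvsroot, module)],
        ["svnadmin", "create", "%s/%s" % (svnroot, module)],
        # run this one with stdin redirected from the dump file
        ["svnadmin", "load", "%s/%s" % (svnroot, module)],
    ]

for cmd in conversion_commands("modulename"):
    print(" ".join(cmd))
```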

All done.

Due to the unavailability of cheap iPhone 3Gs in Sweden, I nabbed one in a second-hand store in Switzerland for 300 CHF (2200 SEK) and another two online for about 3000 SEK each.

Each phone was locked to one or another provider. Here is my experience jailbreaking and unlocking a Swisscom-locked iPhone 3G.

Firstly, get a Mac. I don't do Windows and never will. You will need PwnageTool 3.0 for Mac. Get it from one of these:
Procedure
  1. Update the iPhone to 3.0 via iTunes.
  2. Launch PwnageTool, click the Expert button, then click the iPhone 3G.
  3. You should see the 3.0 firmware in the main window; this actually comes from the iTunes library, so it's trustable.
  4. Click the 3.0 firmware and Next.
  5. Click General and Next.
  6. Check "Activate the phone"; this makes the phone activate without the provider SIM.
  7. Check "Enable baseband update"; this makes the phone unlockable via Cydia.
  8. Resize the root partition to at least 700MB, preferably 1GB if you can spare it.
  9. Leave "Neuter bootloader" unchecked.
  10. Under Cydia, choose "Manage sources" and add http://repo666.ultrasn0w.com
  11. Click Next through Packages and Logos until you can click the Build button, then Next again.
  12. Follow the instructions in PwnageTool on how to put your phone into DFU mode.
  13. Restore your custom image onto your iPhone by opening iTunes, holding down Option, and clicking Restore.
  14. Choose the custom firmware which was generated by PwnageTool.
  15. And watch the jailbreaking process.
Next we need to unlock the phone to access the GSM network. Quite easy: turn off 3G, open Cydia, search for ultrasn0w (that's a numerical ZERO) and install it, reboot the phone, and voila!

Saturday, February 21, 2009

Fixing time sync messed up VOB files

I have many home video discs recorded on a Sony mini-DVD cam which are giving me hell trying to import them into iMovie. It seems even playback was only running the first 20 seconds of any VOB file and then stopping.

Resolved my issues with a nifty little Mac app called MPEG Streamclip.

It opened up the streams and identified and corrected the timing issues right away! I could then export directly to a QuickTime H.26x format and get my movies into iMovie!

I love free software...

K

Thursday, February 19, 2009

Still bitching at /dev/random's slowness

Pondering,

So I believe I have the solution to the /dev/random-being-so-goddamn-slow issue. All still totally theoretical of course, but I reckon just thinking about it intensely for a few days will reveal the solution.

The Problem (reiterated, maybe)
/dev/random = dog slow. In fact it's so slow that it's not usable at all, and I need a truly good randomized data source; no mathematical algorithm can create that, it has to be gathered.

The Proposed Solution
Quite simply put, I'm going to run a sniffer on my wireless NIC and dump all the data as-is to a file, maybe even use a few wireless cards and some physical NICs and mux the data all together. Then I'll write a very simple C++ program which will read an integer block-size value from /dev/urandom (which is not true randomness, but enough for this purpose), read that block-size worth of data from the sniffed wireless data, and convert it to a SHA-1 fingerprint. The fingerprint should be very unique, and after converting the SHA-1 to a binary string, you should be pretty good to go.

This would be a very fast and good way to get very good random data. A few things spring to mind that need to be taken into account though. 

Firstly, tcpdump must strip all link-layer and other common elements out of the data feed, so essentially only the packet payloads should be dumped. There would still be common elements in this data, but I think the SHA-1 would overcome any collisions.

Secondly, you need a noisy wireless network to run this by, which is okay for me since I have at least 17 in listening range, but most home networks are not going to be that busy. I guess you could essentially use any data source provided it's not something common: maybe that family video you had put on DVD, or your iPhoto library!

I'll spend some time tinkering in C++ and see if I can come up with a working model and post it here.
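The plan is C++, but the pipeline is easy to sketch first in Python. Here the 1-256 byte sizing scheme and the in-memory stand-in for the sniffer feed are my own placeholders, not part of the plan above:

```python
import hashlib
import io
import os

def random_block(payload_stream):
    # Block size from the kernel's pseudo-random pool (os.urandom reads
    # /dev/urandom on Linux); not true randomness, but fine for sizing.
    size = 1 + os.urandom(1)[0]          # 1..256 bytes, placeholder scheme
    chunk = payload_stream.read(size)
    if not chunk:
        return None                      # sniffed data exhausted
    # The 20-byte SHA-1 digest of the payload chunk is one output block.
    return hashlib.sha1(chunk).digest()

# stand-in for a file of tcpdump'd packet payloads
feed = io.BytesIO(b"sniffed packet payload bytes" * 100)
print(len(random_block(feed)))   # prints 20
```

Concatenating successive digests gives the output stream; the real version would read from the capture file the sniffer writes.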

Wednesday, February 18, 2009

a Thousand Monkeys (Virtual Servers)

So I spent a few days pondering the issue of /dev/random under Linux not being able to generate enough data to fill my hard disk in a timely fashion. To backtrack: I want to prepare a new encrypted volume, a small 100GB disk.

As many people know, before you depend on your crypted disks, you need to fill the disk with very high quality random data; true, crypt-strength data to be exact.

By my calculations, on an average Linux server you only get about 3 to 4 bytes per second out of /dev/random, which works out to roughly 900 years to fill a 100GB disk with high quality random data. I am in more of a hurry than that.
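Working through that arithmetic (assuming 3.5 bytes per second):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000
disk_bytes = 100 * 10**9             # a 100GB disk
rate = 3.5                           # bytes per second from /dev/random
years = disk_bytes / rate / SECONDS_PER_YEAR
print(round(years))                  # prints 906
```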

Mac OS X Leopard seems to generate almost 10MB per second, but its entropy is questionable, so that is also not an option.

I then theorized that if I spawned many virtual hosts and ran netcat off /dev/random to a listening netcat on one of my real servers, I could then cat all the /dev/random pipes together onto the disk.

Unfortunately, as expected, it seems that /dev/random on virtual machines is pretty quiet, since the machine's acoustics are, well, artificial in nature, so there is no CPU fan noise or keyboard and mouse data to gather from.

The shell script I wrote to spawn a large number of virtual hosts could come in useful, though.

Here is the script.