Feb 03 2011

I recently set up a VPS with Arp Networks. By default they assign you your own IPv6 block; all you need to do is configure it to meet your needs. I reinstalled the VPS with Ubuntu Lucid (which is not one of their default OSes), so I needed to reconfigure my interface to use one of my assigned v6 addresses. The process is extremely easy. Here is the relevant section of my /etc/network/interfaces file:

iface eth0 inet6 static
        address 2607:f2f8:a230::2
        gateway 2607:f2f8:a230::1
        netmask 64
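
As a quick sanity check on a stanza like this, Python's ipaddress module (Python 3.3+) can confirm that the address and gateway really do sit inside the same /64:

```python
import ipaddress

# values copied from the interfaces stanza above
network = ipaddress.ip_network("2607:f2f8:a230::/64")
address = ipaddress.ip_address("2607:f2f8:a230::2")
gateway = ipaddress.ip_address("2607:f2f8:a230::1")

print(address in network and gateway in network)  # → True
```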

After restarting networking, I was able to reach the machine via IPv6. You can verify this by testing soljerome.com at http://ipv6-test.com/validate.php. Now that I’m IPv6 ready, I just wish that Comcast would finish their dual-stack rollout so I can use it natively from home. Seeing as the IANA just allocated the final five /8 blocks of IPv4 address space, I’m hoping the rollout happens sooner rather than later.

Posted at 10:22
Jan 26 2011

I was recently setting up DBStats for a Bcfg2 installation and was having serious performance issues whenever a client uploaded statistics to the server.

hwfwrv003.web.e.uh.edu:probe:groups:['group:rpm', 'group:linux', 'group:redhat', 'group:redhat-5Server', 'group:redhat-5', 'group:x86_64']
Generated config for hwfwrv003.web.e.uh.edu in 0.044s
Handled 1 events in 0.000s
Client hwfwrv003.web.e.uh.edu reported state clean
Imported data for hwfwrv003.web.e.uh.edu in 139.942095041 seconds

This is drastically slower than normal, so I moved the sqlite database onto a ramdisk.

# losetup /dev/loop0 /bcfg2/bcfg2.sqlite
# mount -t ramfs /dev/loop0 /bcfg2/
# mount | grep ramfs
/dev/loop0 on /bcfg2 type ramfs (rw)

Here is the time it took once I moved the sqlite database to a ramdisk.

hwfwrv003.web.e.uh.edu:probe:groups:['group:rpm', 'group:linux', 'group:redhat', 'group:redhat-5Server', 'group:redhat-5', 'gr
Generated config for hwfwrv003.web.e.uh.edu in 0.074s
Handled 1 events in 0.000s
Client hwfwrv003.web.e.uh.edu reported state clean
Imported data for hwfwrv003.web.e.uh.edu in 1.16791296005 seconds

That’s faster by a factor of almost 120! Clearly, something is very odd about the performance hit we take when using an ext4 filesystem. Just for comparison, I created an ext3 partition to hold the sqlite database.

# mount | grep foo
/dev/loop1 on /foo type ext3 (rw)
# ls /foo/

Here is the same client update again when using ext3 to hold the sqlite database.

hwfwrv003.web.e.uh.edu:probe:groups:['group:rpm', 'group:linux', 'group:redhat', 'group:redhat-5Server', 'group:redhat-5', 'gr
Generated config for hwfwrv003.web.e.uh.edu in 0.037s
Handled 1 events in 0.000s
Client hwfwrv003.web.e.uh.edu reported state clean
Imported data for hwfwrv003.web.e.uh.edu in 1.60297989845 seconds

I was finally able to track this down to a change in the default kernel configuration used by Ubuntu for ext4 filesystems. The change is detailed at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/588069. Ubuntu apparently decided it was a good idea to turn on write barriers by default in 10.04 (Lucid). Luckily, I was able to remount the ext4 partition without barriers (-o barrier=0), and the import time dropped back down to something more reasonable.

hwfwrv003.web.e.uh.edu:probe:groups:['group:rpm', 'group:linux', 'group:redhat', 'group:redhat-5Server', 'group:redhat-5', 'gr
Generated config for hwfwrv003.web.e.uh.edu in 0.038s
Handled 1 events in 0.000s
Client hwfwrv003.web.e.uh.edu reported state clean
Imported data for hwfwrv003.web.e.uh.edu in 6.47736501694 seconds

That’s still much slower than ext3, but it’s at least acceptable in this particular case.
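
Pulling the timings together (values copied verbatim from the logs above), the relative costs work out like this:

```python
ext4_default   = 139.942095041  # seconds: ext4, barriers on (default)
ramdisk        = 1.16791296005  # seconds: sqlite on ramfs
ext3           = 1.60297989845  # seconds: ext3
ext4_nobarrier = 6.47736501694  # seconds: ext4, barrier=0

print(round(ext4_default / ramdisk))         # → 120 (speedup on ramfs)
print(round(ext4_default / ext4_nobarrier))  # → 22 (speedup from barrier=0 alone)
```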

While I can understand the reasoning behind changing something like this, it does not seem like a good idea to drastically reduce the performance of an LTS release without at least warning people VERY LOUDLY.
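
To make the barrier-free mount survive a reboot, the option can also go in /etc/fstab. The device and mount point below are placeholders, not my actual setup:

```
# /etc/fstab — barrier=0 disables write barriers on this ext4 filesystem
/dev/sdb1  /var/lib/bcfg2  ext4  defaults,barrier=0  0  2
```

Keep in mind that barriers exist to protect the filesystem journal on power loss, so this trades some crash safety for speed.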

More information about this can be found at http://lwn.net/Articles/283161/.

Posted at 09:27
Nov 14 2010

I have recently started using ssh multiplexing and thought I’d share it with everyone. It is especially useful in cases where the initial connection negotiation takes longer than expected, or when the remote host forces password authentication and you get tired of typing your password repeatedly.

The first thing you need to do is copy the following lines to your ~/.ssh/config (or the global ssh_config):

Host *
    ControlMaster auto
    ControlPath /tmp/%r@%h:%p

After doing this, go ahead and test that things are working:

solj@abbysplaything $ ssh -f pjacosta-desktop sleep 60
solj@abbysplaything $ ls -l /tmp/solj*
srw------- 1 solj solj 0 Nov 14 13:44 /tmp/solj@pjacosta-desktop:22

Here you can see that a socket has been created for my user, which can be reused by any additional ssh connection to the same user/host/port combination. Not only does this bypass future negotiations, it also avoids opening unnecessary additional connections. It has really helped out at work, too: when I log into a remote machine with an extremely high load, I can simply reuse the existing connection if I need to open multiple sessions.
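
One related option worth knowing about: the socket goes away as soon as the master connection exits. OpenSSH 5.6 added ControlPersist, which keeps the master alive in the background after the first session closes. A sketch (only works if your OpenSSH is new enough to understand the option):

```
Host *
    ControlMaster auto
    ControlPath /tmp/%r@%h:%p
    ControlPersist 10m
```

With this, the first ssh to a host forks a background master that lingers for ten minutes after the last session ends, so even one-off commands in quick succession reuse the connection.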

I haven’t found any downsides to this sort of multiplexing, and it certainly has some upside. The feature doesn’t appear to be very popular or well publicized, but I think it provides really useful functionality.

Posted at 13:57
Nov 09 2010

The new Bcfg2 VPS is finally set up and active. After a little bit of tweaking, I was able to get the documentation building from the master git branch every five minutes. The current setup is such that the documentation for the latest stable version of Bcfg2 can be found at http://docs.bcfg2.org/ while the latest development branch documentation is at http://docs.bcfg2.org/dev.

I plan on publishing the configuration for the web server soon; however, I want to do this at the same time as the common Bcfg2 repository so that I can finally resolve both of these issues. I’m also thinking it will probably be a good idea to create a Debian repository on the VPS so that we can automate package builds. Currently our mirror at http://debian.bcfg2.org is very out of date.

My goal is to do most of this during the next code sprint. I have found that these types of things are difficult to work on unless I set aside a specific time. I am hoping that by next year we will finally have a real working example of a live Bcfg2 repository for people to see and use. Having these examples should lower the barrier to entry significantly, since new users are often tripped up by minor mistakes that are more easily avoided when you begin from a working example.

That’s all for now. More to come once I get started working on this.

Posted at 20:54
Oct 26 2010

I have just started creating a new sample Bcfg2 repository on GitHub. This post details the strategy used in the repository to make it easy to pull in updates and merge them seamlessly with your running Bcfg2 instance.

The primary goal for this repository is to be nondestructive when someone pulls in changes from upstream. Achieving that, however, requires a slightly more complicated repository structure.

The first thing you’ll probably want to do is grab a copy of the repository:

git clone git://github.com/solj/bcfg2-repo.git

As you inspect the repository, you’ll notice that I have tried to make use of XInclude in order to avoid overwriting custom repository files. The idea is that you will have three separate files (file.xml, upstreamfile.xml, and localfile.xml). file.xml starts with a comment describing the purpose of the file; it then XIncludes localfile.xml and contains a commented-out XInclude for upstreamfile.xml. The reasoning is that your local repository will most often differ slightly from what is contained upstream, so you will populate localfile.xml and merge in any changes from upstreamfile.xml manually.
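
A minimal sketch of what file.xml might look like under this scheme. The element names here are illustrative (a Metadata groups file in this case); adjust them to whichever Bcfg2 file you are splitting:

```xml
<!-- file.xml: merge point for local and upstream definitions -->
<Groups xmlns:xi="http://www.w3.org/2001/XInclude">
    <!-- site-specific changes live in localfile.xml -->
    <xi:include href="localfile.xml"/>
    <!-- uncomment to pull in the upstream definitions:
    <xi:include href="upstreamfile.xml"/>
    -->
</Groups>
```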

This appears to be working out well so far (although I haven’t really added enough content yet). I am also considering adding paths to default configuration files (e.g. ssh). I am thinking about populating them using group-specific files so that they can be overridden by adding a group file with a higher priority. I am also considering adding a separate layout under Cfg/TCheetah/TGenshi and using altsrc to bind the entries from the Bundles.

I’m hoping this all works out, and any comments/criticisms are welcome. I know that the useful examples out there are sparse and scattered. I’m hoping that we can get something together which allows people to collaborate on useful configurations easily so that the initial barrier to using Bcfg2 is lowered.

Posted at 17:39
Oct 10 2010

The xargs page on Wikipedia talks mostly about the benefits of using it when running tasks on large lists. However, I have found it useful in other situations as well.

Oftentimes I am tasked with suspending accounts at work due to CPU (over)use. During this process, I frequently come across accounts which are processing Awstats statistics for an excessive number of domains (as this is the default setting in CPanel). This is normally because the number of domains for a particular user is greater than 1500. To kill off the statistics processing, I usually just find and kill all the processes running for that user. I used to do that with the following:

for p in `ps aux | grep ^user | awk '{print $2}'`; do kill $p; done

This is actually pretty simple and works quite well. However, it normally requires reaching for the Home/End keys to wrap the command in backticks, because it’s usually only after I get the process listing that I realize awstats is running. This is where xargs comes in. The command typed just before the one above is usually this:

ps aux | grep ^user

From there, it would be nice to awk out the process ids and pipe them to kill. The intermediate step looks like this:

$ ps aux | grep ^solj | awk '{print $2}'

So, the result is simply a newline-delimited listing of the process ids running for a particular user. Note that because I am on Linux, awk is actually gawk, so I don’t have to worry about the newline separation being an issue. I finally came up with the following elegant solution to my problem:

ps aux | grep ^solj | awk '{print $2}' | xargs kill

As you can see, this makes it extremely simple to go from the previous command to the one needed to kill off all the user’s processes. No special-character modifications at the beginning of the line are necessary. While this may seem like a minuscule amount of time saved, it adds up quickly with the number of suspensions made this way :-). I look forward to using xargs for inode abuse as well, but we’ll save that for another post…
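
The same pipe shape can be tried on harmless input first, using seq in place of ps so nothing actually gets killed:

```shell
# xargs gathers the newline-separated lines into arguments for a single command
seq 1 5 | xargs echo
# → 1 2 3 4 5
```

(On systems with procps, `pkill -u user` collapses the whole kill pipeline into one command, though the xargs version generalizes to any target command.)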

Posted at 23:10
Sep 21 2010

I recently set up this blog on a virtual machine running on my home computer. Since soljerome.com was already running elsewhere, I decided to serve the content through the existing Apache instance at soljerome.com by setting it up as a reverse proxy.

This particular setup is on Ubuntu 8.04, as that is the distribution running on my web server. The web server can reach the virtual machine via an internal IP address (omitted here). Here is the relevant /etc/hosts entry:

www.solnet www blog.solnet blog

So, I am able to view the blog from the web server by browsing to http://blog.solnet. Therefore, I needed to tell Apache to take a URL like http://soljerome.com/blog, internally request http://blog.solnet, and give the result back to the viewer.

The first thing I needed to do was install the Apache mod_proxy module:

aptitude install libapache2-mod-proxy-html

Then I enabled the proxy module by running:

a2enmod proxy_html

After the module was enabled, I added the following lines to /etc/apache2/sites-available/default:

    ProxyRequests off

    <Proxy *>
        AddDefaultCharset off
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyPass /blog http://blog.solnet
    ProxyVia off

Once I restarted Apache, I was able to browse to http://soljerome.com/blog successfully. There are still a few things I have yet to get working properly (although I think most are due to bugs in WordPress). Some of the wp-admin links work but redirect to the internal address, which forces me to click the back button in my browser (annoying). Also, setting up the admin interface to use SSL has proved to be a problem.
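
The redirect problem is often a symptom of the backend sending Location headers that name the internal host. Apache’s ProxyPassReverse directive rewrites those headers on the way back out; a sketch of the pair of lines, to sit where the ProxyPass line above is:

```
    ProxyPass /blog http://blog.solnet
    ProxyPassReverse /blog http://blog.solnet
```

WordPress also bakes its configured site URL into generated links, so the admin redirects may additionally need the blog’s URL settings to match the public address.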

Posted at 17:37
Sep 14 2010

I was recently converting a bash script to Python. I needed to grab the last item (and only the last item) off the end of a list in order to implement bash’s basename, since Python’s basename function is not quite the same. The bash script had a line like the following:

tmpbase=`basename $0`

I was able to get the information I needed by using the __file__ attribute in the script itself. From this, I was able to split the full pathname like this:

solj@abbysplaything $ cat foo.py
#!/usr/bin/env python3

print(__file__.split('/'))

solj@abbysplaything $ python /home/solj/foo.py
['', 'home', 'solj', 'foo.py']

As you can tell, the length of this path can vary depending on where the user runs the script from. Therefore, I needed to grab the first item from the end of the list to properly emulate bash’s basename. I ended up with the following:

tmpbase = __file__.split('/')[-1]

The negative index allows you to count from the end of the list (I love Python). However, as it turns out, I am blind and didn’t fully read the os.path documentation. This particular problem is solved much more elegantly by os.path.split(), although I find the negative index to be an extremely useful thing to know.
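
Side by side, the negative-index trick and the os.path helpers pull out the same tail component (the path here is just an example):

```python
import os.path

path = "/home/solj/foo.py"

print(path.split('/')[-1])     # → foo.py  (negative index)
print(os.path.basename(path))  # → foo.py  (stdlib equivalent)
print(os.path.split(path))     # → ('/home/solj', 'foo.py')
```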

Posted at 19:04
Sep 07 2010

I have been using KVM for virtualization. However, I ran into some performance issues when trying to set up my home machine to run KVM as a normal user, so I decided to try VirtualBox again. One issue I had was using an LVM volume as a physical disk for the virtual machine. Here is how I solved the problem.

First, I created the LVM volume:

lvcreate --name www --size 10G images

Next, I created a vmdk file which describes the disk properties using the VBoxManage command:

VBoxManage internalcommands createrawvmdk -filename /vbox/www.vmdk -rawdisk /dev/images/www

Here are the contents of the vmdk file after running that command:

# Disk DescriptorFile

# Extent description
RW 20971520 FLAT "/dev/images/www" 0

# The disk Data Base

ddb.virtualHWVersion = "4"
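
The RW line is the extent size in 512-byte sectors, which lines up exactly with the 10G volume created above:

```python
sectors = 20971520            # from the RW line in the descriptor
print(sectors * 512)          # → 10737418240 bytes
print(sectors * 512 == 10 * 1024**3)  # → True: exactly 10 GiB
```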

Lastly, I made sure the permissions were set so my user could read the file:

chown solj. /vbox/www.vmdk

After this, I was able to add the file as a storage device, as if I were adding the LVM volume itself. This is great, since I can now grow the volume as needed if I end up storing more on the machine than initially planned.

Posted at 20:46
Sep 05 2010

Update: I have created a new github repository which contains all my various dotfiles. You can now find my latest tmux.conf at https://github.com/solj/dotfiles/blob/master/.tmux.conf

I recently made the switch from GNU Screen to tmux. It took some time to get used to, but it has turned out to be a pleasant experience. I had heard a few things about tmux before making the switch, but none of them really made me want to move away from screen. My screenrc took me weeks to get “just right” and I didn’t want to lose all that time. It doesn’t look like much, but the hardstatus/caption lines are extremely cryptic and unintuitive.

One of the issues I had with screen was that it didn’t come with vertical splits by default (I had to patch them in). Even then, once patched, it was almost unusable over the slightly slow network I was using at the time. The claim that tmux handled this better was intriguing, and when I also read that tmux used far less memory, I had to try it out.

The first difference I noticed when trying tmux was that its default layout is actually reasonable (I still remember my cluelessness when I first started using screen). It also most definitely uses less RAM:

$ tmux ls
0: 20 windows (created Fri Jul 16 19:21:20 2010) [157x50] (attached)

This was a tmux session with 20 windows. Here is the ps output:

solj     16390  0.0  0.0  23668  1192 pts/0    S+   15:48   0:00 tmux attach -d

That’s approximately 23 MB of virtual memory (the VSZ column). For comparison, here is the window list for a screen session I have open on another machine:

Num Name    Flags

0 bash    $
1 bash    $
2 bash    $
3 bash    $

…and here is that machine’s ps output:

solj      3230  0.0  0.0  24888   348 pts/1    S+   Aug27   0:00 screen -U

That’s approximately 24 MB of virtual memory, slightly more than the tmux session with 20 windows.

I decided to try to get my tmux sessions looking similar to my screen sessions. This turned out to be surprisingly easy after reading through the tmux man page. As opposed to the weeks it took to get my screenrc just right, replicating the same options in my tmux.conf took only about half a day. Not only that, tmux makes some subtle improvements possible (such as highlighting the current window and the upcoming window-status-alert options) which improve the visibility of my session.

One issue I did come across: I was trying to use the #(date) syntax to put the date in my tmux status line, but this spawned off so many processes that my session became unresponsive. After reading the man page, I realized this was unnecessary, as it clearly states:

string will be passed through strftime(3) before being used
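
So the date belongs directly in the status string; a single line like this in tmux.conf does it with no subprocesses at all (the exact format string is just an example):

```
set-option -g status-right "#H %Y-%m-%d %H:%M"
```

Here #H expands to the hostname and the strftime escapes handle the date and time.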

I am now a happy tmux user and cannot see myself switching back to something that is no longer actively developed and is unable to meet my current needs.

Posted at 21:57