Nov 14 2010

I have recently started using ssh multiplexing and thought I’d share the technique with everyone. It is especially useful in cases where the initial connection negotiation takes longer than expected, and also when the remote host forces password authentication and you get tired of typing your password repeatedly.

The first thing you need to do is copy the following lines to your ~/.ssh/config (or the global ssh_config):

Host *
    ControlMaster auto
    ControlPath /tmp/%r@%h:%p

After doing this, go ahead and test that things are working:

solj@abbysplaything $ ssh -f pjacosta-desktop sleep 60
solj@abbysplaything $ ls -l /tmp/solj*
srw------- 1 solj solj 0 Nov 14 13:44 /tmp/solj@pjacosta-desktop:22

Here you can see that a socket has been created for my user; it can be reused by any additional ssh connection to the same user/host/port combination. Not only does this bypass future negotiations, it also avoids opening extra connections unnecessarily. This has really helped out at work: when I log into a remote machine with an extremely high load, I can simply reuse the existing connection if I need to open multiple sessions.
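
If you ever need to check whether a master connection is still alive, or to shut one down cleanly, the -O control commands come in handy (check and exit should be available in any reasonably recent OpenSSH):

ssh -O check pjacosta-desktop
ssh -O exit pjacosta-desktop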

I haven’t found any downsides to using this sort of multiplexing and it certainly has some upside. This feature doesn’t appear to be very popular or publicized, but I think that it provides really useful functionality.

 Posted by at 13:57
Nov 09 2010

The new Bcfg2 VPS is finally set up and active. After a bit of tweaking, I was able to get the documentation building from the master git branch every five minutes. The current setup is such that the documentation for the latest stable version of Bcfg2 can be found at http://docs.bcfg2.org/ while the latest development branch documentation is at http://docs.bcfg2.org/dev.

I plan on publishing the configuration for the web server soon; however, I want to do this at the same time as the common Bcfg2 repository so that I can finally resolve both of these issues. I’m also thinking that it will probably be a good idea to create a Debian repository on the VPS so that we can automate the building of packages. Currently our mirror at http://debian.bcfg2.org is very out of date.

My goal is to do most of this during the next code sprint; I have found that these types of things are difficult to work on unless I set aside a specific time. I am hoping that by next year we will finally have a real working example of a live Bcfg2 repository for people to see and use. I think that having these examples will lower the barrier to entry significantly, as new users are often tripped up by minor mistakes which are much easier to avoid when you begin from a working example.

That’s all for now. More to come once I get started working on this.

 Posted by at 20:54
Oct 26 2010

I have just started creating a new sample Bcfg2 repository on github. This post details the strategy used in this repository in order to make it easy to pull in updates and merge them seamlessly with your running Bcfg2 instance.

The primary goal for this repository is to be nondestructive when someone pulls in changes from upstream. Achieving that, however, requires a slightly more complicated repository structure.

The first thing you’ll probably want to do is grab a copy of the repository:

git clone git://github.com/solj/bcfg2-repo.git

As you inspect the repository, you’ll notice that I have tried to make use of XInclude in order to prevent overwriting custom repository files. The idea is that you will have three separate files (file.xml, upstreamfile.xml, and localfile.xml). file.xml starts with a comment describing the purpose of the file, then XIncludes localfile.xml, and contains a commented-out XInclude of upstreamfile.xml. The reasoning is that your local repository will most often differ slightly from what is contained upstream, so you will populate localfile.xml and merge in any changes from upstreamfile.xml by hand.
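
As a rough sketch, using a bundle as an example (the file names are the generic placeholders from above and the bundle name is just an illustration), file.xml ends up looking something like this:

<!-- file.xml: describes the purpose of this file -->
<Bundle name='example' xmlns:xi='http://www.w3.org/2001/XInclude'>
    <xi:include href='localfile.xml'/>
    <!-- <xi:include href='upstreamfile.xml'/> -->
</Bundle>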

This appears to be working out well so far (although I haven’t really added enough content yet). I am also considering adding paths for default configuration files (e.g. ssh). I am thinking about populating them using group-specific files so that they can be overridden by adding a group file with a higher priority. I am also considering adding a separate layout under Cfg/TCheetah/TGenshi and using altsrc to bind the entries from the Bundles.

I’m hoping this all works out, and any comments/criticisms are welcome. I know that useful examples out there are sparse and scattered. I’m hoping that we can get something together which allows people to collaborate on useful configurations easily so that the initial barrier to using Bcfg2 is lowered.

 Posted by at 17:39
Oct 10 2010

The xargs page on Wikipedia talks mostly about the benefits of using it when running tasks on large lists. However, I have found it to be useful in other situations as well.

Oftentimes I am tasked with suspending accounts at work due to CPU (over)use. During this process, I frequently come across accounts which are processing AWStats statistics for an excessive number of domains (as this is the default setting in cPanel). This is normally because the number of domains for a particular user is greater than 1500. In order to kill off the statistics processing, I usually just find and kill all the processes running for that user. I used to use the following to do that:

for p in `ps aux | grep ^user | awk '{print $2}'`; do kill $p; done

This is actually pretty simple and works quite well. However, it normally means jumping back through the line with the Home/End keys to add the backticks, because it’s usually only after I get the process listing that I realize awstats is running. This is where xargs comes in. Usually the command typed just before the above is this:

ps aux | grep ^user

From there, it would be nice to awk out the process ids and pipe them to kill. Here is some sample output of what I’m talking about:

$ ps aux | grep ^solj | awk '{print $2}'
2190
2205
2237
2240

So, the result is simply a newline-delimited listing of the various process ids running for a particular user. Note that because I am on Linux, awk is actually gawk so I don’t have to worry about the newline separation being an issue. Therefore, I finally came up with the following elegant solution to my problem:

ps aux | grep ^solj | awk '{print $2}' | xargs kill

As you can see, this makes it extremely simple to go from the previous command to the one needed to kill off all of the user’s processes. No edits at the beginning of the line are necessary. While this may seem like a minuscule amount of time saved, it adds up quickly given the number of suspensions made in this manner :-). I look forward to using xargs for inode abuse as well, but we’ll save that for another post…
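
One small refinement worth mentioning: GNU xargs (which is what Linux ships) has a -r/--no-run-if-empty flag, so kill is never invoked with an empty argument list when the grep matches nothing:

ps aux | grep ^solj | awk '{print $2}' | xargs -r kill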

 Posted by at 23:10
Sep 21 2010

I recently set up this blog on a virtual machine which is running on my home computer. Since soljerome.com was already running elsewhere, I decided to serve the content through the existing Apache instance at soljerome.com by setting it up as a reverse proxy.

This particular setup is on Ubuntu 8.04 as that is the distribution running on my web server. The web server can reach the virtual machine at the internal IP address 10.10.10.34. Here is the /etc/hosts entry:

10.10.10.34 www.solnet www blog.solnet blog

So, I am able to view the blog from the web server by browsing to http://blog.solnet. Therefore, I needed to tell Apache to take a URL like http://soljerome.com/blog, internally request http://blog.solnet, and give the result back to the viewer.

The first thing I needed to do was install the Apache mod_proxy_html module:

aptitude install libapache2-mod-proxy-html

Then I enabled the proxy module by running

a2enmod proxy_html
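
Depending on how the packages are set up, the core proxy modules may also need to be enabled before ProxyPass will work:

a2enmod proxy
a2enmod proxy_http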

After the module was enabled, I added the following lines to /etc/apache2/sites-available/default

    ProxyRequests off    

    <Proxy *>
        AddDefaultCharset off
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass /blog http://blog.solnet
    ProxyVia off

Once I restarted Apache, I was able to browse to http://soljerome.com/blog successfully. There are still a few things that I have yet to get working properly (although I think most are due to bugs in WordPress). Some of the wp-admin links work but redirect to the internal address, which forces me to click the back button in my browser (annoying). Also, trying to set up the admin interface to use SSL has proved to be a problem.
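
The wp-admin redirects are most likely Apache passing back the backend’s Location headers untouched; adding a ProxyPassReverse directive next to the ProxyPass (untested on the setup above) should rewrite those redirects to the public URL:

    ProxyPass /blog http://blog.solnet
    ProxyPassReverse /blog http://blog.solnet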

 Posted by at 17:37
Sep 07 2010

I have been used to using KVM for virtualization. However, I ran into some performance issues when trying to set up my home machine to run KVM as a normal user, so I decided to try out VirtualBox again. One issue I had was using an LVM volume as a physical disk for the virtual machine. Here is how I solved the problem.

First, I created the LVM logical volume (in a volume group named images):

lvcreate --name www --size 10G images

Next, I created a vmdk file which describes the disk properties using the VBoxManage command:

VBoxManage internalcommands createrawvmdk -filename /vbox/www.vmdk -rawdisk /dev/images/www

Here are the contents of the vmdk file after running that command:

# Disk DescriptorFile
version=1
CID=e5ee218c
parentCID=ffffffff
createType="fullDevice"

# Extent description
RW 20971520 FLAT "/dev/images/www" 0

# The disk Data Base
#DDB

ddb.virtualHWVersion = "4"
ddb.adapterType="ide"
ddb.geometry.cylinders="16383"
ddb.geometry.heads="16"
ddb.geometry.sectors="63"
ddb.uuid.image="46527bd3-f962-43cc-8a43-11aafd3425aa"
ddb.uuid.parent="00000000-0000-0000-0000-000000000000"
ddb.uuid.modification="00000000-0000-0000-0000-000000000000"
ddb.uuid.parentmodification="00000000-0000-0000-0000-000000000000"
ddb.geometry.biosCylinders="1024"
ddb.geometry.biosHeads="255"
ddb.geometry.biosSectors="63"

Lastly, I made sure the permissions were set so my user could read the file

chown solj. /vbox/www.vmdk
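
Note that VirtualBox also needs read/write access to the raw device itself, not just the vmdk descriptor. On Debian-style systems the device node is typically group-owned by disk, so something along these lines would work (the exact group varies by distribution, and a udev rule scoped to just this volume would be a tighter option):

usermod -a -G disk solj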

After this, I was able to add the file as a storage device just as if I were adding the LVM volume itself. This is great since now I can grow the volume as needed if I end up storing more on the machine than initially planned.
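
Growing it later should just be an lvextend away (plus whatever partition and filesystem resizing is needed inside the guest). Note that the vmdk descriptor records the extent size, so it would most likely need to be regenerated with the same createrawvmdk command afterwards:

lvextend --size +5G /dev/images/www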

 Posted by at 20:46
Sep 05 2010

Update: I have created a new github repository which contains all my various dotfiles. You can now find my latest tmux.conf at https://github.com/solj/dotfiles/blob/master/.tmux.conf

I recently made the switch from GNU Screen to tmux. It took some time to get used to, but it has turned out to be a pleasant experience. I had heard a few things about tmux before making the switch, but none of them really made me want to move away from screen. My screenrc took weeks to get “just right” and I didn’t want to lose all that time. It doesn’t look like much, but the hardstatus/caption lines are extremely cryptic and non-intuitive.

One of the issues I had with screen was that it didn’t come with vertical splits by default (I had to patch them in), and even then the splits were almost unusable over the slightly slow network I was using at the time. The claim that tmux handled this better was intriguing, and when I also read that tmux used far less memory, I had to try it out.
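
For reference, side-by-side panes are built into tmux (bound to prefix + % by default) and can also be created from a shell inside an existing session:

tmux split-window -h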

The first difference I noticed when trying tmux was that the default layout on startup was actually reasonable (I still remember my cluelessness when I first started using screen). It also most definitely uses less RAM:

$ tmux ls
0: 20 windows (created Fri Jul 16 19:21:20 2010) [157x50] (attached)

This was a tmux session with 20 windows. Here is the ps output

solj     16390  0.0  0.0  23668  1192 pts/0    S+   15:48   0:00 tmux attach -d

That’s approximately 23MB of RAM. Here is the window list for a screen session I have open on another machine:

Num Name   Flags

0   bash   $
1   bash   $
2   bash   $
3   bash   $

...and here is that machine’s ps output:

solj      3230  0.0  0.0  24888   348 pts/1    S+   Aug27   0:00 screen -U

That’s approximately 24 MB of RAM, slightly more than the tmux session with 20 windows even though this screen session only has four windows open.

I decided to attempt to get my tmux sessions looking similar to my screen sessions. This turned out to be surprisingly easy after reading through the tmux man page. As opposed to the weeks it took me to get my screenrc just right, modifying my tmux.conf with the same options only took about half a day. Not only that, tmux makes possible some subtle improvements (such as highlighting the current window and the upcoming window-status-alert options) which improve the visibility of my session.

One issue I did come across: I was using the #(date) syntax to put the date in my tmux status line, and this eventually made my session unresponsive after spawning off too many processes. After reading the man page, I realized this was unnecessary, as it clearly states:

string will be passed through strftime(3) before being used
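
In other words, strftime conversions can go straight into the status line. Something like this in tmux.conf does the job without spawning any external processes (the format string here is just an example):

set -g status-right '%Y-%m-%d %H:%M'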

I am now a happy tmux user and cannot see myself ever switching back to something that is no longer actively developed and unable to meet my current needs.

 Posted by at 21:57
Aug 25 2010

While most of this is covered in the Bcfg2 docs, people still ask questions from time to time about writing client tools. In this post, I will cover the answer to a specific question that was posted to the mailing list recently.

We would like to change how ConfigFiles are copied on the client. This
means rewriting the InstallConfigFile method in the POSIX client plugin.
I wanted to get feed back how best to go about doing this. Would it make
sense to create a new Plug-in or modify the current one?

This post will implement a simple client tool which subclasses the POSIX client tool driver and replaces the InstallConfigFile method with a custom method.

The first step is to create the new client tool:

[root@bcfg2] ~ # cat bcfg2/src/lib/Client/Tools/myPOSIX.py
import Bcfg2.Client.Tools
import Bcfg2.Client.Tools.POSIX
import Bcfg2.Options

class myPOSIX(Bcfg2.Client.Tools.POSIX.POSIX):
    """POSIX tool driver that only overrides InstallConfigFile."""
    name = 'myPOSIX'
    __execs__ = ['/bin/true']
    conflicts = ['POSIX']
    __handles__ = [('Path', 'file')]

    # Redefine InstallConfigFile here; this stub reports success
    # without touching the filesystem.
    def InstallConfigFile(self, entry):
        return True

All this simple client tool does is redefine the InstallConfigFile method from POSIX.py; everything else stays the same. This approach is extremely useful when an existing client tool mostly does what you want, but you wish to change how one particular method is implemented.

It is also worth noting that if you have a case where you need to augment something, you might check with others either on the bcfg2-dev mailing list or in #bcfg2 on Freenode as they may have run into similar issues. We are more than willing to accept useful code upstream.
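
If you need to augment the existing behavior rather than replace it outright, the override can do its extra work and then fall through to the parent implementation. A rough sketch (the log message is purely illustrative):

    def InstallConfigFile(self, entry):
        # do something custom first...
        self.logger.info("Installing %s via myPOSIX" % entry.get('name'))
        # ...then fall back to the stock POSIX behavior
        return Bcfg2.Client.Tools.POSIX.POSIX.InstallConfigFile(self, entry)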

So, getting back to the client tool, if we go ahead and run this with the following Path specified:

[root@bcfg2] /var/lib/bcfg2 # cat Bundler/foo.xml
<Bundle name='foo'>
    <Path name='/root/foo'/>
</Bundle>

[root@bcfg2] /var/lib/bcfg2 # cat Cfg/root/foo/foo
bar

then we get the following:

[root@bcfg2] ~ # bcfg2 -qI
---

+++

@@ -1,1 +1,2 @@

+bar

Install Path /root/foo: (y/N): y

[root@bcfg2] ~ # cat /root/foo
cat: /root/foo: No such file or directory

which is exactly what we expect, since we replaced the InstallConfigFile method with what amounts to a no-op. This simple client tool was just used for illustrative purposes; if you were actually implementing this, your replacement would obviously do something more useful.

That concludes this simple post outlining the basics of modifying existing Bcfg2 client tools.

 Posted by at 21:07
Aug 21 2010

At work the other day, I found myself needing to install a bunch of gems with differing versions. I had created a file that looked something like this, listing all the gems (along with specific versions) that were requested:

foo --version 1
bar --version 2
foobar --version 3

So, I tried running a bash for loop over the items to get them installed. However, I soon found that this wasn’t going to work:

$ for gem in `cat gems`; do echo $gem; done
foo
--version
1
bar
--version
2
foobar
--version
3

Bash was treating any whitespace as a separator between items in the loop. After searching around for a bit, I found that the POSIX read utility makes this work as expected, since read consumes a whole line at a time:

$ cat gems | while read line; do echo $line; done
foo --version 1
bar --version 2
foobar --version 3

Exactly what I needed.
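
From there, the actual install pass is just a matter of swapping echo for gem install; something like this should do it ($line is deliberately left unquoted so that the --version flag and version number are passed as separate arguments):

cat gems | while read line; do gem install $line; done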

 Posted by at 18:41