Oct 25, 2011

I use Bcfg2 to create and synchronize the /etc/ssh/ssh_known_hosts file across all the machines I manage. The result of this is that the known_hosts file actually contains useful information.

The one case where this bites me is when I want to boot from a live CD and image the drive on the machine itself. Booting into the live CD and starting sshd creates new host keys, which gives me this ugly message:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
69:38:ba:80:93:b8:2a:29:ec:b3:65:e2:40:da:78:54.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending key in /etc/ssh/ssh_known_hosts:153
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Permission denied (publickey,keyboard-interactive).

I don’t want to go to the trouble of editing the global known_hosts file since it actually contains correct information (and someone may want to use that before bcfg2 runs again). Therefore, I just want to temporarily disable checking of the file. I found a cool little option for ssh to do just that. It’s called GlobalKnownHostsFile and we can set it to /dev/null to temporarily turn off the feature.

ssh -o GlobalKnownHostsFile=/dev/null

You will probably want to use this in conjunction with the UserKnownHostsFile option so that the client doesn’t save the temporary key to your ~/.ssh/known_hosts.
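
For example, to log into a machine booted from the live CD without consulting or updating either known_hosts file (the address and user here are just placeholders):

ssh -o GlobalKnownHostsFile=/dev/null -o UserKnownHostsFile=/dev/null root@192.0.2.10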

Oct 23, 2011

UPDATE: In response to a comment below, I have added this warning to the top. Please do not use any of these files unmodified. They have been created/tested for my purposes and are meant to be guides which will help you understand how the Debian preseed process works.

In this post, I will walk through a simple preseed file that can be used to install a very minimal Debian (wheezy) machine in ~10 minutes (depending on the mirror used). The installer will only ask for the hostname. Everything else will be automated.

To get started, you will want to download the netboot ISO. You can get this from http://tinyurl.com/67nlk8q or any other Debian mirror. If all your machines are on the same network, it may make sense to set up gPXE. Details on that will be covered in a later post.

In order to use the preseed file outlined below, you will need to boot with the following appended options (press TAB at the installer screen). Note that the debugging variables are only necessary if you are having trouble.

DEBCONF_DEBUG=5 locale=en_US.UTF-8 console-keymaps-at/keymap=us domain=unassigned-domain url=http://www.siriad.com/preseed/wheezy.cfg

The first thing we will do is configure the networking settings necessary to automate the install.

##############
# Networking
##############

# Uncomment and fill in these in order to preseed the hostname question
#d-i netcfg/get_hostname string unassigned-hostname
#d-i netcfg/get_domain string unassigned-domain
d-i netcfg/choose_interface select eth0
d-i mirror/http/proxy string

I am pointing to the default US Debian archive. You should change this to suit your setup. Also note that this is where we tell the installer to use the “wheezy” installation sources.

########################
# Installation Sources
########################

d-i mirror/country string US
d-i mirror/http/mirror string ftp.us.debian.org
d-i mirror/http/directory string /debian/
d-i mirror/suite string wheezy

Here, I am using the default partitioning scheme and wiping any existing partitions. You may need to change this if you want custom partitions.

#################################
# Disk Partitioning/Boot loader
#################################

d-i partman-auto/disk string /dev/sda
#d-i partman-auto/method string lvm
d-i partman-auto/method string regular
d-i partman-auto/purge_lvm_from_device boolean true

# And the same goes for the confirmation to write the lvm partitions.
#d-i partman-lvm/confirm boolean true

# You can choose from any of the predefined partitioning recipes.
# Note: this must be preseeded with a localized (translated) value.
#d-i partman-auto/choose_recipe \
#       select All files in one partition (recommended for new users)
d-i partman-auto/choose_recipe select /lib/partman/recipes/30atomic
#d-i partman-auto/choose_recipe \
#       select Separate /home partition
#d-i partman-auto/choose_recipe \
#       select Separate /home, /usr, /var, and /tmp partitions

# This makes partman automatically partition without confirmation.
d-i partman/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true

d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i grub-pc/install_devices multiselect /dev/sda

Once again, your localization settings will likely differ from these, so modify as needed.

#################
# Localizations
#################

# Keyboard localization
d-i console-keymaps-at/keymap select us
#d-i console-setup/variantcode string dvorak

# Timezone
d-i clock-setup/utc boolean true
d-i time/zone string America/Chicago

d-i apt-setup/wheezy-updates boolean true
d-i apt-setup/non-free boolean true
d-i apt-setup/security-updates boolean true
d-i apt-setup/contrib boolean true

I usually don’t set up a default user when I install servers. These settings just create a root user (with login capabilities) with the password ‘r00tme’. You will not want to use this preseed file unmodified if your machine is connected directly to the internet. You can also configure preseed with a crypted root password, but I still recommend changing it once the install is complete.

#################
# User Creation
#################

d-i passwd/root-login boolean true
d-i passwd/make-user boolean false
d-i passwd/root-password password r00tme
d-i passwd/root-password-again password r00tme
d-i user-setup/allow-password-weak boolean true
d-i user-setup/password-weak boolean true
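
If you would rather not leave a plaintext password in the preseed file, the installer also accepts a crypted hash. This is only a sketch (the bracketed hash is a placeholder, not a real value): generate a hash with mkpasswd from the whois package, then use the root-password-crypted question in place of the two plaintext password lines above.

# run on any machine with the whois package installed
mkpasswd -m sha-512

d-i passwd/root-password-crypted password [crypt hash printed by mkpasswd]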

Set up Bcfg2 to do the post-install business (this will be covered in a later post).

#######################
# Software Selections
#######################

tasksel tasksel/first multiselect
d-i pkgsel/include string openvpn vim openssh-server
d-i base-installer/install-recommends boolean false
d-i popularity-contest/participate boolean false

# don't try and do automatic updates; that's bcfg2's job
d-i pkgsel/update-policy select none

d-i finish-install/reboot_in_progress note

d-i preseed/late_command string \
        in-target wget http://www.siriad.com/preseed/postinst.sh -O /root/postinst.sh; \
        in-target /bin/bash /root/postinst.sh
Aug 31, 2011

I was unable to find any guides which accurately described setting up a NFSv4 client with Kerberos on Gentoo. There are guides for setting things up on other distros, but I have run into numerous issues which were directly related to using Gentoo. Therefore, I am going to use this guide to document some of those problems. Please note that the NFS server is running Ubuntu 10.04, so there are some parts of this guide which won’t apply to Gentoo.

Setting up the Kerberos server is fairly straightforward; however, there is a difference in the way things are compiled on Gentoo. The OpenAFS guide on the wiki is mostly correct. I’ll reiterate the correct steps here.

Installation

First, you need to install the Kerberos server.

emerge -av mit-krb5

Copy the /etc/krb5.conf.example file that is included over to /etc/krb5.conf and edit it according to your needs.

cp /etc/krb5.conf.example /etc/krb5.conf

The edited file will look similar to this:

[libdefaults]
        default_realm = EXAMPLE.COM
        forwardable = true
        renew_lifetime = 7days

[realms]
        EXAMPLE.COM = {
                kdc = krb.example.com
                admin_server = krb.example.com
        }

[domain_realm]
        .example.com = EXAMPLE.COM
        example.com = EXAMPLE.COM

You will need to replace “EXAMPLE.COM”, “example.com”, and “krb.example.com” with appropriate values for your environment. Note that realm names are always uppercase. The name of your KDC (krb.example.com in the example) is arbitrary.

Setting up the primary KDC

This is where the OpenAFS guide is confusing. The kdc.conf file should reside at /var/lib/krb5kdc/kdc.conf, not /etc/kdc.conf. So, go ahead and copy /var/lib/krb5kdc/kdc.conf.example to create the new file. Here is what the contents should look like.

[kdcdefaults]
        kdc_ports = 750,88

[realms]
        EXAMPLE.COM = {
                database_name = /var/lib/krb5kdc/principal
                admin_keytab = FILE:/var/lib/krb5kdc/kadm5.keytab
                acl_file = /var/lib/krb5kdc/kadm5.acl
                key_stash_file = /var/lib/krb5kdc/.k5.EXAMPLE.COM
                kdc_ports = 750,88
                max_life = 10h 0m 0s
                max_renewable_life = 7d 0h 0m 0s
                default_principal_flags = +preauth
        }

[logging]
        kdc = FILE:/var/log/kerberos/kdc.log
        admin_server = FILE:/var/log/kerberos/kadmin.log

Replace “EXAMPLE.COM” with your own realm name. Also note that some of the options above are changed from their default values. I have added a logging section at the end and changed the directory where things reside.

An important difference is that the default_principal_flags has been set to +preauth. The reason for this is that without it, Kerberos is vulnerable to offline dictionary attacks. If you are going to have your KDC publicly accessible, then you definitely want to consider enabling preauthentication. In my opinion, you probably want this even if the KDC is not publicly accessible, but that’s because I trust no one.

After modifying /var/lib/krb5kdc/kadm5.acl to your liking, you can go ahead and create the database.

cd /var/lib/krb5kdc
kdb5_util create -r EXAMPLE.COM -s

As usual, make sure you use your realm name.

Principal Creation

I’ll leave this as an exercise for the reader. I generally create varying policies for services and users, and those won’t be entirely useful for most people. For a really good guide on creating/using policies, see http://techpubs.spinlocksolutions.com/dklar/kerberos.html#id2500817.
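
For the sake of completeness, a minimal (hypothetical) example using kadmin.local on the KDC might look like the following; the policy names and the principal are placeholders.

kadmin.local:  add_policy -minlength 8 service
kadmin.local:  add_policy -minlength 8 user
kadmin.local:  addprinc -policy user jdoe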

Start Kerberos Server

To start the kdc and kadmind servers, run the following.

/etc/init.d/mit-krb5kadmind start
/etc/init.d/mit-krb5kdc start

Add them to the default runlevel so that they start up after a reboot:

rc-update add mit-krb5kadmind default
rc-update add mit-krb5kdc default

Installing NFSv4 client

First, install the NFS client utilities:

emerge -av nfs-utils

You will want to make sure you have both the kerberos and the nfsv4 USE flags enabled.

Configuring the kernel

You will need to configure the kernel with the relevant options. I won’t bother going through that entire process. Rather, I’ll point out some things that went wrong for me but weren’t immediately obvious.

The kernel needs to have the rpcsec_gss_krb5 option configured as a module. I spent quite a while debugging this because I had the option compiled directly into the kernel instead. Looking in the NFS client’s syslog, I found this obscure error message.

gss_create: Pseudoflavor 390003 not found!
RPC: Couldn't create auth handle (flavor 390003)

Whatever the hell that means. Surprisingly, there are very few references to this error. One that I found suggested recompiling the kernel with rpcsec_gss_krb5 as a module and simply loading it after boot. This actually worked.

Adding nfs principals

Both the nfs server and the nfs client need nfs principals added to their krb5.keytab. Since my nfs server was running an older kernel (Ubuntu 10.04), I needed to do a couple things to get this to work.

First, you need to add an nfs principal for both the client and the server. In my case, the server needed an encryption type which isn’t generated by default on a Gentoo Kerberos server. Therefore, I generated the principal like this.

addprinc -policy service -randkey -e "des-cbc-crc:normal" nfs/nfsserver

Since I had a service policy defined, this created the nfs/nfsserver principal with the “des-cbc-crc” encryption type. This is necessary for the older version of NFS that is available for Ubuntu 10.04. You then need to log in to the NFS server, run kadmin, and do the following.

kadmin:  ktadd -e des-cbc-crc:normal nfs/nfsserver

This will add the entry to your nfs server’s host keytab. Using this encryption type is extremely important. If you don’t, you will probably end up with very cryptic errors like the ones I had.

rpc.svcgssd: ERROR: prepare_krb5_rfc_cfx_buffer: not implemented
rpc.svcgssd: ERROR: failed serializing krb5 context for kernel
rpc.svcgssd: WARNING: handle_nullreq: serialize_context_for_kernel failed

This indicates that the NFS server has not implemented the encryption types being used in your keytab.
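
A quick way to see which encryption types actually ended up in a keytab is to list it with the -e flag, which prints each entry’s enctype:

klist -k -e /etc/krb5.keytab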

Now you just need to add an nfs principal for your client. In this case, Gentoo had support for the more recent encryption types, so I didn’t need to do anything special. I just created the principal.

addprinc -policy service -randkey nfs/nfsclient

Then I added it to the client’s host keytab using kadmin on the client:

kadmin:  ktadd nfs/nfsclient

Lastly, you need to make sure you allow for weak encryption types in the /etc/krb5.conf file. Add the following to the [libdefaults] section.

allow_weak_crypto = true

Setting up the NFS server

First, you need to allow for weak encryption types on the NFS server. You can do this by modifying the /etc/krb5.conf file. You will need to add the following two lines in the [libdefaults] section.

allow_weak_crypto = true
permitted_enctypes = "des-cbc-crc arcfour-hmac des3-cbc-sha1 aes128-cts-hmac-sha1-96 aes256-cts-hmac-sha1-96"

Note that the values listed as permitted are those generated by default on my Kerberos server. Please DO NOT set the default encryption type to the weak encryption. I see far too many howtos that tell you to do this and it is NOT a good idea. If you can use the stronger encryption for things other than NFS, there is no reason not to.

On the NFS server, you also need to make sure that rpc.svcgssd is set to start alongside NFS. On Ubuntu, you can do this by editing your /etc/default/nfs-kernel-server file and setting the following line.

NEED_SVCGSSD=yes

You will also need to edit the following line in the /etc/default/nfs-common file.

NEED_IDMAPD=yes

Edit the /etc/idmapd.conf file and set the Domain line to the appropriate value for your environment. Make sure you restart rpc.idmapd if necessary.
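
Using the example domain from earlier, the relevant portion of /etc/idmapd.conf would look something like this.

[General]
Domain = example.com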

Lastly, you need to modify /etc/exports with the appropriate values. My export looks something like this.

/export/dir        gss/krb5(rw,fsid=0,insecure,no_subtree_check)

You can then restart the nfs-kernel-server service and your NFS server should be ready to go.

Setting up the NFS client

You need to first make sure that rpc.idmapd and rpc.gssd are set to start with nfs. Edit your /etc/conf.d/nfs file and modify the following line.

NFS_NEEDED_SERVICES="rpc.idmapd rpc.gssd"

You will need to edit /etc/idmapd.conf with the same information from the NFS server. Then you can run /etc/init.d/nfs restart and test your NFS mount.

Testing your NFS mount

You can now test your nfs mount with the following command

 mount -vvv -t nfs4 -o sec=krb5 nfsserver:/ test/

This should work successfully and you should be able to see the appropriate requests coming through in your KDC logs.
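
If the manual mount works, you can make it persistent with an fstab entry along these lines (the mount point is just an example):

nfsserver:/    /mnt/nfs    nfs4    sec=krb5,rw    0 0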

Aug 26, 2011

Recently, while trying to resolve a bug in Bcfg2, I ran into a situation which can be summed up by the following:

Python 2.7.1 (r271:86832, Mar 26 2011, 11:26:21)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os, stat
>>> dev = os.makedev(1, 3)
>>> mode = stat.S_IFCHR | 0777
>>> print(mode)
8703
>>> os.mknod('test', mode, dev)
>>> os.stat('test')
posix.stat_result(st_mode=8685, st_ino=1148358, st_dev=12L, st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=1314372451, st_mtime=1314372451, st_ctime=1314372451)

Above, you can see that the mode specified ends up being different from the mode that is set by os.mknod. Instead of a character device with permissions of 0777, I was ending up with permissions of 0755. If you read the os module documentation, you will find no mention of the umask of the running process in the os.mknod section. However, if you search around the page, you will see that the umask of the running process is documented as being masked out for other methods.
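
The difference is consistent with the process umask being applied: with the usual umask of 022, masking it out of the requested permissions yields the observed ones, which you can confirm with a bit of shell arithmetic.

$ printf '%o\n' $(( 0777 & ~0022 ))
755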

The inconsistency arises due to the implementation of mknod used by Python. For instance, if you run the above code on Windows under Cygwin, it does the Right Thing ™. This was my clue that there was something about the implementation that was off. Sure enough, after committing a simple fix, the problem disappeared.

I think this is simply a documentation issue, but I was unable to find any information on the problem while searching around. Hopefully this post will save someone from wasting a ton of time on the same issue.

Aug 17, 2011

This is just a quick post to show how I go about debugging problems with GSSAPIAuthentication. You want to debug both the server side and the client side, so the first thing to do is start a new instance of the openssh server in the foreground on a different port.

# `which sshd` -o "GSSAPIAuthentication yes" -d -D -p 2222
debug1: sshd version OpenSSH_5.3p1 Debian-3ubuntu7
debug1: read PEM private key done: type RSA
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: private host key: #0 type 1 RSA
debug1: read PEM private key done: type DSA
debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
debug1: private host key: #1 type 2 DSA
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-d'
debug1: rexec_argv[2]='-D'
debug1: rexec_argv[3]='-p'
debug1: rexec_argv[4]='2222'
debug1: Bind to port 2222 on 0.0.0.0.
Server listening on 0.0.0.0 port 2222.
debug1: Bind to port 2222 on ::.
Server listening on :: port 2222.

This will start up the ssh server listening on port 2222 with debugging turned on. Then you need to try connecting to this instance from the client that is unable to connect.

$ ssh -o "GSSAPIAuthentication yes" -vvv -p 2222 server.example.com

This will output a ton of information on both the server and the client, which should help you figure out why you are unable to log in using GSSAPIAuthentication. Some common pitfalls to keep in mind:

  • Make sure you have GSSAPIAuthentication turned on either globally or for the user trying to log in (the commands above enable it explicitly, so if things work with them, a missing configuration option may be your problem).
  • Make sure you have created a host principal for the ssh server and have added it to that machine’s /etc/krb5.keytab (see the example after this list).
    • You can test this by logging into the ssh server and running klist -k.
      # klist -k
      	Keytab name: WRFILE:/etc/krb5.keytab
      	KVNO Principal
      	---- --------------------------------------------------------------------------
      	   2 host/server.example.com@EXAMPLE.COM
      	   2 host/server.example.com@EXAMPLE.COM
      	   2 host/server.example.com@EXAMPLE.COM
      	   2 host/server.example.com@EXAMPLE.COM
  • If none of these steps turn up anything useful, check the kdc logs for errors.
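
For the second item above, creating the host principal and adding it to the ssh server’s keytab with MIT kadmin looks roughly like this (run the ktadd on the ssh server itself so the key lands in its /etc/krb5.keytab):

kadmin:  addprinc -randkey host/server.example.com
kadmin:  ktadd host/server.example.com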

Please note that the environment referred to above is using MIT Kerberos. I would expect the methods for debugging other software to be similar, but I cannot guarantee that the kerberos-related commands will be the same.

Jul 29, 2011

Sometimes I want to take an image of an entire disk and back it up to a disk on another host on the same network. While one could set up ssh, rsync, or some other mechanism to accomplish this, sometimes it is just easier to pipe dd to nc so that you don’t have to spend a lot of time configuring network settings. So, here’s a quick and simple way to back up an entire disk image to another machine. On the receiving host, you’ll want to start up nc with the following command.

nc -l 9876 | dd of=/path/to/img

This will get the machine listening for connections on port 9876, piping everything to dd and into the destination image file. Once you have that running, you will need to boot the source machine into either a live environment off optical media or off a different hard disk than the one you’re trying to back up. In this example, I am backing up /dev/sda on the source machine. So, now that the destination machine is listening, we can start up dd on the source machine and pipe the output to nc.

dd if=/dev/sda | nc destinationip 9876

That’s pretty much all there is to it. Be sure that no other machines send traffic to the destination machine on the port you’ve chosen (9876 in this example).
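
If network bandwidth is the limiting factor, the same approach works with compression in the pipe. This is just a variation on the commands above and assumes gzip is available in the live environment:

# on the destination machine
nc -l 9876 | gunzip -c | dd of=/path/to/img
# on the source machine
dd if=/dev/sda | gzip -c | nc destinationip 9876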

Once you have backed up the entire image of the drive, you can then use kpartx to make the partitions available for mounting. Running the following command will list the available partitions from the drive image.

kpartx -l /path/to/img

This should give you output something like the following.

loop0p1: 0 305172 /dev/loop0 63
loop0p2: 0 40965750 /dev/loop0 305235
loop0p3: 0 210322980 /dev/loop0 41270985
loop0p4: 0 33559722 /dev/loop0 251594028

To make them available, run kpartx -a /path/to/img. You should then have the mappings available in /dev/mapper/loop0p*.
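
For example, to poke around in the first partition without risking any writes to it (the mount point is arbitrary):

mkdir -p /mnt/img
mount -o ro /dev/mapper/loop0p1 /mnt/img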

Jun 28, 2011

In this post, I will walk through a simple preseed file that can be used to install a minimal Ubuntu machine in ~10 minutes (depending on the mirror used). The installer will only ask for the hostname. Everything else will be automated.

To get started, you will want to download the netboot ISO. You can get this from http://tinyurl.com/62qz9t7 or any other Ubuntu mirror. If all your machines are on the same network, it may make sense to set up gPXE. Details on that will be covered in a later post.

In order to use the preseed file outlined below, you will need to boot with the following appended options (press TAB at the installer screen). Note that the debugging variables are only necessary if you are having trouble.

DEBCONF_DEBUG=5 locale=en_US.UTF-8 console-setup/layoutcode=us url=http://www.siriad.com/preseed/preseed.cfg

The first thing we will do is configure the networking settings necessary to automate the install.

##############
# Networking
##############

# Uncomment and fill in these in order to preseed the hostname question
#d-i netcfg/get_hostname string unassigned-hostname
#d-i netcfg/get_domain string unassigned-domain
d-i netcfg/choose_interface select eth0
d-i mirror/http/proxy string

I am pointing to the default US Ubuntu archive. You should change this to suit your setup.

########################
# Installation Sources
########################

d-i mirror/country string US
d-i mirror/http/mirror string us.archive.ubuntu.com
d-i mirror/http/directory string /ubuntu/

Here, I am using the default partitioning scheme and wiping any existing partitions. You may need to change this if you want custom partitions.

#################################
# Disk Partitioning/Boot loader
#################################

d-i partman-auto/disk string /dev/sda
#d-i partman-auto/method string lvm
d-i partman-auto/method string regular
d-i partman-auto/purge_lvm_from_device boolean true

# And the same goes for the confirmation to write the lvm partitions.
#d-i partman-lvm/confirm boolean true

# You can choose from any of the predefined partitioning recipes.
# Note: this must be preseeded with a localized (translated) value.
#d-i partman-auto/choose_recipe \
#       select All files in one partition (recommended for new users)
#d-i partman-auto/choose_recipe \
#       select Separate /home partition
#d-i partman-auto/choose_recipe \
#       select Separate /home, /usr, /var, and /tmp partitions

# This makes partman automatically partition without confirmation.
d-i partman/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true

d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i grub-pc/install_devices multiselect /dev/sda

Once again, your localization settings will likely differ from these, so modify as needed.

#################
# Localizations
#################

# Keyboard localization
d-i console-keymaps-at/keymap select us
#d-i console-setup/variantcode string dvorak

# Timezone
d-i clock-setup/utc boolean true
d-i time/zone string America/Chicago

d-i apt-setup/backports boolean true
d-i apt-setup/contrib boolean true
d-i apt-setup/multiverse boolean true
d-i apt-setup/non-free boolean true
d-i apt-setup/proposed boolean true
d-i apt-setup/universe boolean true

I usually don’t set up a default user when I install servers. These settings just create a root user (with login capabilities) with the password ‘r00tme’. You will not want to use this preseed file unmodified if your machine is connected directly to the internet. You can also configure preseed with a crypted root password, but I still recommend changing it once the install is complete.

#################
# User Creation
#################

d-i passwd/root-login boolean true
d-i passwd/make-user boolean false
d-i passwd/root-password password r00tme
d-i passwd/root-password-again password r00tme
d-i user-setup/allow-password-weak boolean true
d-i user-setup/password-weak boolean true

Set up Bcfg2 to do the post-install business (this will be covered in a later post).

#######################
# Software Selections
#######################

tasksel tasksel/first multiselect
d-i pkgsel/include string openvpn vim
pkgsel pkgsel/include/install-recommends boolean false

# don't try and do automatic updates; that's bcfg2's job
d-i pkgsel/update-policy select none

d-i finish-install/reboot_in_progress note

d-i preseed/late_command string \
        in-target wget http://www.siriad.com/preseed/postinst.sh -O /root/postinst.sh; \
        in-target /bin/bash /root/postinst.sh
May 07, 2011

As mentioned in a previous post, I use LVM volumes directly to store the virtual disks for my Virtualbox VMs. This post will guide you through how to access the contents of the virtual disk directly (so that you don’t need to boot the VM). The disk I’m working with is called ‘debian’.

# lvscan | grep debian
  ACTIVE            '/dev/vbox/debian' [5.00 GiB] inherit

We need to create device maps from this LVM device’s partition tables.

# kpartx -av /dev/vbox/debian
add map vbox-debian1 (253:8): 0 9912042 linear /dev/vbox/debian 63
add map vbox-debian2 (253:9): 0 562275 linear /dev/vbox/debian 9912105
add map vbox-debian5 : 0 562212 linear 253:9 9912168

Now we can mount the image and grab any files we may need.

# mkdir foo
# mount /dev/mapper/vbox-debian1 foo/
# ls foo/
bin   cdrom  etc  home        lib    lost+found  mnt  proc  sbin     srv  tmp  var
boot  dev    foo  initrd.img  lib64  media       opt  root  selinux  sys  usr  vmlinuz

Once we are done accessing our files, we can go ahead and unmount the partition and delete the partition mappings.

# umount foo/
# kpartx -d /dev/vbox/debian
Apr 22, 2011

I recently spent quite a bit of time installing and configuring Request Tracker (RT) to run on Ubuntu 10.04 using nginx as the web server. The documentation on doing this was scarce (or incorrect), so I thought this would be a good place to centralize all the information needed to replicate my setup.

The first thing to do is install all the required packages. I assume you already know how to do this. For what it’s worth, I’m using a PPA repository for the nginx package since the one available in Ubuntu is extremely out of date. Here are the contents of /etc/apt/sources.list.d/nginx.list.

deb http://ppa.launchpad.net/nginx/stable/ubuntu lucid main
deb-src http://ppa.launchpad.net/nginx/stable/ubuntu lucid main

I have apt configured such that it doesn’t install Recommended or Suggested packages. Therefore, I had to manually install the libcgi-fast-perl package, which is necessary for nginx to run the RT code.

I then had to install the mysql-server package and create the database to be used.

root@rt:~# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 584
Server version: 5.1.41-3ubuntu12.10 (Ubuntu)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database rtdb;
Query OK, 1 row affected (0.00 sec)

mysql> grant all privileges on rtdb.* to 'rt'@'localhost' identified by 'SECRETPASSWORD';
Query OK, 0 rows affected (0.03 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Next, I had to modify /etc/request-tracker3.8/RT_SiteConfig.d/50-debconf to suit our custom environment. I also had to reconfigure /etc/request-tracker3.8/RT_SiteConfig.d/51-dbconfig-common to use mysql with the appropriate values for the database that was created.

# THE DATABASE:
# generated by dbconfig-common

# map from dbconfig-common database types to their names as known by RT
my %typemap = (
    mysql   => 'mysql',
    pgsql   => 'Pg',
    sqlite3 => 'SQLite',
);

Set($DatabaseType, $typemap{mysql} || "UNKNOWN");

Set($DatabaseHost, 'localhost');
Set($DatabasePort, '3306');

Set($DatabaseUser , 'rt');
Set($DatabasePassword , 'SECRETPASSWORD');

# SQLite needs a special case, since $DatabaseName must be a full pathname
#my $dbc_dbname = 'rtdb'; if ( "sqlite3" eq "sqlite3" ) { Set ($DatabaseName, '/var/lib/dbconfig-common/sqlite3/request-tracker3.8' . '/' . $dbc_dbname); } else { Set ($DatabaseName, $dbc_dbname); }
Set ($DatabaseName, 'rtdb');

By default, the RT install uses a simple sqlite database. We just switched it to use our mysql database that we created in the previous step. Once that is complete, you need to update the SiteConfig by running update-rt-siteconfig. Then you can move on to configuring nginx.

Here is the nginx configuration that was necessary to get all aspects (as far as I’ve tested) working with RT:

server {
        listen          [::]:80;
        server_name     rt.siriad.com;
        root            /usr/share/request-tracker3.8/html;

        location / {
                index           index.html;
                fastcgi_pass    unix:/var/run/rt/rt.sock;
                include         /etc/nginx/fastcgi_params;
                fastcgi_param   PATH_INFO       $fastcgi_script_name;
        }

        location ~* .+\.(html|js|css)$  {
                index           index.html;
                fastcgi_pass    unix:/var/run/rt/rt.sock;
                include         /etc/nginx/fastcgi_params;
                fastcgi_param   PATH_INFO       $fastcgi_script_name;
        }

        location /NoAuth/images/ {
                alias /usr/share/request-tracker3.8/html/NoAuth/images/;
        }
}

Here is the upstart script located at /etc/init/rt-fcgi.conf:

# rt-fcgi - test
start on runlevel [12345]
stop on runlevel [0]
respawn

env FCGI_SOCKET_PATH=/var/run/rt/rt.sock

exec su www-data -c "/usr/share/request-tracker3.8/libexec/mason_handler.fcgi"
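
If the /var/run/rt directory does not already exist on your system, create it and make it writable by www-data before starting the job; otherwise the handler cannot create its socket. Something along these lines (not part of my original setup) will do:

mkdir -p /var/run/rt
chown www-data:www-data /var/run/rt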

Once all those are in place, all that’s left is to run service rt-fcgi start and restart nginx. Then you should be able to log in using the default RT username/password.

Mar 16, 2011

This post discusses the nginx proxy module. I recently set up Nginx as a reverse caching proxy for various sites. Every configuration example I came across online failed to mention the proxy_cache_key directive. Therefore, I originally ended up with something like this:

# cat /etc/nginx/sites-available/siriad
# You may add here your
# server {
#       ...
# }
# statements for each of your virtual hosts to this file

server {
        listen   [::]:80;
        server_name siriad.com;
        rewrite ^/(.*) http://www.siriad.com/$1 permanent;
}

server {
        listen   [::]:80;
        server_name     www.siriad.com
                        testing.siriad.com;

        access_log      /var/log/nginx/siriad.com/access.log;
        error_log       /var/log/nginx/siriad.com/error.log;

        location / {
                proxy_pass              http://backend;
                proxy_set_header        Host $host;
                proxy_cache             siriad;
                proxy_cache_valid       200 1d;
                proxy_cache_use_stale   error timeout invalid_header updating
                                        http_500 http_502 http_503 http_504;
        }
}

This led to some odd behavior. When I would load www.siriad.com and subsequently load testing.siriad.com, I would end up with the cached content from www.siriad.com for both requests. The cache was working, but it was not distinguishing between the two hosts. I spent some time trying different configurations, thinking the problem was something I had caused, since I had trouble finding any information on it.

It turns out, this is exactly the use case for the proxy_cache_key directive. By adding the following line, I made sure that the hostname was included in the key used to cache the request so that there were no key collisions during the process.

                proxy_cache_key         "$scheme$host$request_uri";
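
With that in place, the location block from the configuration above ends up looking like this:

        location / {
                proxy_pass              http://backend;
                proxy_set_header        Host $host;
                proxy_cache             siriad;
                proxy_cache_key         "$scheme$host$request_uri";
                proxy_cache_valid       200 1d;
                proxy_cache_use_stale   error timeout invalid_header updating
                                        http_500 http_502 http_503 http_504;
        }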

I was able to find this information after searching around DDG for quite a while. I finally came across this forum post. The result of the above configuration is a working reverse caching proxy using Nginx for siriad.com as well as testing.siriad.com. I am hoping this post is slightly more searchable than the results I was getting while trying to find the answer to this problem.
