Saturday, December 20, 2008

super cool way to move files with spaces in shell

I had a bunch of files named like:

IMG_2345 2.JPG

I wanted to remove the space and the 2 so that it would become like:

IMG_2345.JPG

How to do this in a one liner?

Can't use the usual for file in `ls *\ 2.JPG`; do mv $file ... approach, as the shell will treat the space as a word separator and split each filename into two words (and quoting $file as "$file" won't help either, since the splitting has already happened when the for list was built).

There is a convenient way of doing this with 'read', combined with bash's built-in string replacement:

ls *\ 2.JPG | while read line;do mv "$line" "${line/\ 2/}";done



Substring Replacement

${string/substring/replacement}

Replace first match of $substring with $replacement.
${string//substring/replacement}

Replace all matches of $substring with $replacement.

http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/string-manipulation.html
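
A quick illustration of the difference (a throwaway variable, just to show the syntax):

f="a b 2.JPG"
echo "${f/ /_}"     # a_b 2.JPG  - only the first space replaced
echo "${f// /_}"    # a_b_2.JPG  - all spaces replaced
echo "${f/ 2/}"     # a b.JPG    - the ' 2' removed, as in the rename above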

Saturday, October 04, 2008

setting pager to 'less' for mysql

Ever wanted to have 'less' functionality when doing mysql queries?

mysql> \P less -S
PAGER set to 'less -S'

So when you do a 'select *', it will automatically output it to less :)
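
You can also set it when starting the client, via its --pager option, and turn it off again inside the client with \n (nopager):

mysql --pager='less -S'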

Monday, September 15, 2008

ip_conntrack, /proc and sysctl

sysctl -a shows the conntrack settings, amongst other things, including the current number of entries in the table (ip_conntrack_count).
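
For example (key names as they appear on this 2.6.18-era kernel; they may differ on other versions):

sysctl -a | grep conntrack
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count   # current number of entries
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max     # table size limit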


ip_conntrack has a default timeout of 5 days (432000 seconds) for *established* connections. That is, the entry will be kept in the table for 5 days before expiring if there is no traffic (?). With a large amount of traffic, this could grow very large.

There are also settings that control the SYN, ACK etc. timeouts for conntrack (check this and see below)

AFAIK, the only way to clear the current count is to unload the relevant modules (check this)
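
A sketch of what that looks like (module names as on a RHEL5 / 2.6.18 kernel; anything that depends on ip_conntrack has to be unloaded first - 'lsmod' shows the users - and your firewall is effectively open while the modules are out):

iptables -F                        # flush rules that use state matching
iptables -t nat -F
rmmod iptable_nat ip_conntrack_ftp xt_state 2>/dev/null
rmmod ip_conntrack                 # removing the module drops the whole table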

You can put customisations that override the defaults in /etc/sysctl.conf, which is read when sysctl is run, usually at boot. Warning:

When the ip_conntrack module(s) is/are (re)loaded, the defaults are used again. You have to re-run sysctl to apply any settings in /etc/sysctl.conf (sysctl -p /etc/sysctl.conf)


net.ipv4.ip_conntrack_max = 1000000
net.ipv4.netfilter.ip_conntrack_tcp_max_retrans = 3
net.ipv4.netfilter.ip_conntrack_tcp_be_liberal = 0
net.ipv4.netfilter.ip_conntrack_tcp_loose = 3
net.ipv4.netfilter.ip_conntrack_tcp_timeout_max_retrans = 3
net.ipv4.netfilter.ip_conntrack_log_invalid = 0
net.ipv4.netfilter.ip_conntrack_generic_timeout = 3
net.ipv4.netfilter.ip_conntrack_icmp_timeout = 3
net.ipv4.netfilter.ip_conntrack_udp_timeout_stream = 2
net.ipv4.netfilter.ip_conntrack_udp_timeout = 3
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close = 2
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 2
net.ipv4.netfilter.ip_conntrack_tcp_timeout_last_ack = 2
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 2
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 2
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 432000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 3
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent = 3
net.ipv4.netfilter.ip_conntrack_checksum = 1
net.ipv4.netfilter.ip_conntrack_buckets = 8192
net.ipv4.netfilter.ip_conntrack_count = 598236
net.ipv4.netfilter.ip_conntrack_max = 1000000

Thursday, July 10, 2008

postfix and rate limiting, sender control, etc

postfix has built-in controls for restricting certain kinds of things, such as the rate at which clients can connect, the number of recipients per message, the number of connections per client, etc. These can be seen in man 8 smtpd and man 8 anvil.

As I didn't want to interfere too much with what might use the localhost instance, I added an instance of postfix listening on port 10025 of the eth0 address (192.168.10.208). That required adding a line to master.cf (I just added it under the standard smtp line):

192.168.10.208:10025 inet n - n - - smtpd
    -o smtpd_client_message_rate_limit=5

The '-o smtpd_client_message_rate_limit=5' part overrides what is set in main.cf (that is all '-o' means) and limits the number of messages any one client may send to 5 per anvil time unit (60s by default). Note that in master.cf the '-o' line must be indented so it is read as part of the service entry above it.
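
If you want a different window than the default 60s, the anvil time unit can be changed in main.cf (a sketch; anvil_rate_time_unit is the parameter documented in anvil(8) and postconf(5)):

postconf -e 'anvil_rate_time_unit = 120s'
postfix reload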

The mail that I was attempting to restrict was generated by snmpttd reading traps. It calls the 'mail' command for each trap, which can lead to a lot of mail in a large network. The problem was that the 'mail' MUA (part of the 'mailx' package in RHEL5) does not actually connect to SMTP over TCP by default, and I couldn't find any way of changing that in the RHEL version: the .mailrc and /etc/mail.rc directives for specifying an SMTP server and port don't seem to apply, and they aren't in the man pages anyway. So I switched to the Perl Mail::Sendmail module, which speaks SMTP directly and is easy to configure. BTW: I have not tried using '-o smtpd_client_message_rate_limit=5' on the 'unix' socket listener in master.cf; I assumed it would not go via smtpd, but that is because I was too lazy to read all the documentation on how postfix works ;)

Anyway, this works well

Friday, July 04, 2008

connection rate limiting to apache and iptables

We needed a way of rate limiting connections to a particular web page from any given IP. There used to be various Apache modules for this, such as mod_throttle, mod_choke and mod_limit, but they only worked with Apache 1 or are no longer actively developed. We have Netscreen firewalls in front of the servers, but they cannot limit the number of connections per IP.

iptables can do this in a number of ways. There are modules such as 'connlimit' (the most recent) and 'iplimit' (which connlimit replaced). However, the OS we were using was RHEL5.1 (kernel 2.6.18, iptables 1.3.5). It appears to support connlimit - 'iptables -m connlimit --help' shows usage info, because the userspace library libipt_connlimit.so is in /lib/iptables - but there is no matching kernel module. Attempting to insert a rule with the connlimit module gives 'iptables: Unknown error 4294967295' (this was on a 32-bit machine). So there is a mismatch between kernel and user-space support; connlimit only became mainstream in kernel 2.6.23.

One option was to patch the kernel source tree using 'patch-o-matic' from the netfilter website, but since various people have cast doubt on the stability of those patches I decided against it, particularly as there is another option. In any case, I am not sure I would have been able to deploy a non-standard kernel onto production machines, and it would certainly make updating more of a hassle. Inserting a binary module compiled on another devel box could also pose issues: there appear to have been a lot of changes in the netfilter source tree, and the module may call functions defined elsewhere in the tree, not just in xt_connlimit itself.
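
For reference, on a kernel new enough to have connlimit (2.6.23 or later), the kind of rule we wanted would look roughly like this (a sketch; the limit of 20 concurrent connections per IP is an arbitrary example):

iptables -I INPUT -p tcp --dport 80 -m connlimit --connlimit-above 20 -j DROP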

Found this at http://www.debian-administration.org/articles/187

It describes a simple way of rate limiting new connections, using two rules, without using any experimental or recently added modules:

iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 50 -j DROP
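
A variant I would use to keep these counters on their own list and to log what gets dropped (a sketch: --name and the LOG target are standard options, but note that on many kernels the recent module only remembers 20 packets per address by default, so a hitcount of 50 may need the module loaded with a larger ip_pkt_list_tot):

modprobe ipt_recent ip_pkt_list_tot=60        # the module is called xt_recent on newer kernels
iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --name HTTP --set
iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --name HTTP --rcheck --seconds 60 --hitcount 50 -j LOG --log-prefix "http-rate-limit: "
iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --name HTTP --rcheck --seconds 60 --hitcount 50 -j DROP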

Monday, March 03, 2008

filesystem has unsupported features - fix with debugfs

On occasion I need to build machines with old distributions. Usually this involves copying the disks with tar or rsync onto a server that has been booted off a live CD such as GRML. When an ext3 filesystem is created from a newer distribution like GRML for what will be an older system such as Red Hat 7.2, it gets features that the old kernel and/or libraries (not sure which) do not support, so the old system complains at startup, though it still boots. I am not sure if it affects the stability of the system. You can, however, remove the newer features put there by the boot CD using debugfs, and debugfs will do this on mounted filesystems as well. As a basis for comparison, I look at the features one of the older servers reports by running debugfs on its filesystems, and then remove the extra features on the newly-built machine. You may also be able to work out the supported features in other ways, perhaps from logs or by researching what the particular kernel/libraries/user-space tools support, but I took the lazy method!

debugfs comes with its own shell, a bit like grub, with context-sensitive help: type 'help' to list the supported commands. For instance, 'features' lists the features set on the filesystem, and 'features -<feature>' clears one. Alternatively, you can run commands non-interactively via the '-R' option from bash or another shell, or put the operations in a file and feed it to debugfs.
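
A sketch of the non-interactive form (this assumes the 'features' spelling of the command as in my e2fsprogs, and uses dir_index purely as an example of a feature an old kernel may not know about; device names are examples too - compare against your own reference server first):

# on an old reference server: list the feature set it has
debugfs -R "features" /dev/sda1
# on the newly built machine: open read-write and clear an unsupported feature
debugfs -w -R "features -dir_index" /dev/sda1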

Very cool :)

Monday, February 18, 2008

converting an existing single disk machine to software RAID-1

The following is a guide based on my experience of setting up RAID 1 (mirroring) on a Debian Sarge system, with many of the packages coming from backports. I needed to use Sarge with backports for various reasons (rather than just using Etch). Anyway, I was able to do this remotely via ssh. It requires reboots, but no boot CD/DVD and no restoring from backups stored off the server (though having such a backup is a good idea in case something goes wrong!!). *PLEASE NOTE*: this is a guide only; it worked for me, but YMMV and I take no responsibility if you break your system!

These instructions are based on Debian Sarge 3.1, with some stuff from backports, using mdadm

Some of the software versions I used were:

kernel version: linux-image-2.6.18-4-686 (from backports)
initramfs-tools (version 0.85g~bpo.1) from backports
module-init-tools 3.2-pre1-2 (from backports)
udev 0.056-3 (from backports)

So with a 'pure' Sarge system, it may be different.

The hardware:

IBM x3250 with 1 x 160GB SATA disk, to which an identical one was added. These servers accept hot swap drives.

The initial install was done on /dev/sda. The extra (identical) drive was added and the machine rebooted, so that /dev/sdb was now available (though you may be able to use the /proc or /sys interface to make the kernel see the new drive without rebooting).


After adding the disk, you need to create the same partition table as on the original disk. One easy way to repartition is:

sfdisk -d /dev/sda | sfdisk /dev/sdb

Next, the kernel needs to support software raid. I was using a stock Debian 2.6.18 kernel, which is modular, includes the software raid drivers, and uses an initrd. The modules for raid-1 are 'md_mod' and 'raid1'. Use modprobe to load them:

modprobe md_mod
modprobe raid1
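
You can confirm they loaded with:

lsmod | grep -E 'md_mod|raid1'
cat /proc/mdstat          # should show "Personalities : [raid1]"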

There are a number of steps detailed in various other HOWTOs (which I used as a guide) that turned out not to be necessary in my case. In particular, I did not need to use MAKEDEV to create the device nodes for the raid devices (/dev/md0, /dev/md1, etc): udev takes care of that. I also did not need to manually configure the raid modules to be loaded at boot time, nor create or update the initrd to make sure the raid modules were loaded so the root filesystem could be mounted. The Debian kernels include the raid drivers, but I am not sure what does the autodetection that loads them from the initrd. At a guess, I would say it finds a raid signature on the drive. I don't think it uses fdisk labels or info from the grub config, since I have booted raid systems with no /dev/md0 in the grub config and no 'fd' disklabel on the disks. Obviously it can't use anything on the filesystem, like /etc/mdadm/mdadm.conf or /etc/modules, as they aren't available before the root filesystem is mounted.

Anyway, on to the next steps. Since we are running on /dev/sda already, the raid needs to be constructed using only the new, unused disk (we don't want to obliterate the contents of the disk the system is currently using!). In effect, we initially create a degraded array. The original disk will be added later, once we have copied the data to the new disk and booted from it (detailed below).

My disk layout was as follows:

/dev/sda1 * 1 124 995998+ 82 Linux swap / Solaris
/dev/sda2 125 327 1630597+ fd Linux raid autodetect
/dev/sda3 328 1215 7132860 fd Linux raid autodetect
/dev/sda4 1216 19457 146528865 5 Extended
/dev/sda5 1216 1944 5855661 fd Linux raid autodetect
/dev/sda6 1945 3524 12691318+ fd Linux raid autodetect
/dev/sda7 3525 19457 127981791 fd Linux raid autodetect


Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 124 995998+ 82 Linux swap / Solaris
/dev/sdb2 125 327 1630597+ fd Linux raid autodetect
/dev/sdb3 328 1215 7132860 fd Linux raid autodetect
/dev/sdb4 1216 19457 146528865 5 Extended
/dev/sdb5 1216 1944 5855661 fd Linux raid autodetect
/dev/sdb6 1945 3524 12691318+ fd Linux raid autodetect
/dev/sdb7 3525 19457 127981791 fd Linux raid autodetect


Initially, the partition type was 'Linux' (code 83 in fdisk), but I changed it to 'fd' (Linux raid autodetect). As noted above, I don't think this is necessary, at least initially (the system was able to boot fine on raid without those labels), but it might be useful or needed for other things anyway.

Create the raid devices:

mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb3
and so on.

Once each raid device was created, I made a filesystem on it with 'mkfs.ext3 /dev/md0', and so on for each raid device. Also run 'mkswap' on the new swap partition on /dev/sdb. If a disk fails, I think the machine should still be able to boot despite one of the swap partitions being dead. Before copying data, /etc/fstab and /boot/grub/menu.lst need to be modified, and it may not hurt to put some details in /etc/mdadm/mdadm.conf (I am not sure whether /etc/mdadm/mdadm.conf is actually needed for anything; test without it).
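
A sketch of that step for the five arrays and the new swap partition used here:

for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4; do
    mkfs.ext3 "$md"
done
mkswap /dev/sdb1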

/etc/fstab - the new mountpoints + swap space:

NEW-mydns01:/var/lib# cat /etc/fstab
# /etc/fstab: static file system information.
#
#
proc /proc proc defaults 0 0
/dev/md0 / ext3 defaults,errors=remount-ro 0 1
/dev/md2 /home ext3 defaults 0 2
/dev/md1 /usr ext3 defaults 0 2
/dev/md3 /var ext3 defaults 0 2
/dev/md4 /var/lib/mysql ext3 defaults 0 2
/dev/sda1 none swap sw 0 0
/dev/sdb1 none swap sw 0 0
/dev/hda /media/cdrom0 udf,iso9660 user,noauto 0 0


/boot/grub/menu.lst - it is important to set the root filesystem to the raid device (in my case, /dev/md0), otherwise you won't be able to hot-add the old disk later (see further below for the explanation). So the relevant part of /boot/grub/menu.lst looks like:

title Debian GNU/Linux, kernel 2.6.18-4-686
root (hd0,1)
kernel /boot/vmlinuz-2.6.18-4-686 root=/dev/md0 ro
initrd /boot/initrd.img-2.6.18-4-686
savedefault

title Debian GNU/Linux, kernel 2.6.18-4-686 (single-user mode)
root (hd0,1)
kernel /boot/vmlinuz-2.6.18-4-686 root=/dev/md0 ro single
initrd /boot/initrd.img-2.6.18-4-686
savedefault


/etc/mdadm/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b196c373:f65bec91:01072554:c62097c9
   devices=/dev/sdb2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=47053389:84f72deb:b47832ba:1ec3ebc1
   devices=/dev/sdb3
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=1c8ff28f:f4dae79a:6cbc8acd:caaf42cb
   devices=/dev/sdb5
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=ae3882ea:df0f6494:d6b68701:1f922018
   devices=/dev/sdb6
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=ab8471a4:1d7d7959:922ec705:4b988a14
   devices=/dev/sdb7

Note that there is only /dev/sdb listed here, not /dev/sda. That will be added later.
I am not sure what actually uses this file.

Copying data

rsync is good for this, though you could use tar or cp instead. I found one oversight on my part: the mount point /var/lib/mysql was copied with ownership root:root and mode 755, which breaks mysql, so I think something in my rsync invocation was incorrect.

Assuming /dev/md0 is mounted at /mnt/root:

rsync -auHxv --exclude='/proc/*' --exclude='/sys/*' --exclude='/mnt/*' / /mnt/root

/dev/md1 will be the new /usr partition, and is mounted at /mnt/usr:

rsync -auHxv /usr/* /mnt/usr



And so on.
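
For completeness, a sketch of the remaining copies, matching the fstab above (it assumes /var/lib/mysql is its own filesystem on the source, as it is in the new layout, and that the /mnt/* mount points already exist; -a preserves the ownership and permissions I got wrong on /var/lib/mysql):

mount /dev/md2 /mnt/home  && rsync -auHxv /home/ /mnt/home
mount /dev/md3 /mnt/var   && rsync -auHxv /var/ /mnt/var      # -x stops it descending into /var/lib/mysql
mount /dev/md4 /mnt/mysql && rsync -auHxv /var/lib/mysql/ /mnt/mysql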

The boot block needs to be installed on each disk. Install it on /dev/sdb from the grub shell:

grub>root (hd1,1)
grub>setup (hd1)

It will need to find the root of the grub install, otherwise it will complain. But the rsync we did earlier should have taken care of that.

Reboot with the changed fstab. The server should now be running on the 'degraded' raid, off /dev/sdb. Now /dev/sda needs to be added to the arrays. Use mdadm to do this; it will copy the data from /dev/sdb over to /dev/sda (wiping whatever was on there):

mdadm /dev/md0 -a /dev/sda2

Due to a step I missed, grub was still using /dev/sda2 as the root filesystem, even though the raid setup was mounting '/' on /dev/md0. This caused an error when I tried to add /dev/sda2 to the raid:

mdadm: hot add failed for /dev/sda2: Invalid argument

/var/log/messages:

Feb 18 12:41:51 mydns01 kernel: md: error, md_import_device() returned -16

After much searching, I found that the issue was that the machine had root=/dev/sda2 in the grub config, and the kernel will not let you use this device. I have read that this is because the device is considered in use, even though it appears not to be (since you have booted from /dev/sdb). Someone with a similar issue pointed out the clue in dmesg:

Kernel command line: ro root=/dev/sda2

Anyway, change it to root=/dev/md0 in the grub config, and problem solved!
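
With grub fixed, the remaining partitions can be added the same way, and the resync watched in /proc/mdstat (a sketch matching the layout above):

mdadm /dev/md1 -a /dev/sda3
mdadm /dev/md2 -a /dev/sda5
mdadm /dev/md3 -a /dev/sda6
mdadm /dev/md4 -a /dev/sda7
cat /proc/mdstat            # shows rebuild progress for each array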

Tuesday, January 29, 2008

plesk config information

Besides the usual text files in /etc/ and /usr/local/psa, some of the configuration is stored in MySQL in the 'psa' database, which can be accessed through phpMyAdmin from the Plesk control panel.
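
On the Plesk versions I have seen, you can also get at the psa database straight from the shell; the admin password lives in /etc/psa/.psa.shadow (treat the path as an assumption and check your own install):

mysql -u admin -p"$(cat /etc/psa/.psa.shadow)" psa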