Linux
Boot disk
Grub
Manual Install
- sudo mount /dev/sda1 /mnt
- sudo mount -o bind /dev /mnt/dev
- sudo mount -t proc none /mnt/proc
- sudo chroot /mnt /bin/bash

#> grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.
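Once setup succeeds, quit grub, leave the chroot, and clean up the mounts; a typical sequence would be:

grub> quit
#> exit
sudo umount /mnt/proc /mnt/dev /mnt
sudo reboot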
Software RAID
Activating RAID
If the RAID array already exists:
mdadm --assemble --scan
If creating a new (mirror) array:
mdadm --create /dev/md0 --level=1 [--force] --raid-devices=2 /dev/hda1 /dev/hdb1
Use **--force** if you are using fewer than 2 devices.
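To have the new array assembled automatically at boot, you can append its definition to mdadm's config file; a minimal sketch (the config path varies by distro, e.g. /etc/mdadm/mdadm.conf on Debian-based systems):

# append the ARRAY definitions that --scan detects to the config file
mdadm --detail --scan >> /etc/mdadm.conf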
Restoring a lost disk on a RAID1 (mirror) array
1. re-seat the hard drive (hot swapped)
2. reboot the box (it did not want to recognize the drive in the bay)
3. **cat /proc/mdstat**
Personalities : [raid1]
md0 : active raid1 sda2[0]
      244075456 blocks [2/1] [U_]

unused devices: <none>
This shows that our RAID device is only using sda2 (1 out of 2 devices for md0).
4. make sure that /dev/sdb exists and that /dev/sdb2 is our RAID partition: **mdadm --examine /dev/sdb2**
/dev/sdb2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : feae5a82:2674029d:2b534ce1:b1c670ec
  Creation Time : Thu Oct 11 10:11:50 2007
     Raid Level : raid1
    Device Size : 244075456 (232.77 GiB 249.93 GB)
     Array Size : 244075456 (232.77 GiB 249.93 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Update Time : Tue Jan  8 23:45:40 2008
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 4886fb71 - correct
         Events : 0.50293

      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
5. add the device to the array with: **mdadm /dev/md0 -a /dev/sdb2**
mdadm: re-added /dev/sdb2
6. watch the rebuild process live! **cat /proc/mdstat**
[10:17:58 - root@localhost:~] #> cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[2] sda2[0]
      244075456 blocks [2/1] [U_]
      [>....................]  recovery =  0.1% (345536/244075456) finish=94.0min speed=43192K/sec

unused devices: <none>
[10:18:06 - root@localhost:~] #> cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[2] sda2[0]
      244075456 blocks [2/1] [U_]
      [=>...................]  recovery =  8.6% (21150720/244075456) finish=95.6min speed=38853K/sec

unused devices: <none>
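Rather than re-running cat by hand, **watch** can refresh the view every second:

watch -n 1 cat /proc/mdstat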
Simulating a drive failure
First we mark the drive as "faulty"
1. mdadm --manage --set-faulty /dev/md1 /dev/sdb1
Then we can remove it from the RAID device
2. mdadm --manage /dev/md1 --remove /dev/sdb1
To check the status of the disks on the RAID device
3. mdadm --detail /dev/md1
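To undo the simulated failure, add the device back and let it resync, just like the RAID1 restore above:

mdadm --manage /dev/md1 --add /dev/sdb1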
Stopping all disk arrays
mdadm --stop --scan
Wiping a disk array clean
After stopping an array you still need to remove the superblocks, so the partitions are not recognized as RAID members by mdadm on --scan:
mdadm --zero-superblock --force /dev/sda1
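To wipe several members at once, a small loop works (the device names below are placeholders):

# device names are placeholders; list every former RAID member
for i in /dev/sda1 /dev/sdb1; do mdadm --zero-superblock --force $i; done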
Fedora Directory Server
SSL Encryption
- http://directory.fedoraproject.org/wiki/Howto:SSL
- http://www.redhat.com/docs/manuals/dir-server/ag/7.1/ssl.html#1087158
List Certificates
cd /opt/fedora-ds/alias
../shared/bin/certutil -L -d .
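To inspect a single certificate from that list, certutil can print it by nickname; a sketch (the "Server-Cert" nickname is a placeholder, use one from the -L output):

# "Server-Cert" is a placeholder nickname
../shared/bin/certutil -L -d . -n "Server-Cert"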
Replication
Serial Consoles
Setup
$> dmesg | grep tty
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

$> sudo setserial -g /dev/ttyS0
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
Edit /etc/inittab like:
s0:2345:respawn:/sbin/agetty -L -f /etc/issue.net 9600 ttyS0 vt102
Reboot or run from a console:
sudo init q
If you want to make sure things work or play around with settings, run the command by hand:
sudo /sbin/agetty -L -f /etc/issue.net 9600 ttyS0 vt102
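To also get kernel boot messages on the serial port, you can pass console= parameters on the kernel command line; a sketch for a GRUB menu.lst entry (the root= device is a placeholder, and the speed/device must match the agetty line above):

# root= device is a placeholder; last console= becomes /dev/console
kernel /vmlinuz root=/dev/sda1 ro console=tty0 console=ttyS0,9600n8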
Upstart (Ubuntu)
- Create file /etc/event.d/ttyS0:

start on runlevel 2
start on runlevel 3
start on runlevel 4
start on runlevel 5

stop on runlevel 0

respawn
exec /sbin/getty 9600 ttyS0

- reboot
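Instead of rebooting, you should be able to start the job directly with Upstart's initctl (assuming the job file above):

sudo initctl start ttyS0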
Memory Management
Install memstat
memstat -w | grep PID | sort -rn | more

Then, for every process ID in the first 10 or 20 entries, find out what it is and why it is using so much memory.
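If memstat is not available, ps can give a rough equivalent, sorted by resident set size:

# top 20 processes by RSS
ps aux --sort=-rss | head -20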
Disk Management
Batch Partition Creation
- Create a file fdisk.input.create.partition.table with:

o
w

- Run this loop:

for i in /dev/foo /dev/bar; do fdisk $i < fdisk.input.create.partition.table; done

- Create an input file for the actual partition taking all available space (fdisk tells you how many blocks per disk; this assumes all disks are the same size), then verify as shown after this list:

n
p
1
1
267218
w
(This is a 2.1TB disk and all space will be on primary partition 1)
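To double-check the results, print each partition table back out (same placeholder device names as above):

# device names are placeholders
for i in /dev/foo /dev/bar; do fdisk -l $i; done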
Batch fstab Entries Creation
This first script looks up the volume UUIDs for the devices listed in a lunlist file and builds fstab entries from them:
#!/bin/sh
# Luis Mondesi
# 2008-03-19

# Sanity checks
[ `uname -s` = "Linux" ] || exit 1
if [ ! -f lunlist ]; then
    echo "Create a file called lunlist with a list of devices like:"
    echo "/dev/sda"
    echo "/dev/sdb"
    echo "..."
    exit 2
fi

FILESYSTEM="reiserfs"

# If you need to build the partitions from scratch do something like this:
#for i in $(<lunlist); do echo $i; fdisk $i < fdisk.input.create.partition.table; done
#for i in $(<lunlist); do echo $i; fdisk $i < fdisk.input.create.partition; done
#for i in $(<lunlist); do echo $i; mkfs -t $FILESYSTEM ${i}1; done

# where the files needed for fdisk input look like:
# cat fdisk.input.create.partition.table
#o
#w

# cat fdisk.input.create.partition
#n
#p
#1
#1
#267218
#w

BACKUPFILE=/etc/fstab.backup.`date +%s`

cp /etc/fstab $BACKUPFILE
cat /etc/fstab > /tmp/fstab.$$

j=0

for i in $(<lunlist); do
    if [ -e "${i}1" ]; then
        j=$((j+1))
        echo "# ${i}1"
        echo UUID="`/lib/udev/vol_id -u ${i}1`" /export/users/${j} $FILESYSTEM defaults,rw,noatime,notail 0 0
        mkdir -p "/export/users/${j}"
    fi
done >> /tmp/fstab.$$

cp /tmp/fstab.$$ /etc/fstab

[ -x /sbin/udevtrigger ] && /sbin/udevtrigger
[ -x /usr/sbin/udevtrigger ] && /usr/sbin/udevtrigger

echo "Fstab backup created on $BACKUPFILE"
echo "Now verify /etc/fstab and type: mount -a"
Here is another script that only cares about what the system already knows (the UUIDs detected by udev):
#!/bin/sh
# Luis Mondesi
# 2008-03-19

# Sanity checks
[ `uname -s` = "Linux" ] || exit 1
[ -d "/dev/disk/by-uuid" ] || exit 2

FILESYSTEM="reiserfs"

BACKUPFILE=/etc/fstab.backup.`date +%s`

cp /etc/fstab $BACKUPFILE
cat /etc/fstab > /tmp/fstab.$$

j=0

# this system uses udev and we will use UUIDs to mount disks
if [ -x /sbin/udevtrigger ]; then
    /sbin/udevtrigger
    sleep 2
    /sbin/udevtrigger
elif [ -x /usr/sbin/udevtrigger ]; then
    /usr/sbin/udevtrigger
    sleep 2
    /usr/sbin/udevtrigger
else
    echo "udevtrigger not found in /sbin or /usr/sbin"
    exit 4
fi

cd /dev/disk/by-uuid
for i in *; do
    j=$((j+1))
    echo "# " `ls -l $i | awk '{ print $NF }'` #`/lib/udev/vol_id -u ${i}1`
    echo UUID="$i" /export/users/${j} $FILESYSTEM defaults,rw,noatime,notail 0 0
    mkdir -p "/export/users/${j}"
done >> /tmp/fstab.$$

cp /tmp/fstab.$$ /etc/fstab

echo "Fstab backup created on $BACKUPFILE"
echo "Now verify /etc/fstab and type: mount -a"
Then run **udevtrigger** (note that this second script already triggers udev for you).
Networking
Bonding Interfaces
- Setup the bonding kernel module - [[http://www.cyberciti.biz/howto/question/static/linux-ethernet-bonding-driver-howto.php#section_4|Modes]]
- Alias bond0 to bonding
- Setup the ethernet interfaces as SLAVES to the bond0 MASTER interface
[[http://www.cyberciti.biz/howto/question/static/linux-ethernet-bonding-driver-howto.php|HOWTO]]
Example
[[http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html|Example]]
**Step #1:** Create a bond0 configuration file

Red Hat Linux stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

Append the following lines to it:

DEVICE=bond0
IPADDR=192.168.1.20
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

Replace the above IP address with your actual IP address. Save the file and exit to the shell prompt.
**Step #2:** Modify the eth0 and eth1 config files

Open both configuration files in a text editor and make sure the eth0 file reads as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append the directives as follows:

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Open the eth1 configuration file:
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Save the files and exit to the shell prompt.
**Step #3:** Load the bonding driver/module

Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:
# vi /etc/modprobe.conf
Append the following two lines:

alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Save the file and exit to the shell prompt. You can learn more about all bonding options in the kernel source documentation (Documentation/networking/bonding.txt).
**Step #4:** Test the configuration
First, load the bonding module:
# modprobe bonding
Restart the networking service to bring up the bond0 interface:
# service network restart
Verify everything is working:
# less /proc/net/bonding/bond0
Output:
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:59

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:63
List all interfaces:
# ifconfig
Output:
bond0     Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
          inet addr:192.168.1.20  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:250825 (244.9 KiB)  TX bytes:244683 (238.9 KiB)

eth0      Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
          inet addr:192.168.1.20  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:251161 (245.2 KiB)  TX bytes:180289 (176.0 KiB)
          Interrupt:11 Base address:0x1400

eth1      Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
          inet addr:192.168.1.20  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:258 (258.0 b)  TX bytes:66516 (64.9 KiB)
          Interrupt:10 Base address:0x1480
Benchmark
Server:
iperf -s -w 65536 -p 12345 -i 5
Client:
- [1] iperf -c <server> -w 65536 -p 12345 -t 60 (single TCP stream)
- [2] iperf -c <server> -w 65536 -p 12345 -t 60 -d (simultaneous bidirectional test)
- [3] iperf -c <server> -w 65536 -p 12345 -t 60 -P 4 (4 parallel streams)
PXE Boot
http://syslinux.zytor.com/wiki/index.php/PXELINUX
- create /tftpboot
- copy the netboot installer to it
- edit /tftpboot/pxelinux.cfg/default
- (optional) create files for a particular system by using its HEX IP address like:
$> gethostip 10.0.0.1
10.0.0.1 10.0.0.1 0A000001
$> sudo cp /tftpboot/pxelinux.cfg/default /tftpboot/pxelinux.cfg/0A000001
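A minimal /tftpboot/pxelinux.cfg/default might look like the sketch below; the kernel and initrd paths are placeholders for wherever you copied the netboot installer:

# paths below assume an Ubuntu netboot installer layout; adjust to your tree
DEFAULT install
PROMPT 1
TIMEOUT 50

LABEL install
    KERNEL ubuntu-installer/amd64/linux
    APPEND initrd=ubuntu-installer/amd64/initrd.gz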
Ubuntu netboot images
SMP systems
Bind process to 1 CPU
On a multi-core CPU system you can force a command to execute on a given CPU by doing:
taskset -c 1 command
Note: taskset is part of the util-linux package on Debian
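taskset can also query or change the affinity of a process that is already running; a quick sketch (PID 1234 is a placeholder):

# show the current CPU affinity of PID 1234 (placeholder PID)
taskset -cp 1234
# move PID 1234 onto CPUs 0 and 2
taskset -cp 0,2 1234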