
Connect to wireless network via the command line

First, generate a wpa.conf file containing your network information:

$ wpa_passphrase SSID_NAME PASSPHRASE > wpa.conf
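The generated wpa.conf should look something like this. The commented psk line is the plain-text passphrase, and the uncommented psk is the 256-bit key derived from it (the hex value below is just a placeholder, not a real key):

```
network={
	ssid="SSID_NAME"
	#psk="PASSPHRASE"
	psk=0123456789abcdef...
}
```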

Then test the connection to the network:

$ wpa_supplicant -D wext -i IFACE_NAME -c wpa.conf

If that completes successfully, kill it with Ctrl+C and re-run it with the -B option added, which sends wpa_supplicant to the background:

$ wpa_supplicant -B -D wext -i IFACE_NAME -c wpa.conf

Then configure the interface. This can be done using dhclient if a DHCP server is available on the network:

$ dhclient IFACE_NAME

Failing/Replacing a drive in Linux Software Raid

Let's assume this is the drive setup:

/dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0.
/dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1.

Run:

# cat /proc/mdstat

and instead of the string [UU] you will see [U_] if you have a degraded RAID1 array.
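If you want to script that check rather than eyeball it, here is a minimal sketch (the check_md name is my own; it assumes the usual /proc/mdstat layout where the status field like [UU] is last on the blocks line):

```shell
# check_md: read /proc/mdstat-style text on stdin and report any array
# whose status field (e.g. [U_]) shows a missing member.
check_md() {
    awk '/^md/    { dev = $1 }
         /blocks/ { if ($NF ~ /_/) print dev " is degraded: " $NF }'
}

# usage: check_md < /proc/mdstat
```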

If it is not showing as degraded and you need to remove /dev/sdb, mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1).

First mark /dev/sdb1 as failed:

# mdadm --manage /dev/md0 --fail /dev/sdb1

Now:

# cat /proc/mdstat

you should see something like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[2](F)
      24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>

Then remove /dev/sdb1 from /dev/md0 with:

# mdadm --manage /dev/md0 --remove /dev/sdb1

Now we do the same steps again for /dev/sdb2:

# mdadm --manage /dev/md1 --fail /dev/sdb2
# mdadm --manage /dev/md1 --remove /dev/sdb2
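Both fail/remove rounds can be wrapped in a small loop. A sketch using the md/partition pairs from this example layout (the function name is my own); it only prints the mdadm commands by default and executes them when you pass run:

```shell
# fail_and_remove: fail and remove the sdb halves of both arrays.
# Without arguments it only echoes the mdadm commands (dry run);
# pass "run" to actually execute them (requires root).
fail_and_remove() {
    runner="echo"
    [ "$1" = "run" ] && runner=""
    for pair in md0:sdb1 md1:sdb2; do
        md=${pair%%:*}
        part=${pair#*:}
        $runner mdadm --manage "/dev/$md" --fail "/dev/$part" || return 1
        $runner mdadm --manage "/dev/$md" --remove "/dev/$part" || return 1
    done
}

# fail_and_remove        # preview the commands
# fail_and_remove run    # actually do it
```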

NOTE: If you were removing the sda drive, for example, remember to install GRUB on the sdb device prior to rebooting, or you're going to have a bad time.

Shutdown the server:

# shutdown -h now

and replace the old /dev/sdb hard drive with a new one.

Again, if you removed the sda drive, you might need to change the boot order in the BIOS to boot from the working drive.

After adding the new hard disk, copy the partition layout to the new drive (sfdisk works with MBR partition tables; on GPT disks you would use sgdisk instead):

# sfdisk -d /dev/sda | sfdisk /dev/sdb

Check the partitioning on both drives:

# fdisk -l

Then add the new partitions back into their respective arrays:

# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1
# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: re-added /dev/sdb2

Now both arrays (md0 & md1) will re-sync. Run:

# cat /proc/mdstat

to see the progress.

During the sync the output will look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      24418688 blocks [2/1] [U_]
      [=>...................] recovery = 9.9% (2423168/24418688) finish=2.8min speed=127535K/sec

md1 : active raid1 sda2[0] sdb2[1]
      24418688 blocks [2/1] [U_]
      [=>...................] recovery = 6.4% (1572096/24418688) finish=1.9min speed=196512K/sec

unused devices: <none>
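If you want the rebuild percentage without squinting at the progress bars, a small sketch that pulls it out of that output (sync_progress is my own name; feed it /proc/mdstat):

```shell
# sync_progress: read /proc/mdstat-style text on stdin and print
# "<array> <percent>" for every array that is currently rebuilding.
sync_progress() {
    awk '/^md/ { dev = $1 }
         /recovery|resync/ {
             for (i = 1; i <= NF; i++)
                 if ($i ~ /%$/) print dev, $i
         }'
}

# usage: sync_progress < /proc/mdstat
```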

Fixing a missing kernel (Ubuntu Server in this case)

Came across this one the other day: after a kernel update, someone trying to clean up the unused kernels actually managed to remove the current one.

Required for fixing the issue:

1. CD/DVD with the distro that is on the server (Ubuntu Server in this case, though I believe it would work the same way for Debian and other Debian-based distros)

2. CD/DVD drive (well duh! 😀 )

Boot from the CD/DVD and enter the rescue mode.

Mount the /, /dev, /proc and /sys partitions if not already done by the rescue wizard. If you have software RAID you might need to assemble the array first; I have some info on how to do that here

Once that is mounted somewhere, chroot into the directory (assuming it is mounted under /mnt/sysimage):

# chroot /mnt/sysimage

And install the generic kernel:

# apt-get install linux-image-generic

Remove the CD, reboot, and the server should boot up normally.
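And to avoid this situation next time: before purging old kernels, check which one is actually running. On Debian/Ubuntu the running kernel's package is linux-image-$(uname -r):

```shell
# Identify the kernel package that must NOT be removed.
running="linux-image-$(uname -r)"
echo "Currently running: $running"

# Then list the installed kernel images and compare by eye:
# dpkg -l 'linux-image-[0-9]*'
```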