
Mounting system in recovery mode

I needed to mount a system in recovery mode to be able to chroot into it and run updates in an attempt to fix it.
The steps required were:

$ mkdir /oldsystem
$ mount /dev/sdb1 /oldsystem
$ mount --bind /dev /oldsystem/dev
$ mount -t proc none /oldsystem/proc
$ mount -t sysfs none /oldsystem/sys
$ chroot /oldsystem

You can replace ‘/oldsystem’ with a path of your liking.
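When you are done inside the chroot, exit and unmount everything again in reverse order. A minimal sketch, assuming the same /oldsystem path (the chroot_teardown helper name is my own); it only prints the commands, as a dry run:

```shell
# Print the umount commands for a chroot target, in reverse order of
# mounting (dry run: pipe the output to `sh` as root to execute it).
chroot_teardown() {
    target="$1"
    for sub in sys proc dev; do
        echo "umount $target/$sub"
    done
    echo "umount $target"
}

chroot_teardown /oldsystem
# → umount /oldsystem/sys
# → umount /oldsystem/proc
# → umount /oldsystem/dev
# → umount /oldsystem
```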

Have fun fixing stuff 😀

EDIT: It turned out the installed system had an i686 kernel while my recovery environment was 64-bit, so yum update was out of the question. The way around it was changing $basearch in the yum.repos.d files. The quick sed command below will do exactly that:

$ sed -i 's|$basearch|i386|g' /etc/yum.repos.d/CentOS-*
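Note the single quotes around the sed expression: they stop the shell from expanding $basearch before sed sees it, so sed matches the literal string. A quick dry run on a sample repo line (the mirror URL below is made up):

```shell
# Single quotes keep the shell from expanding $basearch to an empty
# string; sed then replaces the literal text instead.
echo 'baseurl=http://mirror.centos.org/centos/$basearch/os/' \
    | sed 's|$basearch|i386|g'
# → baseurl=http://mirror.centos.org/centos/i386/os/
```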

SH module for python

For Python 3 on Arch, first install pip (the Python package installer):

$ sudo pacman -S python-pip

To install the module:

$ pip install sh

then in python shell/script:

import sh

Then you can use it as follows:

sh.ls("-l", "/home/user")

You can iterate over the output as well, for example:

for line in sh.grep("something", "/home/user/somefile"):
    print(line, end="")

Connect to wireless network via the command line

First generate the wpa.conf that will include your network information:

$ wpa_passphrase SSID_NAME PASSPHRASE > wpa.conf
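The generated wpa.conf will look roughly like this (the psk value is a key derived from your passphrase; shown here as a placeholder):

```
network={
	ssid="SSID_NAME"
	#psk="PASSPHRASE"
	psk=<64-hex-digit key derived from the passphrase>
}
```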

then test the connection to the network:

$ wpa_supplicant -D wext -i IFACE_NAME -c wpa.conf

If that completes successfully, kill it with ctrl+c and re-run it with the -B option added (to run it in the background).

Then configure the interface; this can be done using dhclient if a DHCP server is available on the network:

$ dhclient IFACE_NAME

Reverse SSH connection

So I installed Linux for a friend of mine, which he was happy with. However, as he's not the most tech savvy, as soon as he encountered an issue (installing flash) he straight away contacted me about it. Trying to explain to him how to do it was like talking to a monkey about real estate, and the same goes for attempting to get him to open an ssh port on his firewall so I could log in.

After a little brainstorming I figured that since SSH supports dynamic port forwarding, it definitely had a solution of some sort for me. After reading the ssh manual page, the -R option sounded like what I was looking for.

Quick test between my local machine and my cloud server resulted in a success.

Here’s how:

Let's call my PC Machine A and his Machine B.

I created a user for him on my system and got him to run this on his box:

$ ssh -R 2222:localhost:22 [email protected]

This connected him to my Machine A and forwarded port 2222 on Machine A to port 22 on his Machine B.

Then it was just a matter of me running the following:

$ ssh localhost -p 2222

There you go, access to his machine without pulling my hair trying to explain stuff to him! win!

See the list of software installed on your Debian|Ubuntu

To see the list of packages installed on your system running Debian|Ubuntu, just run:

$ dpkg --get-selections
acpi install
acpi-support-base install
acpid install
adduser install
analog install
apache2 install
apache2-doc install
apache2-mpm-prefork install
apache2-utils install
apache2.2-bin install
apache2.2-common install

If you are looking for a specific package you can pipe the output to grep, for example:

$ dpkg --get-selections | grep nmap
nmap install
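The --get-selections output is just package/state pairs, so it also pipes nicely into awk. A small sketch on sample data (the package names below are made up):

```shell
# Sample `dpkg --get-selections` style output (hypothetical packages)
selections=$(printf 'acpi\tinstall\nnmap\tinstall\noldtool\tdeinstall\n')

# keep only the packages still marked "install"
echo "$selections" | awk '$2 == "install" { print $1 }'
# → acpi
# → nmap
```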

You can as well locate the files that belong to the package by running:

$ dpkg -L nmap

To see the version of the package that is installed/available:

$ dpkg --list nmap

This will give you:

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                              Version               Architecture          Description
ii  nmap                              6.00-0.3              amd64                 The Network Mapper

Failing/Replacing a drive in Linux Software Raid

Let's assume this is the drive setup:

/dev/sda1 + /dev/sdb1 = /dev/md0
/dev/sda2 + /dev/sdb2 = /dev/md1

That is, /dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0, while /dev/sda2 and /dev/sdb2 make up /dev/md1.

Check the state of the arrays with:

# cat /proc/mdstat

If you have a degraded RAID1 array you will see the string [U_] instead of [UU].

If it is not showing as degraded and you need to remove /dev/sdb, mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1).

First mark /dev/sdb1 as failed:

# mdadm --manage /dev/md0 --fail /dev/sdb1


# cat /proc/mdstat

and you should see something like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[2](F)
      24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>

Then remove /dev/sdb1 from /dev/md0 with:

# mdadm --manage /dev/md0 --remove /dev/sdb1

Now we do the same steps again for /dev/sdb2:

# mdadm --manage /dev/md1 --fail /dev/sdb2
# mdadm --manage /dev/md1 --remove /dev/sdb2

NOTE: If you were removing the sda drive, for example, remember to install Grub on the sdb device prior to rebooting, or you're going to have a bad time.

Shutdown the server:

# shutdown -h now

and replace the old /dev/sdb hard drive with a new one.

Again, if you removed the sda drive you might need to change the boot order in the BIOS to boot from the working drive.

After adding the new hard disk, copy the partition layout from the old drive to the new one:

# sfdisk -d /dev/sda | sfdisk /dev/sdb

Check the partitioning on both drives:

# fdisk -l

If the layouts match, add the new partitions back to their respective arrays:

server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1
server1:~# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: re-added /dev/sdb2

Now both arrays (md0 & md1) will re-sync. Run:

# cat /proc/mdstat

to see the progress.

During the sync the output will look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      24418688 blocks [2/1] [U_]
      [=>...................] recovery = 9.9% (2423168/24418688) finish=2.8min speed=127535K/sec

md1 : active raid1 sda2[0] sdb2[1]
      24418688 blocks [2/1] [U_]
      [=>...................] recovery = 6.4% (1572096/24418688) finish=1.9min speed=196512K/sec

unused devices: <none>

Recovering Linux software raid

I encountered a drive failure in a server the other day and proceeded to swap the drive. The server was set up with software raid, and going by all the servers I work on, the layout should have been something along the lines of:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0] sda1[1]
104320 blocks [2/2] [UU]
md2 : active raid1 sdb5[0] sda5[1]
226532416 blocks [2/2] [UU]
md1 : active raid1 sdb2[0] sda2[1]
10241344 blocks [2/2] [UU]
unused devices: <none>

Unfortunately the server kept constantly throwing the following error on the screen:

iscsi 0:0:1:0: rejecting I/O to dead device

which would not allow me to log in to fail the device and remove it from the mdadm array.

Shutting it down and swapping the drive should work, so I proceeded to do so. The outcome: kernel panic.

I had no idea what caused it; I tried single user mode, no luck. I figured it might be trying to find the partitions on the new hdd to add to the mdadm configuration to boot from, though I was sceptical about that.

Next step: boot into recovery and try to access mdadm. Off I go, recovery cd in the drive, and boot.

Ok, I am still pretty new to this, but I had my hopes high. Booted from the cd:

# linux rescue

A set of standard screens to set up keyboard layout and locale, then it went looking for the root file system: not found.

press any key to go to bash

I'm here, what now? Tried mounting sda2 (which I knew was root from the standard setup), no luck. Ha, I knew it wouldn't be this easy; it is Linux Software Raid after all. I have to mount the md1 device then, and now the question is: how?

After a consultation with Google 😀 I had the steps.

First I edited/created /etc/mdadm.conf:

# vi /etc/mdadm.conf

And added the devices that were used to create the md devices:

DEVICE /dev/sd[ab]1
DEVICE /dev/sd[ab]2
DEVICE /dev/sd[ab]5


# mdadm --examine --scan >> /etc/mdadm.conf

This appends entries for any RAID arrays it finds on the devices specified.
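For reference, a scanned mdadm.conf typically ends up along these lines (the UUIDs are placeholders; yours come from the --examine output):

```
DEVICE /dev/sd[ab]1
DEVICE /dev/sd[ab]2
DEVICE /dev/sd[ab]5
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<uuid from examine>
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=<uuid from examine>
```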

# mdadm --assemble --scan /dev/md0
# mdadm --assemble --scan /dev/md1

This will assemble the raid devices to be used.

In my case the / partition was on md1 as suspected; however, it was set up as RAID 0 and ended up being rebuilt from scratch.

Some good info on the way though 😀

How to: SSH with private key auth

To get this set up you will need to make some changes to the SSH configuration file. This how-to was done on Fedora (a RedHat based distro); other Linux distros might have the config file in a different location, so adjust accordingly or ask in the comments and I will try to assist as much as I can.

You can access the configuration file with your favourite text editor, I myself will be using VIM.

Since I did not set up any users I am logged into the server as root, I will create extra users as soon as one is required.

# vi /etc/ssh/sshd_config

You will be presented with the config file; the lines we are going to concentrate on are the following.

The Port setting:

My (and I assume most other linux users') recommendation is to change this to a non-default port. The first time I set up ssh access and opened the default port (22) on my router, I received around 360 break-in attempts in less than 4 hours. I am not saying that changing the port will render your server 100% secure, but it will make it a little less visible to others.

For the purpose of this tutorial I have changed the port number to 2222; you are free to change it to one of your liking. I found adding an extra line, instead of un-commenting the one that was there, to be a good idea: it helps to bring the config back to default much more easily.

#Port 22
Port 2222

The LoginGraceTime option is responsible for how long an unauthenticated session can stay open. I would recommend changing this to something less than the default 2 minutes; there is no reason an authentication session should be open for that long, and around 30 seconds should be well enough for you to authenticate.

#LoginGraceTime 2m
LoginGraceTime 30

A good idea is to disable the root login, you can always use su, su -c or sudo once authenticated with your server.

#PermitRootLogin yes
PermitRootLogin no

Then we change the number of failed attempts allowed before the auth session closes; three should be well enough.

#MaxAuthTries 6
MaxAuthTries 3

Save the file and restart your ssh daemon.

# service sshd restart
Restarting sshd (via systemctl): [ OK ]

Now, since I have disabled root login via SSH, I will need a user that can make use of it, as well as a password set up for him.

# useradd -m -s /bin/bash jondoe
# passwd jondoe
Changing password for user jondoe.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

We are almost ready to ssh back into the system as the newly created user and create the pair of public and private ssh auth keys.

First we need to allow the port specified for ssh access through the firewall; we do so by running:

# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 2222 -j ACCEPT

and let's restart iptables with:

# service iptables restart
Restarting iptables (via systemctl): [ OK ]

If you are running SELinux you will also need to run this command:

# semanage port -a -t ssh_port_t -p tcp 2222

Now let's ssh into the server as the user and create a pair of auth keys:

# ssh [email protected] -p 2222
The authenticity of host '[localhost]:2222' can't be established.
RSA key fingerprint is 47:c6:4f:da:da:4b:5a:da:e8:84:5a:ab:aa:55:2b:68.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:2222' (RSA) to the list of known hosts.
[email protected]'s password:
$ ssh-keygen -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jondoe/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/jondoe/.ssh/id_rsa.
Your public key has been saved in /home/jondoe/.ssh/id_rsa.pub.
The key fingerprint is:
The key’s randomart image is:

This will create the pair of keys and save them in the ~/.ssh/ directory under the names id_rsa (the private key, secured by the passphrase if you set one) and id_rsa.pub (the public key).
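One gotcha worth mentioning: sshd (with its default StrictModes setting) will silently ignore the key if ~/.ssh or authorized_keys are group/world writable, so make sure the permissions are locked down:

```shell
# sshd refuses keys from a loosely-permissioned ~/.ssh, so tighten it:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```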

Move the private key to a usb stick/portable HDD so you always have it with you and are able to access your server. We will now rename the public key so ssh can use it:

$ mv .ssh/id_rsa.pub .ssh/authorized_keys
$ exit

This will bring us back to the root prompt. It's time to disable password authentication in sshd_config. Again, open it with your favourite text editor and look for the "PasswordAuthentication" lines:

# vi /etc/ssh/sshd_config
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
PasswordAuthentication no

Change the occurrence that has no # in front so it reads no.

Save the file, restart the sshd service again and you're done. The next time you want to ssh into your server from another linux box you will have to use the -i option and point it at your private key, for example:

$ ssh -i ~/id_rsa [email protected] -p 2222
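Typing the -i and -p options every time gets old; an entry in ~/.ssh/config on the client side takes care of both (the Host alias and paths below are examples, adjust to your own):

```
Host myserver
    HostName server.address.com
    Port 2222
    User jondoe
    IdentityFile ~/id_rsa
```

After that you can simply run:

$ ssh myserver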