Setting up a Cisco Switch from Scratch

This blog is probably going to be really “no-duh” for most people, but I’ve had questions over the years on how to set up a switch from scratch and enable remote management on it. So, I wiped my switch config and started over. After reloading the switch, I was brought to the “Initial Configuration Dialog”. You can choose whether or not to go through it; the initial config basically just sets up an IP address for management, a username, and the “enable” password. You can see below what the init dialog looks like.


From there, you’ll have just a few more things to do in order to have a base config up and running with remote access enabled. We need to specify the domain name, generate an RSA key pair, enforce SSH version 2, and then set up the VTY lines. Let’s get that done here:

Erdmanor3750G#  conf t
Erdmanor3750G(config)#  ip domain-name
Erdmanor3750G(config)#  crypto key generate rsa general-keys modulus 2048
The name for the keys will be:

% The key modulus size is 2048 bits
% Generating 2048 bit RSA keys... [OK]
00:15:32 %SSH-5-ENABLED: SSH 1.99 has been enabled

Erdmanor3750G(config)#  ip ssh version 2
Erdmanor3750G(config)#  line vty 0 15
Erdmanor3750G(config-line)#  transport input ssh
Erdmanor3750G(config-line)#  login local
Erdmanor3750G(config-line)#  exit
Erdmanor3750G(config)#  username steve privilege 15 password MyP@ssW0rd
Erdmanor3750G(config)#  service password-encryption

Now we can go back to our Linux box and log in from the command line.

steve @ debianvm ~ :) ##   ssh 3
The authenticity of host ' (' can't be established.
RSA key fingerprint is 11:4e:b6:34:72:23:9a:0f:03:28:f0:e2:c9:b7:cc:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (RSA) to the list of known hosts.
steve@'s password:
Connection to closed.
steve @ debianvm ~ :) ##

Hope this was helpful!


Setting up a TFTP server in Debian/Ubuntu

I’ve needed to set up a TFTP server for various reasons in the past. Most recently, I needed it in order to upload files (OS images, VPN clients, etc.) to Cisco routers, switches and ASA firewalls. So this blog is for the sole purpose of setting up a TFTP server.

I need to stress and emphasize the security issues that TFTP servers have. There are no logon credentials, the protocol is all plain text, and there is no file security for any files supplied by the TFTP server. So make sure that you are only putting files on this server that you consider “compromisable”. If you’re going to be backing up files to this server (running configs, especially), then you should do everything in your power to limit access to this machine with firewall rules. For large networks, I would recommend using a product like CatTools.
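For example, restricting TFTP to a single management host with iptables might look something like this. The 192.0.2.10 address is a placeholder for your own admin box, and the rules are only printed here rather than applied:

```shell
# Hypothetical rules restricting TFTP (69/udp) to one management host.
# 192.0.2.10 is a placeholder -- substitute your own admin machine.
MGMT_HOST=192.0.2.10
RULES="iptables -A INPUT -p udp --dport 69 -s $MGMT_HOST -j ACCEPT
iptables -A INPUT -p udp --dport 69 -j DROP"
echo "$RULES"
```

You would run those two iptables commands as root on the TFTP server itself.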

Alright, so let’s see here. First off, you’re going to need to install some software.

steve @ steve-G75VX ~ :) ##   sudo apt-get update
[sudo] password for steve:
Fetched 916 kB in 8s (112 kB/s)                                                                                                                                                                                                            
Reading package lists... Done
steve @ steve-G75VX ~ :) ##   sudo apt-get install xinetd tftpd tftp
Reading package lists... Done
Building dependency tree      
Reading state information... Done
xinetd is already the newest version.
tftp is already the newest version.
tftpd is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 38 not upgraded.
steve @ steve-G75VX ~ :) ##

Now that we have our software installed, we need to configure our TFTP daemon to run.

Start by creating a new file and pasting in this info:

steve @ steve-G75VX ~ :) ##   sudo nano /etc/xinetd.d/tftp
service tftp
{
        protocol        = udp
        port            = 69
        socket_type     = dgram
        wait            = yes
        user            = nobody
        server          = /usr/sbin/in.tftpd
        server_args     = /tftp
        disable         = no
}
steve @ steve-G75VX ~ :) ##

Things to remember here: you’re specifying the default port of 69/udp, and the daemon will run as the user “nobody”, so that user needs access to the served files.

Now that we have that done, we can create our directory and set permissions:

steve @ steve-G75VX ~ :) ##   sudo mkdir /tftp
steve @ steve-G75VX ~ :) ##   sudo chmod -R 777 /tftp
steve @ steve-G75VX ~ :) ##   sudo chown -R nobody /tftp

All that’s left is to start the service. Either of these two commands will restart xinetd:

steve @ steve-G75VX ~ :) ##   sudo service xinetd restart

steve @ steve-G75VX ~ :) ##   sudo /etc/init.d/xinetd restart

Just test to make sure that the service is running:

steve @ steve-G75VX ~ :) ##   ps aux | grep xinet
root      7049  0.0  0.0  15024   456 ?        Ss   Oct22   0:00 /usr/sbin/xinetd -pidfile /run/ -stayalive -inetd_compat -inetd_ipv6
steve    16301  0.0  0.0  15188  1984 pts/3    S+   17:25   0:00 grep --color=auto xinet
steve @ steve-G75VX ~ :) ##  
steve @ steve-G75VX ~ :) ##   netstat -tulanp | grep 69
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
udp        0      0    *                           -              
steve @ steve-G75VX ~ :) ##

And we’re done!


Debian Backups, the Command Line Way…

I’ve been wanting to write a blog on this for a long time, since I’ve actually had this backup method running in my environment for years. It’s super easy to set up and, while (thank god) I’ve never had to recover a full system from a backup, I have been able to go back and recover individual files from my backups. For the environment setup, you’ll need at least one Linux box that needs backing up, and at least one NAS or other file storage server that runs an SSH server. I perform all my backups to online disk storage based on FreeNAS. There are plenty of NAS environments, and I’m not saying FreeNAS is the best or the worst, but I like it and it works for me. It works extremely well with Linux, Windows and Mac OS X.

There are two parts to this:

  • 1. Manual backups
  • 2. Automated backups

    Let’s start with the manual backups, because once we have a manual backup working, we can easily turn it into a script and run it from CRON.

    First, we need to specify the directories we don’t want to back up, in a file that is accessible to root. Let’s list the directories in “/” first.

    steve @ steve-G75VX ~ :) ##   ll /
    total 18M
    drwxr-xr-x  25 root   root 4.0K Oct 22 14:54 ./
    drwxr-xr-x  25 root   root 4.0K Oct 22 14:54 ../
    drwxr-xr-x   2 root   root 4.0K Aug 14 02:03 bin/
    drwxr-xr-x   4 root   root 3.0K Oct  3 11:39 boot/
    drwxrwxr-x   2 root   root 4.0K May 21 11:52 cdrom/
    -rw-------   1 root   root  18M Oct  3 11:40 core
    drwxr-xr-x  24 root   root 4.8K Oct 31 12:38 dev/
    drwxr-xr-x 148 root   root  12K Oct 27 20:37 etc/
    drwxr-xr-x   3 root   root 4.0K May 21 11:53 home/
    lrwxrwxrwx   1 root   root   33 Aug 14 02:06 initrd.img -> boot/initrd.img-3.19.0-25-generic
    lrwxrwxrwx   1 root   root   33 Jul 10 08:56 initrd.img.old -> boot/initrd.img-3.19.0-22-generic
    drwxr-xr-x  26 root   root 4.0K Oct 13 13:41 lib/
    drwxr-xr-x   2 root   root 4.0K May 21 12:41 lib32/
    drwxr-xr-x   2 root   root 4.0K Apr 22  2015 lib64/
    drwx------   2 root   root  16K May 21 11:47 lost+found/
    drwxr-xr-x   3 root   root 4.0K May 21 12:01 media/
    drwxr-xr-x   2 root   root 4.0K Apr 17  2015 mnt/
    drwxr-xr-x   6 root   root 4.0K Oct 20 11:28 opt/
    dr-xr-xr-x 283 root   root    0 Oct 21 20:30 proc/
    drwx------   4 root   root 4.0K Oct 27 16:57 root/
    drwxr-xr-x  30 root   root 1.1K Oct 27 20:50 run/
    drwxr-xr-x   2 root   root  12K Aug 14 02:03 sbin/
    drwxr-xr-x   2 root   root 4.0K Apr 22  2015 srv/
    dr-xr-xr-x  13 root   root    0 Oct 22 14:55 sys/
    drwxrwxrwx   2 nobody root 4.0K Oct 22 17:55 tftp/
    drwxrwxrwt  18 root   root 4.0K Nov  1 15:17 tmp/
    drwxr-xr-x  11 root   root 4.0K May 21 12:41 usr/
    drwxr-xr-x  13 root   root 4.0K Apr 22  2015 var/
    lrwxrwxrwx   1 root   root   30 Aug 14 02:06 vmlinuz -> boot/vmlinuz-3.19.0-25-generic
    lrwxrwxrwx   1 root   root   30 Jul 10 08:56 vmlinuz.old -> boot/vmlinuz-3.19.0-22-generic

    So, based on this, we’ll exclude like this:

    steve @ steve-G75VX ~ :) ##   sudo mkdir /backups
    [sudo] password for steve:
    steve @ steve-G75VX ~ :) ##   sudo touch /backups/exclude.list
    steve @ steve-G75VX ~ :) ##   sudo nano /backups/exclude.list
    steve @ steve-G75VX ~ :) ##  


    (Ctrl+x to quit, then y to save)
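    My actual exclude-list contents didn’t survive into this post, so here’s a sketch of a typical exclude list for a full-system rsync backup: it skips the virtual and transient filesystems. (I’m writing it to /tmp purely for illustration; the real one lives at /backups/exclude.list.)

```shell
# Hypothetical exclude list -- adjust to taste; these paths are the usual suspects.
cat > /tmp/exclude.list <<'EOF'
/proc/*
/sys/*
/dev/*
/run/*
/tmp/*
/mnt/*
/media/*
/lost+found
/backups
EOF
cat /tmp/exclude.list
```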

    Now that we have our directory and exclude list set up, we need to make sure rsync is installed on our system.

    steve @ steve-G75VX ~ :) ##   sudo apt-get update
    Fetched 1,743 kB in 21s (79.7 kB/s)
    Reading package lists... Done
    steve @ steve-G75VX ~ :) ##   sudo apt-get install rsync
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    rsync is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 38 not upgraded.
    steve @ steve-G75VX ~ :) ##

    Now that we have rsync installed and our backup exclusions defined, let’s get our backups started.

    First, edit your .bashrc file in your home directory and add this line:

    alias backupall='sudo rsync -athvz --delete / steve@

    “What does all this do?” you might ask… well, it’s quite simple really.

    First, we create an alias for your shell named “backupall”, because we’ll be performing full system backups with it.

    Next, we call “rsync” to run as root, and ask it to run with the switches -a, -t, -h, -v and -z.

  • -a = run in archive mode, which equals -rlptgoD (no -H, -A, -X)
  • -t = preserve modification times on your files
  • -h = output numbers in a human-readable format
  • -v = run verbosely
  • -z = compress file data during the transfer
  • And lastly, “--delete” means (from the rsync man page): “This tells rsync to delete extraneous files from the receiving side (ones that aren’t on the sending side), but only for the directories that are being synchronized. You must have asked rsync to send the whole directory (e.g. “dir” or “dir/”) without using a wildcard for the directory’s contents (e.g. “dir/*”) since the wildcard is expanded by the shell and rsync thus gets a request to transfer individual files, not the files’ parent directory. Files that are excluded from the transfer are also excluded from being deleted unless you use the --delete-excluded option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section).”

    Next is the “/”, which means we’re backing up everything from the root of the filesystem.

    Lastly, we’re specifying the destination. In this case, we’re doing RSYNC over SSH, so we’ll be specifying a location in the way that you would specify a destination in SCP.

    Now test running your backup. I’ve run mine before, so my update is pretty quick. But this is going to back up your whole system, so expect it to take a while.

    steve @ steve-G75VX ~ :( ᛤ>   backupallnas
    steve@'s password:
    sending incremental file list

    sent 1.09M bytes  received 50.77K bytes  58.56K bytes/sec
    total size is 1.91G  speedup is 1673.17
    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
    steve @ steve-G75VX ~ :( ᛤ>

    Now we need to create our script, and make it executable.

    root @ steve-G75VX ~ :) ##   nano /backups/backupall
    root @ steve-G75VX ~ :) ##   chmod +x /backups/backupall
    root @ steve-G75VX ~ :) ##   ll /backups/backupall
    -rwxr-xr-x 1 root root 96 Nov  1 17:02 /backups/backupall*
    root @ steve-G75VX ~ :) ##

    I added this one line to the backup file:

    sudo rsync -athvz --delete / steve@

    This looks pretty good! Now that we have a full backup of our machine, let’s get this set up in CRON.

    steve @ steve-G75VX ~ :) ##   sudo su
    root @ steve-G75VX ~ :) ##   crontab -l
    no crontab for root
    root @ steve-G75VX ~ :( ##   crontab -e
    no crontab for root - using an empty one

    Select an editor.  To change later, run 'select-editor'.
      1. /bin/ed
      2. /bin/nano        <---- easiest
      3. /usr/bin/vim.tiny

    Choose 1-3 [2]: 2
    crontab: installing new crontab
    root @ steve-G75VX ~ :) ##

    The line that I added to CRON was this:

    0 3 * * * /backups/backupall >/dev/null 2>&1

    This states that the script will run every day at 3:00 AM, with its output discarded to /dev/null.
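    To see how the five schedule fields break down, you can pull an entry like this one apart in the shell (the fields are minute, hour, day-of-month, month, and day-of-week, followed by the command):

```shell
# Crontab fields: minute hour day-of-month month day-of-week command...
entry='0 3 * * * /backups/backupall >/dev/null 2>&1'
set -f             # disable globbing so the *'s survive word-splitting
set -- $entry      # split the entry into its fields
echo "runs at $2:0$1 every day"    # -> runs at 3:00 every day
```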

    From here we need to make sure our local system can perform password-less logon to the SSH server. To do that we’ll be working off of a prior blog I wrote on SSH Keys, here: Using SSH Keys to simplify logins to remote systems.

    You’ll want to test that your system can SSH to your remote system without entering a password. As long as that works, we’re good to go!

    That’s it! It’s that simple!

    I have run into issues on some machines where SSH keys don’t work. I haven’t had the time to troubleshoot why, so I found a different way to make backups work without using SSH keys. The downside is that this is MUCH less secure, and I really don’t recommend running it in a production setting. But for home or non-business use, you’re probably just fine.

    So to do this, we’re going to use the “sshpass” package. It’s available for Debian and Ubuntu, and I’m sure it’s out there for other Linux/Unix systems as well.

    root @ steve-G75VX ~ :) ##   sudo apt-get install sshpass
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    The following NEW packages will be installed:
    0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
    Need to get 10.5 kB of archives.
    After this operation, 56.3 kB of additional disk space will be used.
    Get:1 vivid/universe sshpass amd64 1.05-1 [10.5 kB]
    Fetched 10.5 kB in 0s (65.3 kB/s)  
    Selecting previously unselected package sshpass.
    (Reading database ... 258807 files and directories currently installed.)
    Preparing to unpack .../sshpass_1.05-1_amd64.deb ...
    Unpacking sshpass (1.05-1) ...
    Processing triggers for man-db ( ...
    Setting up sshpass (1.05-1) ...
    root @ steve-G75VX ~ :) ##

    Go ahead and test logging into your NAS box, or any box really, with this. The idea is that, when you’re scripting, you need to log on to remote systems without a password. If you can’t use SSH keys, then this is your next best bet. Create a file in “root’s” home dir and name it whatever you want. I named mine “backup.dat”. It must contain only the password you use to log into your remote machine, on one line, all by itself.

    root @ steve-G75VX ~ :) ##   nano ~/backup.dat
    root @ steve-G75VX ~ :) ##   chmod 600 backup.dat
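    If you’d rather not open an editor, the same file can be created non-interactively. The password below is obviously a placeholder, and I’m writing to /tmp purely for illustration; keep the real file in root’s home dir:

```shell
# Create the password file and lock it down to the owner only.
# 'PLACEHOLDER-password' stands in for your real NAS password.
printf '%s\n' 'PLACEHOLDER-password' > /tmp/backup.dat
chmod 600 /tmp/backup.dat
stat -c '%a' /tmp/backup.dat    # -> 600
```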

    You’ll call “sshpass” with -f and the file containing the password, then the location of your “ssh” program, then -p and the port number (the default port for SSH is 22), followed by the login in the format “user@machine-ip”.

    root @ steve-G75VX ~ :) ##   sshpass -f /root/backup.dat /usr/bin/ssh -p 22 steve@
    Last login: Sun Nov  1 17:22:08 2015 from
    FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20 12:48:50 PST 2013

        FreeNAS (c) 2009-2013, The FreeNAS Development Team
        All rights reserved.
        FreeNAS is released under the modified BSD license.

        For more information, documentation, help or support, go here:
    Welcome to FreeNAS
    [steve@freenas ~]$ exit
    Connection to closed.
    root @ steve-G75VX ~ :) ##

    Okay, now that we’ve tested this and know it’s working, let’s modify our script and get it working with “sshpass”.

    root @ steve-G75VX ~ :) ##   /usr/bin/rsync -athvz --delete --rsh="/usr/bin/sshpass -f /root/backup.dat ssh -o StrictHostKeyChecking=no -l YourUserN@me" /home/steve steve@

    Now test to make sure the script is working (as soon as you see the incremental file list being sent, you know it’s working properly):

    root @ steve-G75VX ~ :) ##   /usr/bin/rsync -athvz --delete --rsh="/usr/bin/sshpass -f /root/backup.dat ssh -o StrictHostKeyChecking=no -l steve" /home/steve steve@
    sending incremental file list
    ^Crsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(632) [sender=3.1.1]
    root @ steve-G75VX ~ :) ##
    root @ steve-G75VX ~ :) ##
    root @ steve-G75VX ~ :) ##   /backups/backupall
    sending incremental file list
    ^Crsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(632) [sender=3.1.1]
    root @ steve-G75VX ~ :( ##



    How-to: SCP files from ASA

    This is a quick and simple blog; just notes, really, on how to use SCP/SSH to download files off of an ASA. It comes in handy for scripting purposes, and I thought I would share it for everyone to see.

    First things first, we need to enable SSH and Secure Copy (SCopy) on our ASA. We can accomplish this by entering config mode and then issuing two different “ssh” commands:

    steve @ phiberoptiklmde ~ :) ##  ssh steve@
    pomeroy@'s password:
    Type help or '?' for a list of available commands.
    MyASA5510> en
    Password: ***********
    MyASA5510# conf t
    MyASA5510(config)#ssh Inside
    MyASA5510(config)#ssh scopy enable
    Cryptochecksum: 0d46cc75 79177ae7 9069c9a8 94153d78

    8184 bytes copied in 0.690 secs

    The first “ssh” command allows anyone to connect to this from the “Inside” interface of our ASA. This is NOT secure. In a real production environment, we should lock this down to a specific IP address, a handful of IP addresses, or a management network.

    The second “ssh” command tells the ASA to enable “scopy”, which basically means that you can connect to the ASA with an SCP client and download files.

    From here we can just use our Linux machine to download the file to whatever folder you want to save your files to:
    1. Start with “scp”, then your user account at the IP of the machine: “scp steve@”.
    2. Next, give it the name of an actual file that exists on the ASA. If you log into the ASA and issue the “dir” command from enable mode, you can get a listing of all files on the local flash drive.
    3. Lastly, specify the path that you want to save the file to.

    It’s that easy!

    steve @ phiberoptiklmde ~ :) ##   scp steve@ /home/steve/Desktop/anyconnect-win-3.1.05152-k9.pkg
    serdman@'s password:
    anyconnect-win-3.1.05152-k9.pkg                                                                                                                                                                           100%   34MB 212.0KB/s   02:42    
    Connection to closed by remote host.
    steve @ phiberoptiklmde ~ :) ##   scp steve@ /home/steve/Desktop/penvpn01-anyconnect/anyconnect-macosx-i386-3.1.02040-k9.pkg
    serdman@'s password:
    anyconnect-macosx-i386-3.1.02040-k9.pkg                                                                                                                                                                   100%   11MB 226.7KB/s   00:48    
    Connection to closed by remote host.
    steve @ phiberoptiklmde ~ :) ##   scp steve@ /home/steve/Desktop/anyconnect-linux-3.1.02040-k9.pkg
    serdman@'s password:
    anyconnect-linux-3.1.02040-k9.pkg                                                                                                                                                                         100%   11MB 317.9KB/s   00:34    
    Connection to closed by remote host.
    steve @ phiberoptiklmde ~ :) ##   scp steve@ /home/steve/Desktop/anyconnect-linux-64-3.1.02040-k9.pkg
    serdman@'s password:
    anyconnect-linux-64-3.1.02040-k9.pkg                                                                                                                                                                      100% 9735KB 314.0KB/s   00:31    
    Connection to closed by remote host.
    steve @ phiberoptiklmde ~ :) ##   scp steve@ /home/steve/Desktop/anyconnect-macosx-i386-3.1.05152-k9.pkg
    serdman@'s password:
    anyconnect-macosx-i386-3.1.05152-k9.pkg                                                                                                                                                                   100%   11MB 334.6KB/s   00:34  
    Connection to closed by remote host.  
    steve @ phiberoptiklmde ~ :) ##   scp steve@ /home/steve/Desktop/anyconnect-linux-64-3.1.05152-k9.pkg
    serdman@'s password:
    anyconnect-linux-64-3.1.05152-k9.pkg                                                                                                                                                                      100%   10MB 343.9KB/s   00:31  
    Connection to closed by remote host.
    steve @ phiberoptiklmde ~ :) ##   scp steve@ /home/steve/Desktop/anyconnect-linux-3.1.05152-k9.pkg
    serdman@'s password:
    anyconnect-linux-3.1.05152-k9.pkg                                                                                                                                                                         100%   10MB 341.5KB/s   00:31    
    Connection to closed by remote host.
    steve @ phiberoptiklmde ~ :) ##


    Backing up Cisco Configurations for Routers, Switches and Firewalls

    I will add more about this when I have time. Until then, you should be able to just install python, paramiko and pexpect and run this script as-is (obviously changing the variables).

    This should give you all the software you need:

    sudo apt-get update
    sudo apt-get install python python-pexpect python-paramiko

    I plan on GREATLY increasing the capability of this script, adding additional functionality, as well as setting up a bash script that can parse the configs and perform much deeper backups for ASAs.

    I have not tested this on Routers and Switches. I can tell you that the production 5520 HA Pair that I ran this script against was running “Cisco Adaptive Security Appliance Software Version 8.4(2)160”. Theoretically, I would believe that this would work with all 8.4 code and up, including the 9.x versions that are out as of the writing of this blog.

    Here you go! A fully scripted interrogation of a Cisco ASA 5520 that can be set up to run from a CRON job.

    import paramiko, pexpect, hashlib, StringIO, re, getpass, os, time, ConfigParser, sys, datetime, cmd, argparse




    parser = argparse.ArgumentParser(description='Get "show version" from a Cisco ASA.')
    parser.add_argument('-u', '--user',     default='cisco', help='user name to login with (default=cisco)')
    parser.add_argument('-p', '--password', default='cisco', help='password to login with (default=cisco)')
    parser.add_argument('-e', '--enable',   default='cisco', help='password for enable (default=cisco)')
    parser.add_argument('-d', '--device',   required=True,   help='device to login to')
    args = parser.parse_args()


    # map the parsed arguments onto the globals used by the functions below;
    # currentdate and currenthostname were originally passed in on the command line:
    #python $currentdate $currentipaddress $tacacsuser $userpass $enpass $currenthostname
    tacacsuser = args.user
    userpass = args.password
    enpass = args.enable
    asahost = args.device
    currentdate = time.strftime("%m-%d-%Y")
    currenthostname = args.device

    def asaLogin():
        # start ssh
        child = pexpect.spawn ('ssh '+tacacsuser+'@'+asahost)
        #testing to see if I can increase the buffer
        # expect password prompt
        child.expect ('.*assword:.*')
        # send password
        child.sendline (userpass)
        # expect user mode prompt
        child.expect ('.*>.*')
        # send enable command
        child.sendline ('enable')
        # expect password prompt
        child.expect ('.*assword:.*')
        # send enable password
        child.sendline (enpass)
        # expect enable mode prompt
        child.expect ('#.*', timeout=10)
        # set term pager to 0
        child.sendline ('terminal pager 0')
        # expect enable mode prompt
        child.expect ('#.*', timeout=10)
        # run the collection functions
        createDir()
        showVersion(child)
        showRun(child)
        showCryptoIsakmp(child)
        dirDisk0(child)
        showInterfaces(child)
        showRoute(child)
        showVpnSessionDetail(child)
        showWebVpnSessions(child)
        showAnyConnectSessions(child)
        # send exit and close the ssh session
        child.sendline ('exit')
        child.close()
    def createDir():
        if not os.path.exists(currentdate):
            os.makedirs(currentdate)
        if not os.path.exists(currentdate+"/"+currenthostname):
            os.makedirs(currentdate+"/"+currenthostname)
    def showVersion(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"sh-ver.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending show version
        child.sendline('show version')
        # expect enable mode prompt
        child.expect(".*# ", timeout=50)
        # closing the log file
        fout.close()
    def showRun(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"sh-run.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending more system running-config
        child.sendline('more system:running-config')
        # expect enable mode prompt
        child.expect(".*# ", timeout=999)
        # closing the log file
        fout.close()

    def showCryptoIsakmp(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"cryptoisakmp.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending show crypto isakmp sa
        child.sendline('show crypto isakmp sa')
        # expect enable mode prompt
        child.expect(".*# ", timeout=50)
        # closing the log file
        fout.close()

    def dirDisk0(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"dirdisk0.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending dir disk0:
        child.sendline('dir disk0:')
        # expect enable mode prompt
        child.expect(".*# ", timeout=75)
        # closing the log file
        fout.close()

    def showInterfaces(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"interfaces.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending show interface
        child.sendline('show interface')
        # expect enable mode prompt
        child.expect(".*# ", timeout=100)
        # closing the log file
        fout.close()

    def showRoute(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"show-route.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending show route
        child.sendline('show route')
        # expect enable mode prompt
        child.expect(".*# ", timeout=300)
        # closing the log file
        fout.close()

    def showVpnSessionDetail(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"vpnsession.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending show vpn-sessiondb detail
        child.sendline('show vpn-sessiondb detail')
        # expect enable mode prompt
        child.expect(".*# ", timeout=50)
        # closing the log file
        fout.close()

    def showWebVpnSessions(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"webvpns.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending show vpn-sessiondb webvpn
        child.sendline('show vpn-sessiondb webvpn')
        # expect enable mode prompt
        child.expect(".*# ", timeout=200)
        # closing the log file
        fout.close()

    def showAnyConnectSessions(child):
        # setting a new file for output
        fout = open(currentdate+"/"+currenthostname+"/"+time.strftime("%m-%d-%Y")+"anyconnectvpns.txt",'w')
        # capturing the command output to the file
        child.logfile_read = fout
        # sending show vpn-sessiondb anyconnect
        child.sendline('show vpn-sessiondb anyconnect')
        # expect enable mode prompt
        child.expect(".*# ", timeout=999)
        # closing the log file
        fout.close()

    def main():
        print("Nothing has been executed yet")
        # executing asaLogin function
        asaLogin()
        print("Finished running parTest\n\n Now exiting")

    main()


    Here are all the websites that have provided help to me writing these scripts:


    Creating a basic monitoring server for network devices

    I’ve recently been working more and more with network device management. So, to help with up-time monitoring, interface statistics, bandwidth utilization, and alerting, I’ve been building up a server with some great Open Source tools. My clients love it because it costs virtually nothing to run these machines, and it helps keep the network running smoothly when we know what is going on within the network.

    One thing I haven’t been able to do yet is SYSLOG monitoring with the ability to generate email alerts off of specific SYSLOG messages. That’s in the works, and I’ll add that information to this blog as soon as I get it up and running properly.

    I am using Debian 7.6 for this Operating System. Mainly because it’s very stable, very small, and doesn’t update as frequently (making it easier to manage). You can follow a basic install of this OS from here: Debian Minimal Install. That will get you up and running and we’ll take it from there.

    Okay, now that you have an OS running, go ahead and open up a command prompt and log in as your user account or “root”, then go ahead and “sudo su”.

    Now we will update apt:

    apt-get update


    From here, let’s get LAMP installed and running so our web services will run properly.

    apt-get install apache2
    apt-get install mysql-server
    apt-get install php5 php-pear php5-mysql


    Now that we have that all set up, let’s secure MySQL a bit:

    mysql_secure_installation

    When you run through this, make sure to answer these questions:

    root@testmonitor:/root# mysql_secure_installation


    In order to log into MySQL to secure it, we'll need the current
    password for the root user.  If you've just installed MySQL, and
    you haven't set the root password yet, the password will be blank,
    so you should just press enter here.

    Enter current password for root (enter for none):
    OK, successfully used password, moving on...

    Setting the root password ensures that nobody can log into the MySQL
    root user without the proper authorisation.

    You already have a root password set, so you can safely answer 'n'.

    Change the root password? [Y/n] n
     ... skipping.

    By default, a MySQL installation has an anonymous user, allowing anyone
    to log into MySQL without having to have a user account created for
    them.  This is intended only for testing, and to make the installation
    go a bit smoother.  You should remove them before moving into a
    production environment.

    Remove anonymous users? [Y/n] y
     ... Success!

    Normally, root should only be allowed to connect from 'localhost'.  This
    ensures that someone cannot guess at the root password from the network.

    Disallow root login remotely? [Y/n] y
     ... Success!

    By default, MySQL comes with a database named 'test' that anyone can
    access.  This is also intended only for testing, and should be removed
    before moving into a production environment.

    Remove test database and access to it? [Y/n] y
     - Dropping test database...
    ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
     ... Failed!  Not critical, keep moving...
     - Removing privileges on test database...
     ... Success!

    Reloading the privilege tables will ensure that all changes made so far
    will take effect immediately.

    Reload privilege tables now? [Y/n] y
     ... Success!

    Cleaning up...

    All done!  If you've completed all of the above steps, your MySQL
    installation should now be secure.

    Thanks for using MySQL!


    Let’s test the server and make sure PHP is working properly. Using nano, create the file “info.php” in the “/var/www” directory:

    nano /var/www/info.php


    Add in the following lines:

    <?php
    phpinfo();
    ?>

    Now, open a web browser and type in the server’s IP address and the info page (e.g. http://your-server-ip/info.php). You should see the PHP configuration page.

    Now let’s get Cacti installed.

    apt-get install cacti cacti-spine

    Make sure to let the installer know that you’re using Apache2 as your HTTP server.

    Also, you’ll need to let the installer “Configure database for cacti with dbconfig-common”. Say yes!

    After apt is done installing your software, you’ll have to finish the install from a web browser.


    After answering a couple very easy questions, you’ll be finished and presented with a login screen.

    The default credentials for cacti are “admin:admin”

    From there you can log in and start populating your server with all the devices that you want to monitor. It’s that easy.





    Now, let’s get Nagios installed. Again, it’s really easy: I just install everything Nagios-related (don’t forget the asterisk after “nagios”):

    apt-get install nagios*

    This is what it will look like:

    root@debiantest:/root# apt-get install nagios*
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    Note, selecting 'nagios-nrpe-plugin' for regex 'nagios*'
    Note, selecting 'nagios-nrpe-doc' for regex 'nagios*'
    Note, selecting 'nagios-plugins-basic' for regex 'nagios*'
    Note, selecting 'check-mk-config-nagios3' for regex 'nagios*'
    Note, selecting 'nagios2' for regex 'nagios*'
    Note, selecting 'nagios3' for regex 'nagios*'
    Note, selecting 'nagios-snmp-plugins' for regex 'nagios*'
    Note, selecting 'uwsgi-plugin-nagios' for regex 'nagios*'
    Note, selecting 'ndoutils-nagios3-mysql' for regex 'nagios*'
    Note, selecting 'nagios-plugins' for regex 'nagios*'
    Note, selecting 'gosa-plugin-nagios-schema' for regex 'nagios*'
    Note, selecting 'nagios-nrpe-server' for regex 'nagios*'
    Note, selecting 'nagios-plugin-check-multi' for regex 'nagios*'
    Note, selecting 'nagios-plugins-openstack' for regex 'nagios*'
    Note, selecting 'libnagios-plugin-perl' for regex 'nagios*'
    Note, selecting 'nagios-images' for regex 'nagios*'
    Note, selecting 'pnp4nagios-bin' for regex 'nagios*'
    Note, selecting 'nagios3-core' for regex 'nagios*'
    Note, selecting 'libnagios-object-perl' for regex 'nagios*'
    Note, selecting 'nagios-plugins-common' for regex 'nagios*'
    Note, selecting 'nagiosgrapher' for regex 'nagios*'
    Note, selecting 'nagios' for regex 'nagios*'
    Note, selecting 'nagios3-dbg' for regex 'nagios*'
    Note, selecting 'nagios3-cgi' for regex 'nagios*'
    Note, selecting 'nagios3-common' for regex 'nagios*'
    Note, selecting 'nagios3-doc' for regex 'nagios*'
    Note, selecting 'pnp4nagios' for regex 'nagios*'
    Note, selecting 'pnp4nagios-web' for regex 'nagios*'
    Note, selecting 'ndoutils-nagios2-mysql' for regex 'nagios*'
    Note, selecting 'nagios-plugins-contrib' for regex 'nagios*'
    Note, selecting 'ndoutils-nagios3' for regex 'nagios*'
    Note, selecting 'nagios-plugins-standard' for regex 'nagios*'
    Note, selecting 'gosa-plugin-nagios' for regex 'nagios*'
    The following extra packages will be installed:
      autopoint dbus fonts-droid fonts-liberation fping freeipmi-common freeipmi-tools gettext ghostscript git git-man gosa gsfonts imagemagick-common libavahi-client3 libavahi-common-data libavahi-common3 libc-client2007e
      libcalendar-simple-perl libclass-accessor-perl libclass-load-perl libclass-singleton-perl libconfig-tiny-perl libcroco3 libcrypt-smbhash-perl libcups2 libcupsimage2 libcurl3 libcurl3-gnutls libdata-optlist-perl libdate-manip-perl
      libdatetime-locale-perl libdatetime-perl libdatetime-timezone-perl libdbus-1-3 libdigest-hmac-perl libdigest-md4-perl libencode-locale-perl liberror-perl libfile-listing-perl libfont-afm-perl libfpdf-tpl-php libfpdi-php
      libfreeipmi12 libgd-gd2-perl libgd2-xpm libgettextpo0 libgomp1 libgs9 libgs9-common libhtml-form-perl libhtml-format-perl libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl libhttp-cookies-perl libhttp-daemon-perl
      libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl libice6 libijs-0.35 libio-pty-perl libio-socket-ip-perl libio-socket-ssl-perl libipc-run-perl libipmiconsole2 libipmidetect0 libjansson4 libjasper1 libjbig0 libjbig2dec0
      libjpeg8 libjs-jquery-ui libkohana2-php liblcms2-2 liblist-moreutils-perl liblqr-1-0 libltdl7 liblwp-mediatypes-perl liblwp-protocol-https-perl liblwp-useragent-determined-perl libmagickcore5 libmagickwand5 libmail-imapclient-perl
      libmailtools-perl libmath-calc-units-perl libmath-round-perl libmcrypt4 libmemcached10 libmodule-implementation-perl libmodule-runtime-perl libnet-dns-perl libnet-http-perl libnet-ip-perl libnet-libidn-perl libnet-smtp-tls-perl
      libnet-snmp-perl libnet-ssleay-perl libodbc1 libpackage-deprecationmanager-perl libpackage-stash-perl libpackage-stash-xs-perl libpaper-utils libpaper1 libparams-classify-perl libparams-util-perl libparams-validate-perl
      libparse-recdescent-perl libpgm-5.1-0 libpq5 libradiusclient-ng2 libreadonly-perl libreadonly-xs-perl librecode0 librrds-perl librtmp0 libruby1.9.1 libslp1 libsm6 libsocket-perl libssh2-1 libsub-install-perl libsub-name-perl
      libsystemd-login0 libtalloc2 libtdb1 libtiff4 libtimedate-perl libtry-tiny-perl libunistring0 liburi-perl libwbclient0 libwww-perl libwww-robotrules-perl libxpm4 libxt6 libyaml-0-2 libyaml-syck-perl libzmq1 mlock ndoutils-common
      perlmagick php-fpdf php5-curl php5-gd php5-imagick php5-imap php5-ldap php5-mcrypt php5-recode poppler-data python-httplib2 python-keystoneclient python-pkg-resources python-prettytable qstat rsync ruby ruby1.9.1 samba-common
      samba-common-bin slapd smarty3 smbclient ttf-liberation uwsgi-core x11-common
    Suggested packages:
      dbus-x11 freeipmi-ipmidetect freeipmi-bmc-watchdog gettext-doc ghostscript-cups ghostscript-x hpijs git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb gosa-si-server
      cyrus21-imapd postfix-ldap gosa-schema php5-suhosin php-apc uw-mailutils cups-common libgd-tools libdata-dump-perl libjasper-runtime libjs-jquery-ui-docs libkohana2-modules-php liblcms2-utils libcrypt-ssleay-perl
      libmagickcore5-extra libauthen-sasl-perl libmcrypt-dev mcrypt libio-socket-inet6-perl libcrypt-des-perl libmyodbc odbc-postgresql tdsodbc unixodbc-bin libscalar-number-perl slpd openslp-doc libauthen-ntlm-perl backuppc perl-doc
      cciss-vol-status expect ndoutils-doc imagemagick-doc ttf2pt1 rrdcached libgearman-client-perl libcrypt-rijndael-perl poppler-utils fonts-japanese-mincho fonts-ipafont-mincho fonts-japanese-gothic fonts-ipafont-gothic
      fonts-arphic-ukai fonts-arphic-uming fonts-unfonts-core python-distribute python-distribute-doc ri ruby-dev ruby1.9.1-examples ri1.9.1 graphviz ruby1.9.1-dev ruby-switch ldap-utils cifs-utils nginx-full cherokee libapache2-mod-uwsgi
      libapache2-mod-ruwsgi uwsgi-plugins-all uwsgi-extra
    The following NEW packages will be installed:
      autopoint check-mk-config-nagios3 dbus fonts-droid fonts-liberation fping freeipmi-common freeipmi-tools gettext ghostscript git git-man gosa gosa-plugin-nagios gosa-plugin-nagios-schema gsfonts imagemagick-common libavahi-client3
      libavahi-common-data libavahi-common3 libc-client2007e libcalendar-simple-perl libclass-accessor-perl libclass-load-perl libclass-singleton-perl libconfig-tiny-perl libcroco3 libcrypt-smbhash-perl libcups2 libcupsimage2 libcurl3
      libcurl3-gnutls libdata-optlist-perl libdate-manip-perl libdatetime-locale-perl libdatetime-perl libdatetime-timezone-perl libdbus-1-3 libdigest-hmac-perl libdigest-md4-perl libencode-locale-perl liberror-perl libfile-listing-perl
      libfont-afm-perl libfpdf-tpl-php libfpdi-php libfreeipmi12 libgd-gd2-perl libgd2-xpm libgettextpo0 libgomp1 libgs9 libgs9-common libhtml-form-perl libhtml-format-perl libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl
      libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl libice6 libijs-0.35 libio-pty-perl libio-socket-ip-perl libio-socket-ssl-perl libipc-run-perl libipmiconsole2 libipmidetect0
      libjansson4 libjasper1 libjbig0 libjbig2dec0 libjpeg8 libjs-jquery-ui libkohana2-php liblcms2-2 liblist-moreutils-perl liblqr-1-0 libltdl7 liblwp-mediatypes-perl liblwp-protocol-https-perl liblwp-useragent-determined-perl
      libmagickcore5 libmagickwand5 libmail-imapclient-perl libmailtools-perl libmath-calc-units-perl libmath-round-perl libmcrypt4 libmemcached10 libmodule-implementation-perl libmodule-runtime-perl libnagios-object-perl
      libnagios-plugin-perl libnet-dns-perl libnet-http-perl libnet-ip-perl libnet-libidn-perl libnet-smtp-tls-perl libnet-snmp-perl libnet-ssleay-perl libodbc1 libpackage-deprecationmanager-perl libpackage-stash-perl
      libpackage-stash-xs-perl libpaper-utils libpaper1 libparams-classify-perl libparams-util-perl libparams-validate-perl libparse-recdescent-perl libpgm-5.1-0 libpq5 libradiusclient-ng2 libreadonly-perl libreadonly-xs-perl librecode0
      librrds-perl librtmp0 libruby1.9.1 libslp1 libsm6 libsocket-perl libssh2-1 libsub-install-perl libsub-name-perl libsystemd-login0 libtalloc2 libtdb1 libtiff4 libtimedate-perl libtry-tiny-perl libunistring0 liburi-perl libwbclient0
      libwww-perl libwww-robotrules-perl libxpm4 libxt6 libyaml-0-2 libyaml-syck-perl libzmq1 mlock nagios-images nagios-nrpe-plugin nagios-nrpe-server nagios-plugin-check-multi nagios-plugins nagios-plugins-basic nagios-plugins-common
      nagios-plugins-contrib nagios-plugins-openstack nagios-plugins-standard nagios-snmp-plugins nagios3 nagios3-cgi nagios3-common nagios3-core nagios3-dbg nagios3-doc nagiosgrapher ndoutils-common ndoutils-nagios3-mysql perlmagick
      php-fpdf php5-curl php5-gd php5-imagick php5-imap php5-ldap php5-mcrypt php5-recode pnp4nagios pnp4nagios-bin pnp4nagios-web poppler-data python-httplib2 python-keystoneclient python-pkg-resources python-prettytable qstat rsync ruby
      ruby1.9.1 samba-common samba-common-bin slapd smarty3 smbclient ttf-liberation uwsgi-core uwsgi-plugin-nagios x11-common
    0 upgraded, 196 newly installed, 0 to remove and 0 not upgraded.
    Need to get 81.9 MB of archives.
    After this operation, 272 MB of additional disk space will be used.
    Do you want to continue [Y/n]?



    Now to test, just login at http://your-server-ip/nagios3/

    You’ll have to look up tutorials on configuring Nagios and Cacti. Of the two, Cacti is much easier because it’s all web based. But Nagios isn’t too difficult once you get used to playing around with config files.

    One last thing I did was set up a landing page that points at the services. To do that, just edit the index.html file in your www folder like this:

    root@testdebian:/etc/nagios3/conf.d/hosts# cat /var/www/index.html
    <html><body><h1>TEST Monitoring Server</h1>
    <p>This is the landing page for the TEST Monitoring server.</p>
    <p>Please use the following links to access services:</p>
    <p><a href="/nagios3"> 1. Nagios</a></p>
    <p><a href="/cacti"> 2. Cacti</a></p>
    </body></html>
    Now you can browse to the IP address and get an easy-to-use page that will forward you to whichever service you want!

    Let me know if you have any questions!


    Creating a Reverse Proxy with Apache2

    Sometimes there is a need for hosting multiple websites from one server, or from one external IP address. For whatever your reason or need is, in this tutorial, I’ll just go through what I did to setup Apache server to forward requests.

    In my setup here, I have a Debian Wheezy server in my DMZ, and in my tier 2 DMZ I have 5 web servers. My objective is to host all these servers from one IP address, and introduce some security.

    I found a ton of info out there on setting up Apache as a reverse proxy, but none of it really spelled out exactly what to do and what the results would be. Some came close, but it wasn’t what I was looking for. So I took a bunch of things I saw others doing, modified them to fit my needs, and am reporting back to you. I hope this helps.

    Let’s get started.

    You’ll want a base install of Debian Wheezy. After you download that, just follow my guide for the install if you need it: Debian Minimal Install: The base for all operations.

    As I stated before, I have a bunch of web servers in my tier 2 DMZ, and a Debian box in my Internet facing DMZ. It is my intention that the web servers never actually communicate with the end users. I want my end users to talk to my Debian box, the Debian box to sanitize and optimize the web request, and then forward that request on to the web server. The web server will receive the request from the Debian box, process it, and send back all the necessary data to the Debian server, which will in turn reply to the end user who originally made the request.

    It sounds complicated to some people, but in reality it’s pretty simple, and the reverse proxy is transparent to the end user. Most people out there don’t even realize that many sites out there utilize this type of technology.

    My Debian server needs some software, so I installed these packages:

    sudo apt-get install apache2 libapache2-mod-evasive libapache2-mod-auth-openid libapache2-mod-geoip \
    libapache2-mod-proxy-html libapache2-mod-spamhaus libapache2-mod-vhost-hash-alias libapache2-modsecurity

    From here you’ll want to get into the Apache directory.

    cd /etc/apache2

    Let’s get going with editing the main Apache config file. These are just recommendations, so you’ll want to tweak them for whatever is best for your environment.

    sudo vim apache2.conf

    I modified my connections for performance reasons. The default is 100.

    # MaxKeepAliveRequests: The maximum number of requests to allow
    # during a persistent connection. Set to 0 to allow an unlimited amount.
    # We recommend you leave this number high, for maximum performance.
    MaxKeepAliveRequests 500

    Also, what security engineer out there doesn’t know that without logs you have no proof that anything happened? We’ll cover log rotation and retention in another blog, but for now, I set my logging to “notice”. The default was “warn”.

    # LogLevel: Control the number of messages logged to the error_log.
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel notice

    Perfect. Now, you may want to tweak your server a little differently, but for now this is all we need for here.

    Now let’s get into some security hardening of the server.

    sudo vim /etc/apache2/conf.d/security

    We do have security in mind, so let’s not divulge any information that we don’t need to. Set “ServerTokens Prod”

    # ServerTokens
    # This directive configures what you return as the Server HTTP response
    # Header. The default is 'Full' which sends information about the OS-Type
    # and compiled in modules.
    # Set to one of:  Full | OS | Minimal | Minor | Major | Prod
    # where Full conveys the most information, and Prod the least.
    #ServerTokens Minimal
    #ServerTokens OS
    #ServerTokens Full
    ServerTokens Prod

    Now let’s set “ServerSignature Off”

    # Optionally add a line containing the server version and virtual host
    # name to server-generated pages (internal error documents, FTP directory
    # listings, mod_status and mod_info output etc., but not CGI generated
    # documents or custom error documents).
    # Set to "EMail" to also include a mailto: link to the ServerAdmin.
    # Set to one of:  On | Off | EMail
    #ServerSignature On
    ServerSignature Off

    And lastly, go ahead and uncomment these three lines in your config. We’ll configure “mod_headers” later.

    Header set X-Content-Type-Options: "nosniff"

    Header set X-XSS-Protection: "1; mode=block"

    Header set X-Frame-Options: "sameorigin"

    Sweet, looking good. Go ahead and save that, and we can get “mod_headers” activated. First, I’d like to point out that you can view which modules are enabled by using the “a2dismod” program. Simply enter the command, and it will ask you which modules you’d like to disable; obviously, if you see a module in the list, it’s already enabled. Just hit “Ctrl+C” to stop the program.

    To enable a module in Apache, you first need to make sure it’s installed; then you can just use the “a2enmod” program… like this:

    sudo a2enmod headers

    Now that we’ve enabled “mod_headers”, let’s verify we have the other necessary modules enabled as well.

    steve @ reverseproxy ~ :) ᛤ>   a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Enabling module cache.
    Could not create /etc/apache2/mods-enabled/cache.load: Permission denied
    steve @ reverseproxy ~ :( ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Enabling module cache.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Considering dependency proxy for proxy_ajp:
    Module proxy already enabled
    Enabling module proxy_ajp.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Considering dependency proxy for proxy_balancer:
    Module proxy already enabled
    Enabling module proxy_balancer.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Considering dependency proxy for proxy_connect:
    Module proxy already enabled
    Enabling module proxy_connect.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Considering dependency proxy for proxy_ftp:
    Module proxy already enabled
    Enabling module proxy_ftp.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Considering dependency proxy for proxy_http:
    Module proxy already enabled
    Enabling module proxy_http.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Enabling module rewrite.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Enabling module vhost_alias.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    Enabling module vhost_hash_alias.
    To activate the new configuration, you need to run:
      service apache2 restart

    Here is a list of the modules I just enabled:
    cache proxy_ajp proxy_balancer proxy_connect proxy_ftp proxy_http rewrite vhost_alias vhost_hash_alias

    (You can also enable them all in one shot: “sudo a2enmod cache proxy_ajp proxy_balancer proxy_connect proxy_ftp proxy_http rewrite vhost_alias vhost_hash_alias”.)

    Now let’s just restart Apache, and keep going.

    steve @ reverseproxy ~ :) ᛤ>   sudo service apache2 restart
    [ ok ] Restarting web server: apache2 ... waiting .

    Perfect, moving right along… Now what we need to do is set up a new file in the “/etc/apache2/sites-available” directory. I named mine “reverseproxy”, as it’s easy to figure out what it is.

    Now, to correctly setup your reverse proxy, this server should not be hosting ANY websites. This is a proxy server, not a web host. So go ahead and delete the config sym link for the default website. We don’t want to host that.

    sudo rm /etc/apache2/sites-enabled/000-default

    Now we can edit our “reverseproxy” file.

    sudo vim /etc/apache2/sites-available/reverseproxy

    #enter this code into your file

    <VirtualHost *:80>
      ProxyPreserveHost On
      ProxyPass /
      ProxyPassReverse /
      <Proxy *>
            Order allow,deny
            Allow from all
      </Proxy>
      ErrorLog /var/log/apache2/
      CustomLog /var/log/apache2/ combined
    </VirtualHost>

    <VirtualHost *:80>
      ProxyPreserveHost On
      ProxyPass /
      ProxyPassReverse /
      <Proxy *>
            Order allow,deny
            Allow from all
      </Proxy>
      ErrorLog /var/log/apache2/
      CustomLog /var/log/apache2/ combined
    </VirtualHost>

    <VirtualHost *:80>
      ProxyPreserveHost On
      ProxyPass /
      ProxyPassReverse /
      <Proxy *>
            Order allow,deny
            Allow from all
      </Proxy>
      ErrorLog /var/log/apache2/
      CustomLog /var/log/apache2/ combined
    </VirtualHost>
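    The backend host names didn’t survive in the listing above, so here is what one complete entry might look like, for reference. The server name “www.example.com” and the backend address “10.0.2.10” are purely hypothetical placeholders; substitute your own tier 2 hosts:

```apache
<VirtualHost *:80>
  # Site this entry answers for (hypothetical placeholder name)
  ServerName www.example.com
  # Pass the client's original Host: header through to the backend
  ProxyPreserveHost On
  # Forward all requests to the tier 2 web server (hypothetical address)
  ProxyPass / http://10.0.2.10/
  ProxyPassReverse / http://10.0.2.10/
  <Proxy *>
        Order allow,deny
        Allow from all
  </Proxy>
  ErrorLog /var/log/apache2/www.example.com-error.log
  CustomLog /var/log/apache2/www.example.com-access.log combined
</VirtualHost>
```

    Repeat one block per site, each pointing at its own backend server.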

    Awesome, now save that file and we can get it enabled. Just like setting up new modules, we’re going to sym-link our new file to the “sites-enabled” folder.

    sudo ln -s /etc/apache2/sites-available/reverseproxy /etc/apache2/sites-enabled

    Now we can just reload the Apache server (no restart required) so that it picks up the new settings.

    sudo service apache2 reload

    Now we need to edit the /etc/hosts file so that our reverse proxy server knows where to push site traffic on our DMZ. So let’s do that:

    127.0.0.1       localhost
    <your-proxy-ip> reverseproxy.internal.dmz  reverseproxy

    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
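    For reference, a filled-in version of that hosts file might look like the following; the 10.0.x.x addresses and the example.com names here are hypothetical placeholders for your own DMZ addressing:

```
127.0.0.1       localhost
10.0.1.5        reverseproxy.internal.dmz  reverseproxy

# Tier 2 web servers behind the proxy (hypothetical)
10.0.2.10       www.example.com
10.0.2.11       blog.example.com

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```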

    Sweet, all done!
    Now you can test from a computer that all your sites are working. They *should* be! 🙂

    I’ll work on a blog eventually to show how to enable mod_security with this setup so that we can sanitize user interaction with our site. Our visitors are probably good people, but attackers and skiddies are always out there trying to damage stuff.

    Thanks for reading!!



    Intro to Linux: File Systems, Permissions, and Hardware Fundamentals

    Hello again everyone. So, for the past few years now I’ve really been getting more and more into working with Linux. I know that’s a broad statement… Linux is on just about every device you see these days: mobile phones, computers, laptops, tablets, supercomputers, refrigerators, cars, custom motorcycles… everything! And how many different distros are there? Hundreds!

    I won’t start any debates on how good or bad Linux is as a whole, or how Linux is as an overall Operating System… but I will go into how to use it, understand it, and operate it. This is the first part of many blogs I’ll be posting about how to use Linux, and we’ll start here with the file system. The reason why we’ll start with the file system is that it’s really the basis for everything you’ll be doing in Linux. I say that because Linux thinks everything is a file. Devices, files, folders… everything is a file. And everything can be referenced (pretty much) from the command line.

    It was a toss up for me on whether to start with this or my next blog (the Bash shell). I mean, literally everything you do in Linux requires the file system or Bash, or both, to complete any task. But we’ll start here and build up from there.




    Before we really get going, I’ll need you to start the “Terminal” program on your Linux machine. If you’re on Red Hat (or one of its derivatives) it may look something like this:


    What is a File System?

    We won’t touch on any Operating System other than Linux here. Strictly speaking, the Linux kernel really is the Operating System, but we’ll cover that in my third blog (Understanding the Linux Kernel and Processes). For now just realize that the Linux Kernel is the underlying Operating System that allows data to be pulled from the local hard drive, which hosts a file system, and run as a process.

    From here on out when I refer to the Linux file system, I’ll be talking mainly about the EXT3 and EXT4 file systems. Don’t worry about what that means for now, we’ll cover that later.

    The base directory in Linux is referred to as the “root” directory of the file system and is generally expressed in text as a simple forward slash: “/”. Every file and folder from here down is referred to as part of the “directory tree”.

    To see all the objects in any directory, you can use the Bash shell command “ls”, which is short for “list”. This is what it looks like when you list the contents of my current Red Hat home directory:

    This command was run with no “arguments”; it is just a simple command asking for a listing of the files and folders that are in my “home” folder.

    This next picture is a screenshot of what the exact same folder looks like with 3 command line arguments. The ‘a’ is for “all” (including hidden stuff), the ‘l’ is for a “long” listing, and the ‘h’ says that we want to see this in a “human” readable format.

    As you can see, there is a lot more information here, specifically a lot of detail stored by the file system. We’ll start with the easy parts of this output, looking at the file “.bash_logout” for this example.

    -rw-r--r--  1 serdman83 serdman83  176 Jan 27  2011 .bash_logout

    The leading “-rw-r--r--” shows the file type and permissions: the first character (“-”) means a regular file, and the three groups that follow are the read/write/execute permissions for the owner, the group, and everyone else. The first serdman83 is my username, and the second one is the group. These represent the user and group that own the file. It is very important to understand that the user and group are two very different things. The username is tied directly to me, which you can probably figure out, and the group is my primary group. Your primary group is normally the same as your username (except in rare cases). We’ll talk more about that in another blog.

    The number “176” is a dynamic number: it’s the size of the file in bytes, and this file is quite small. The date shown (Jan 27, 2011) is when the file was last modified, and obviously, the file name is “.bash_logout”. We’ll talk more about this file later, but try to remember what this “long” listing means.
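    If you’d like to reproduce a listing like that yourself, here’s a tiny session on a scratch file (the /tmp path and file name are just examples):

```shell
# Create an empty scratch file and give it the same permissions as above
touch /tmp/example_file
chmod 644 /tmp/example_file    # 644 is the numeric form of rw-r--r--
# The long listing starts with the permission string, then owner and group
ls -l /tmp/example_file
```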

    All of this information, plus more, is all stored in the file system.



    What is a terminal? According to Wikipedia, the definition of a terminal is, “The Linux console is a system console support in the Linux kernel. The Linux console was the first functionality of the kernel, developed as early as in 1991 (see history of Linux). On PC architecture, it is common to use VGA-compatible video hardware. Implementations of computer graphics on Linux are excluded from the scope of this article. Linux console, like pure text mode, uses monospace fonts.”

    What’s all this mean? You’ve already seen a Linux terminal above. The Linux terminal is what you see when you’re working with the command line. It’s how you open files, view directories, run programs, etc…

    But how do you use a terminal? Well, I’m going to cover that for you real quick here. It’s not going to be horribly in-depth, but we’ll do a 5 second intro.

    The first and most important thing to remember is that if you get stuck in a terminal that you’ve made a bunch of changes to and it’s not working right, you can always just enter the “reset” command to return it to normal behavior.

    There are also Control Sequences that can be passed to the Bash shell. These are almost always entered with a key combination that includes the “Ctrl” (Control) key. We’ll cover a handful of the more popular control sequences that you’ll find yourself using.

    Ctrl + C = Probably the most used control sequence you’ll use. This will terminate almost any program that is currently running. There are programs that are setup to ignore this sequence, but just remember that the vast majority of programs do not.

    Ctrl + D = If you’ve entered a command that didn’t work, it may still be waiting for you to complete your input. Try a Ctrl+D to see if that completed the input. This control sequence is used to force complete a user input.

    Ctrl + H = If, for some reason, your backspace key isn’t working, you can use the old Ctrl+H for single character backspace.

    Ctrl + J = This command is an alternative to using the RETURN key. It’s just another way to perform a line feed.

    Ctrl + L = This does the same thing as the “clear” command. It will clear or refresh the screen for you.

    Ctrl + U = There are some commands that you can type into the Bash Terminal that can be very long. If for instance you realize you don’t need that command anymore, you can Ctrl+U to erase the whole line.

    Ctrl + Z = This sequence is used for suspending a program. We’ll be talking about this later. If you suspend a program, you haven’t terminated it, it’s still running in the background.


    Navigating the Filesystem

    The Linux filesystem is actually a really simple concept. Every shell, or terminal, has a current working directory. The current working directory (cwd) is, loosely, “where am I right now”. Wouldn’t it be nice to see where you’re at, though? Well, you’re in luck, because Linux has come a long way. While most traditional versions of Linux were totally command line driven, modern versions rival Windows and OSX in spectacular fashion. Without getting into too much detail about what a window manager is, there are versions that are just as easy to work with as Windows XP, Windows 7, and Apple’s OSX.

    In various versions of Linux, there are, I would say 3 main Window Managers. Gnome (and various forks of the original Gnome), KDE, and (Unity) only because of Ubuntu. There are many others out there, like XFCE, Enlightenment, Fluxbox, and LXDE, but we won’t be getting into those in these blogs. I personally like Gnome, probably because that’s what I started with so many years ago.

    Here’s what some of them look like:
    Gnome 2.28
    KDE 4.x
    Ubuntu Unity

    But in staying with the command line for the time being, you can view the directory tree quite easily. Below, I have shown what the output of the “tree” command looks like. For a text based output it shows you quite nicely what your directory structure looks like from your Current Working Directory. In this case my “cwd” was my home folder.

    From the command line I can see that the “Desktop”, “Downloads”, “Documents”, and other folders are in my cwd. To go into those folders I can just “cd”, or Change Directory, and tell the Bash Shell, my Terminal, to go there. Like this:

    In there, I've introduced two new commands, "cd" and "pwd", and I've also shown that Linux is CaSe SeNsItIvE… If I had typed "cd documents", the command would have failed because there is no "documents" folder. The folder is named "Documents". The cd command tells the Bash Shell that I want to move into a new directory, and "Documents/" is the directory I want to move into. "Documents/" is the command line 'argument' that I provided to the program "cd". You can issue the "cd" command with no arguments as well; it will take you from wherever you are back to your home directory. The "pwd" command is another program whose sole purpose is to "P"rint the "W"orking "D"irectory. The "pwd" command takes no arguments.
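    If you want to try the same thing without touching your real folders, here's a minimal sketch of that session (the "/tmp/cwd-demo" path is made up just for the example):

```shell
# Create a hypothetical directory tree to play in
mkdir -p /tmp/cwd-demo/Documents
cd /tmp/cwd-demo          # absolute path
cd Documents              # relative path; "cd documents" would fail (case-sensitive)
pwd                       # prints /tmp/cwd-demo/Documents
cd                        # no argument: back to your home directory
```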

    This also brings to mind the relative and absolute paths that you can see in that screenshot. There is an absolute path, "/home/serdman83/Documents", and a relative path, "Documents/". The absolute path is exactly where an object is located starting from the root of the system. In this case the root of the system, as in all Linux systems, is "/". So my Documents directory is located at "/home/serdman83/Documents". Because I was already in my home directory, "/home/serdman83/", I can tell the shell to move into the "Documents/" directory because, relative to my "cwd", "Documents" is a sub folder.


    Other Important Directories

    Let's talk about other important directories in the filesystem. We'll start with the Root of the system, which is represented by a simple "/". Root, not the username but the directory, is the holding section of the entire system. Everything you see will always come from the root directory because there is no directory higher than the root directory. If you do a listing of the root directory, this is very similar to what you'll see:

    As you can see there are a lot of folders in here. Let's talk about some of them. The first ones we'll talk about are the "/bin" and "/sbin" directories. These directories are special because they hold almost all of the programs that run on a computer. The "/bin" folder holds the programs that normal computer users use, such as "ls", "pwd", etc… We'll definitely cover more later. The "/bin" folder is supplemented by the "/usr/bin" folder, which holds other programs that normal users can run. Any program that runs with no elevated rights can be put in these two folders. On the other hand, the "/sbin" folder holds programs that only the root user can run. It is supplemented by the "/usr/sbin" folder, which holds many such programs as well.

    To make this easier, just remember that the “/bin” and “/usr/bin” folders hold programs that generally any user should be able to run. And the “/sbin” and the “/usr/sbin” folders hold programs that require elevated rights (such as the Root account) to run.
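    You can see for yourself where a given program actually lives with "command -v" (the exact paths vary by distribution, so the locations in the comments are only typical):

```shell
command -v ls        # usually /bin/ls or /usr/bin/ls (a normal-user program)
command -v passwd    # usually /usr/bin/passwd
command -v init 2>/dev/null || true   # admin tools tend to live in /sbin or /usr/sbin
```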

    The next folder is the “/boot” folder. Generally, you’ll almost never go into this directory and store files. This directory holds information for booting the machine. In every Red Hat and Debian based machine I’m aware of, this folder holds the information for the Linux Kernel, the RAM Drive and some other configuration files such as the Grub boot loader.

    "/dev" is the next directory we'll talk about. Similar to "/boot", you'll never save anything in this folder either. The purpose of this directory is to hold all the information about every single device that is attached to your computer. We'll talk much more about this in another blog.

    The next directory is “/etc”. This directory holds all the configuration data for all the programs and software that are installed on your Linux machine. You’ll most likely use this folder frequently if you’re planning on making changes to the way any software runs on your computer. Every single thing that runs or is installed in Linux can be modified with a configuration file. Windows is tied to the Windows Registry and the “C:\Windows\” directory for everything, while Linux uses the “/etc” and “/var” directories. There is no Linux Registry for security and stability reasons, but there are plenty of configuration files that offer the same functionality. We’ll be touching on this much more in the future blogs here.

    Quickly, I’ll touch on the “/lib” directory, which holds all the library files on your computer. Any software that requires extra software libraries will be calling some file that resides in here.

    The “/mnt” and “/media” folders are similar just because when Linux mounts a folder, network share, local USB drive, or CD/DVD drive, you’ll most likely find it in one of these two folders. If you’re virtualizing your Linux install, and you’re sharing folders with your host machine, those folders will appear in one of these two directories as well.

    Next is "/tmp". Just as you would expect, this directory is for temporary files. By default on many distributions, any file that is put in here has a life span of 10 days. More accurately, if a file hasn't been accessed in 10 days or more, that file will be deleted. So if I create a file today, and don't touch it for 10 days, Linux will automatically delete it after that. This is also the one directory that every user has rights to write to. By default, all other folders in a Linux system can only be written to by the Root user, with the exception of each user's personal home directory and the "/tmp" directory.

    The last directory we’ll cover here is the “/var” directory. If you’re hosting a web server, it’ll be in here. Your system mail is delivered here. Many things happen in this directory. You’ll find that many configuration files are also in here, but there are also log files, news group information (if it’s setup), ftp files that your machine is hosting and many other things too. We’ll talk much more on this in other blogs.

    So you’re thinking, “dude, this is so boring, when are we going to get to the fun stuff?” And here’s my answer: “We’re there, you just don’t know it yet.”

    All of this stuff is the core building blocks of Linux. If you understand this stuff at a good level, you’ll be so much better off using Linux in the long run.


    How to Manage Files and Directories

    We'll touch first on redirection of output. The thing to remember here is that output in Linux defaults to the console. To redirect output you use the greater-than sign (">"); the less-than sign ("<") does the opposite and feeds a file into a program's input. Here you can see that I've run the "pwd" command to print my working directory. I've then redirected my output to the pwd.txt file. Then I used "cat" (short for concatenate) to print the pwd.txt file back out to the screen. While this doesn't seem to be that important, you'll surely find it useful down the road.
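    The same steps can be reproduced like this (done in "/tmp" so nothing in your home folder gets touched):

```shell
cd /tmp
pwd > pwd.txt     # ">" sends the output to a file instead of the screen
cat pwd.txt       # cat prints the file's contents back out
cat < pwd.txt     # "<" feeds the file into a program's input instead
```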


    We can now try to copy that file to a new directory. But first let’s create the new directory, then we’ll copy the file into it.


    Let’s cover copying directories while we’re talking about copying.


    As you can see, I listed my “newdir/”, then I copied my “newdir/” to another folder named, “newdir2/” and then I listed my current working directory recursively.
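    The screenshots above boil down to a few commands; here's a self-contained sketch in "/tmp" that mirrors them:

```shell
cd /tmp
pwd > pwd.txt             # the file we'll be copying around
mkdir -p newdir           # create the new directory
cp pwd.txt newdir/        # copy the file into it
cp -r newdir/ newdir2/    # -r (recursive) is required when copying a directory
ls -R newdir newdir2      # both directories now contain pwd.txt
```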

    Now that we have two copies of the same “pwd.txt” we can delete one of them. So let’s go over how to remove directories too. In order to remove a file you use the “rm” command.
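    A short sketch of removal (the setup lines just recreate the files so the example stands on its own):

```shell
cd /tmp
mkdir -p newdir2 && touch newdir2/pwd.txt   # setup for the example
rm newdir2/pwd.txt     # remove a single file
mkdir -p newdir2/sub
rm -r newdir2/         # -r removes a directory and everything inside it
```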

    What if you wanted to move a file instead of copying it? How about renaming a file? Well, Linux doesn’t have a rename command, you just move a file to a new name. Like this.
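    A minimal sketch of the rename-by-moving idea (again set up in "/tmp" so it's self-contained):

```shell
mkdir -p /tmp/newdir && cd /tmp/newdir
pwd > pwd.txt                  # setup file for the example
mv pwd.txt newpwd.txt          # "renaming" is just a move to a new name
mv newpwd.txt ..               # move the file up one directory
ls -alh ../newpwd.txt
```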

    Here you’ll see that I first Changed Directory (cd) into my “newdir/” directory, then I listed in Long format, all the files in human readable format of that directory. Then I moved the pwd.txt file into a new name (newpwd.txt). Following that I moved the newpwd.txt file back up one directory (to my home directory). Lastly, I showed what tab complete does by changing directories back to my home folder and issuing the “ls -alh” command again. But this time when I issued the “ls -alh” command, I typed the word “new” behind it, and pressed tab twice.


    I hope I didn't move too fast through that last screenshot. Changing directories backwards is easy because Linux understands two periods ".." as "go back one directory". And Tab complete is extremely useful because it will attempt to complete whatever it is you're typing. Try it in almost any command, at almost any time. You'll find it very useful.


    Don’t get too hasty in moving files around though. Be absolutely sure that what you are doing is exactly right. In Linux, there is no “undo” function. If you move a file to a directory that contains a file with the same name you can overwrite, or “clobber”, the original file with the one you moved.


    File Globbing and File Names

    Unlike Windows, Linux files can contain just about every character on the keyboard. If you wrap a file name in single quotes (‘ ‘) you can use any of these characters in a file: ‘!@#$%^&*()_+-=\|][}{:;?><,.~`' In doing that you can cause a nightmare for developers and users of files with those characters in them. So while you can technically use those chars, I really recommend NOT doing so. One special char that I want to touch on here is the period (.). The reason why is that, like Windows and Mac, there can be files that are "hidden". In Linux, you can't really hide a file. There are no Alternate Data Streams (ADS) in the Linux File System, so a “ls -alh” will show you every file in a folder. But if you want to “hide” a file from a regular “ls” command you can start the file name with a period. Files like “.bash_history”, “.bashrc” and folders like “.ssh/” (all of these should be in your home directory), are not visible with a plain “ls” command.
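    You can see the "hiding" behavior for yourself with a throwaway file (the name ".hiddenfile" is made up for the example):

```shell
cd /tmp
touch .hiddenfile      # a leading period "hides" the file
ls                     # .hiddenfile is not shown
ls -a                  # -a (all) shows it, along with "." and ".."
```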

    Now we'll talk about file globbing. This is a really simple concept, but you need to understand the ramifications of what you're doing. By using the asterisk (*), you can specify many files at the same time. And while we're at it, let's introduce "tab-complete" since they're pretty similar.

    Let’s see a screenshot of tab complete, then a screenshot of file globbing:

    So as you can see in the first picture, the tab complete helps because I know there is a folder that starts with “lab” but I’m not sure exactly what it is. So if I type “lab” and then hit the “Tab” key twice I can see what other files and folders start with the letters “lab”.

    The file globbing was nice because I wanted to move all the files and folders that start with “lab” into a folder called “all-labs”. I was able to do this as you can see.
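    Here's a self-contained sketch of that same move (the "lab" file names are invented for the example):

```shell
cd /tmp && mkdir -p glob-demo && cd glob-demo
touch lab1.txt lab2.txt lab3.txt   # hypothetical lab files
mkdir -p all-labs
mv lab* all-labs/                  # "*" matches every name starting with "lab"
ls all-labs/
```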

    File globbing is also nice to use when you have a folder with a ton of files in it and you’re looking for all the files that end in “.conf”. So to find them all, you could issue this command:

    ls -alh *.conf


    File Ownership

    Before we get too much further, let's cover how a file can be managed by permissions as well as ownership.

    Let’s first talk about Linux Users. All of the users for a system reside in the “/etc/passwd” file and in modern Linux (and UNIX) systems their passwords are managed in the “/etc/shadow” file. We’ll talk more about both of these files later, but you should at least know that these two files are extremely important.

    As you well know, with any computer system, you log on with a username. The /etc/passwd file holds all the information about the user. As you can see from this screenshot there is a standard format to the file as well.

    As you can see in the above screenshot, there are 7 columns in every single line item, and they are separated by colons (:). Let’s review these fields real quick.

    Field 1 is your username. Pretty straightforward.
    Field 2 is your password. But it isn't stored here. Remember, it's in the /etc/shadow file, and the "x" designates that.
    Field 3 is your user ID. When your account is created, you're assigned a unique number. While it can be changed, it's highly advisable not to.
    Field 4 is your primary group ID. This is normally the same number as your user ID, but it can be different for special circumstances.
    Field 5 is the GECOS field. It's deprecated, meaning it's not used anymore, but it needs to be there for backward compatibility. Normally it just holds the user's full name.
    Field 6 is for your home folder. It tells the Operating System where your home folder is located. For the VAST MAJORITY of the time, your folder will be a sub-directory of the "/home/" folder.
    Field 7 is the shell that you're assigned. Most of the time it's the bash shell, but on other systems it can be others. We'll talk about shells later.

    As a NOTE on Field 7, if you see that a user or service has the shell “/sbin/nologin”, that user’s account is basically disabled.
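    Since the fields are colon-separated, you can pick them apart with "cut". The line below is made up (it just reuses my username for the example), but the field positions are the real ones:

```shell
# A made-up /etc/passwd line; cut splits it on colons
line='serdman83:x:500:500:Steve E:/home/serdman83:/bin/bash'
echo "$line" | cut -d: -f1   # field 1: username
echo "$line" | cut -d: -f6   # field 6: home folder
echo "$line" | cut -d: -f7   # field 7: shell
```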


    As for the password field (field 2), whenever you change your password, it is stored in hashed form in the "shadow" file. You can change your password with the "passwd" command. See here:

    I cheated a bit, because I’m lazy and don’t feel like changing my password, but you would be prompted for your current password, then your new password, then your new password again (just to make sure that you didn’t fat finger it).


    I just mentioned that there are system accounts. There are actually three different types of accounts: Normal User, System User and the Root User. They are different, and the Root user has more privileges than any other user on the machine.

    Normal user accounts and groups usually start their UID and GID numbers above the number 500 (above 1000 on many newer distributions), service accounts are usually below that, and the Root user account is ALWAYS 0 (zero).


    Groups in Linux

    I mentioned groups and Group IDs above because part of the file permissions includes group permissions. Your user account will always be part of at least one group: your primary group. We talked about your primary group in the last section, but now we'll get into the secondary groups.

    All the users on the Linux system you're working on have the option of being placed into a secondary group, which is controlled by the "/etc/group" file. This file looks fairly similar to the "/etc/passwd" file, but it plays an entirely different role. Let's look at the "/etc/group" file and dig into what it does.

    As you can see above, there are a lot of groups on the system. In total on my test box, you can see 106 groups defined. The file itself, like the “/etc/passwd” file, is comprised of many fields. While the “/etc/passwd” file has 7 fields, the “/etc/group” file only has 4.

    Field 1 is the group name.
    Field 2 is the group password. This field is rarely ever used. It is normally filled with an "x" just as a placeholder.
    Field 3 is the Group ID, or GID. It’s always a whole integer value.
    Field 4 is a comma-separated list of the users who are members of the group; this is how secondary group memberships are recorded. Make sure this field always ends in a real username; a trailing "," can leave you with a broken group file.

    Overall, this is really all you need to know about Linux groups. It’s pretty easy, you’re either in a group, or you’re not.
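    As with "/etc/passwd", the colon-separated fields can be pulled apart with "cut". The group line below is invented (the "motorcycle" group from the next section, with made-up members):

```shell
# A made-up /etc/group line; field 4 lists the group's members
line='motorcycle:x:1050:serdman83,steve'
echo "$line" | cut -d: -f1   # field 1: group name
echo "$line" | cut -d: -f3   # field 3: GID
echo "$line" | cut -d: -f4   # field 4: member list
```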

    So what if you're not in a group that you want to be in? Let's say you want to be part of the "Motorcycle" group. First, if you have the password for the Root account, you need to log in as root, and then you can use the "usermod" and "groupmod" programs to modify your information.

    The “usermod” program is very powerful. We’ll only touch on what it can do for groups here; we’ll cover the rest of it as we move forward.

    The "usermod -g" will change the primary group membership for the user you're changing (remember, the primary group is stored in the "/etc/passwd" file). The "usermod -G" will take a comma separated list of group names and overwrite the secondary group memberships for whatever user you're referencing. And lastly, "usermod -a -G" (the "-a" only works together with "-G") will take a comma separated list of groups and APPEND them to the already existing secondary groups for the user you are changing.
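    The usermod commands need root, so they're shown here as comments with made-up user and group names; "id -Gn" is a safe way to check the result for your own account:

```shell
id -Gn                             # list the groups you're currently in
# The following require root; "steve" and the groups are hypothetical:
# usermod -g staff steve           # change steve's primary group
# usermod -G wheel,audio steve     # overwrite ALL of steve's secondary groups
# usermod -a -G motorcycle steve   # append one more secondary group
```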

    Not to get too in depth on this, we’ll move forward, but we’ll be back to this later.

    File Owners

    Now that you know what is needed about Users and Groups, let's talk about file ownership.

    If you look at a file with a long listing, you’ll see that same information I showed you before:

    serdman83 @ newstudent05 ~ :) ?> ls -alh
    total 62244
    drwxr-xr-x 11 serdman83 serdman83     4096 2013-04-22 12:27 ./
    drwxr-xr-x  3 serdman83 serdman83     4096 2011-12-15 14:47 ../
    -rw-------  1 serdman83 serdman83    24309 2013-06-14 17:23 .bash_history
    -rw-r--r--  1 serdman83 serdman83      220 2011-12-15 14:47 .bash_logout
    -rw-r--r--  1 serdman83 serdman83     3860 2012-11-09 15:16 .bashrc
    drwx------  2 serdman83 serdman83     4096 2011-12-15 16:02 .cache/

    As you can see, my username appears to be listed twice. That's actually not my username in the fourth column; it's my primary group name.

    Before we get to my username, let's look at the columns that are there.

    The first column is the file permissions. It specifies what the owner, group and other permissions are for the file. We’ll cover this more in a minute.
    The Second column is the number of hard links. We’ll get into file linking in a little bit as it is also very important.
    The third column is the file owner. This output shows that I am the file owner.
    The fourth column shows the group owner. In this case, my group is the owner of this file, but it could be changed to some other group.
    The fifth column is the size of the file in bytes.
    The Sixth and Seventh columns are the date and time the file was last modified.
    The Eighth and final column is the file name.


    User and Group Information

    We’re going to cover some commands here that will help you down the road for system administration. First off, we’ll discuss information about the “whoami” command.

    It’s pretty easy to figure out what it does. You issue the command “whoami” to the command line and Linux will tell you who you are.


    So what if you know who you are, but you want to know what information there is about your user account? Or someone else’s account?

    This is where the "id" command comes into play. The "id" command has 4 options we'll cover here.

    -g will tell you the primary group for a user.
    -G will tell you all the groups a user is part of.
    -u will tell you the user’s UID number.
    -n will tell you the user’s username or group name instead of just printing out the UIDs and/or GIDs.

    Let’s see some examples:
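    Since the screenshot didn't survive, here's a quick sketch you can run on any Linux box (root always has UID 0, so that line is safe to check):

```shell
id            # UID, GID and all groups for the current user
id -u         # your numeric UID
id -un        # -n turns the UID into a username
id -g         # your primary GID
id -Gn        # every group name you belong to
id -u root    # works for other accounts too; root is always 0
```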


    So how do you know who to look up if you don’t want to look through all the user and group information held in the “/etc/passwd” and “/etc/group” files?

    That’s easily done by just finding other users that are logged into a computer. You can do that with 3 other commands. Those commands are “users”, “w” and “who”.

    The “users” command will output all the users logged into the system at the moment the command was issued.


    Don't let it deceive you if you see the same user logged on more than once. A user can show up multiple times if they have multiple shells (terminals) open.

    Next is the “w” command. As you can see below, it’s much more detailed than the “users” command. It also has a nice header to tell you what each of the columns are telling you. In addition to that, it tells you system up time, what users are currently logged in and it tells you the load averages on the CPU for the last minute, 5 minutes and 15 minutes.


    The next command is the “who” command. It’s slightly different than the “w” command, but is equally important.


    As you can see from the screenshot I’ve provided, there are multiple columns, but this time no header.

    The first column is the username for who is logged in.
    The second column is the terminal that they’ve logged in to.
    The third and fourth columns are the date and time that the user logged in.
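    All three commands are safe to try side by side (the output depends on who is logged in to your machine, so it may be short or even empty):

```shell
users    # flat list of logged-in usernames
w        # header, uptime, load averages, and per-user detail
who      # username, terminal, and login date/time; no header
```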


    Logging in as a Different User

    The last thing I want to cover here is logging in as a different user. We’re straying away from actually talking about the file system, but I did bring up a couple things regarding the “root” account so I feel it’s only fair that I tell you how to log in as root (if you don’t already know).

    It’s real easy actually. See below.


    You can do that for any account you know the password for. You can “su” and then any account name you know is on the system.
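    Since "su" prompts interactively for the target account's password, the switching itself is shown as comments here ("steve" is a made-up account name); "whoami" is how you confirm who you currently are:

```shell
whoami          # who you are right now
# su -          # become root (asks for root's password)
# su - steve    # become any account whose password you know
# exit          # return to your previous shell
```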


    File Permissions

    Now that we’ve covered user identities, file ownership, groups, and all that stuff, let’s get back to the file system and file permissions.

    There are two ways to control file system permissions for a file. The first way is called generic permissions. I call them generic because you're using letters to map permissions. The other way is with Octal permissions. This is where you use numbers to modify the file permissions.

    Let’s start by discussing what we’re doing here. Below is a folder called “newdir” and a file called “newpwd.txt”.

    drwxrwxr-x. 2 serdman83 serdman83 4.0K May 13 17:14 newdir/
    -rw-rw-r--. 1 serdman83 serdman83   16 May 13 16:59 newpwd.txt

    Lets look at the file permissions before we look at the folder permissions.

    The file permissions are "-rw-rw-r--".

    You will always see these 10 positions filled with some characters.
    The first character is a hyphen (-). The reason why is that it's a file. You'll notice on the directory it's a "d" (for directory).
    The next three characters are "rw-". These are the permissions associated with the owner of the file. It means the owner (a user) is allowed to read and write to the file, but can't execute or run the file.
    The next three characters are also "rw-". These permissions are associated with the group of the file. This means that the group that owns the file is allowed to read and write to the file, but again, can't execute it.
    The next three characters are "r--". These permissions are for everyone else. This means that anyone else is allowed to read the file, but can't write (or change) or execute the file.

    I need to cover that first column better so that you know what you’re looking at here. Below is a table of the possible characters that you’ll see in the first character’s position.

    Regular File             -      Storing data
    Directories              d      Organizing files
    Symbolic Links           l      Referring to other files
    Character Device Nodes   c      Accessing devices
    Block Device Nodes       b      Accessing devices
    Named Pipes              p      Interprocess communication
    Sockets                  s      Interprocess communication


    After the first bit you will always, always, always, have the options of read, write and execute, for each of the Owner, Group and Other of a file.

    Let's say that you want only yourself to be able to read and modify a file; the permissions would look like this: "-rw-------"

    Let's say you want you and the group to be able to read and modify a file, but nobody else… the permissions would look like this: "-rw-rw----"

    Here’s a graphic I found at Oracle’s website and then doctored up for understanding this.


    You’re probably wondering why I have the “421 421 421” and the “7 5 0” on there too.

    The reason why is that when you look at binary, the first three positions are 1, then 2, then 4 (binary is read right to left). If you add up the values that are present in a file's permissions, you'll end up with a value between 0 (represented by all hyphens (-)) and 7. The Read position is 4, the write position is 2 and the execute bit is 1. And if you add up the three positions, you'll find a number between 0 and 7.


    Using CHMOD to Change File Permissions

    So that’s great, now that we understand what the permissions look like after they’re set, you’re probably wondering how to change them.

    This is where CHMOD comes into play.

    As I said before, "chmod" (a program whose name stands for "CHange MODe") takes different types of arguments. The first type is what I call generic. Personally, I never use this. I always use the second type of argument, Octal. But let's look at what we have here:

    u    user
    g    group
    o    other
    a    all
    +    add
    -    remove
    =    set
    r    read
    w    write
    x    execute

    Now that we know what types of abilities we have, let's test this stuff out and change some permissions.

    Below, you see a whole list of commands you can run for “chmod” with the effective permissions at the end. We’re working with an imaginary file named “linux.dat”. Make sure to look at the file’s starting permissions and ending permissions.

    serdman83 @ newstudent05 ~ :) ?> ls -l linux.dat
    -rw-rw-r-- 1 serdman83 serdman83 42 Apr 15 12:12 linux.dat

    chmod arguments                       result of command                     effective permissions
    chmod o-r linux.dat         remove readability for others                   rw-rw----
    chmod g-w linux.dat         remove writability for group                    rw-r--r--
    chmod ug+x linux.dat        add executability for user and group            rwxrwxr--
    chmod o+w linux.dat         add writability for other                       rw-rw-rw-
    chmod go-rwx linux.dat      remove readability, writability,
                                and executability for group and other           rw-------
    chmod a-w linux.dat         remove writability for all                      r--r--r--
    chmod uo-r linux.dat        remove readability for user and other           -w-rw----
    chmod go=rx linux.dat       set readability and executability but no
                                writability for group and other                 rw-r-xr-x


    I hope you can see from this output that you can effectively change permissions for any file using this technique. Test it out on your own and see what you can do!
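    You can try a couple of those rows yourself on a scratch file. Note that unlike the table (where each row starts over from "rw-rw-r--"), commands run back to back stack on top of each other:

```shell
cd /tmp
touch linux.dat && chmod 664 linux.dat   # start from rw-rw-r--
chmod o-r linux.dat                      # now rw-rw----
chmod ug+x linux.dat                     # now rwxrwx---
ls -l linux.dat
```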


    Now let's talk about Octal permissions. I think Octal is easier, but maybe that's because I use it all the time, and I rarely ever use the other method.

    With Octal you can specify permissions for entire folders of files as well as individual files. I believe octal notation is more powerful and easier to script with. As we saw in the graphic before, you have User, Group and Other permissions. Let's look at that again:


    750 isn't actually seven hundred and fifty; it's 7-5-0. The 7 means that the owner user is allowed to Read, Write and Execute the file. The 5 means that everyone in the group that owns the file is allowed to Read and Execute the file. And if you're not the owner or in the owner group, you aren't allowed to do anything with the file.

    664 would mean that the owner has read and write permissions, the group has read and write permissions and everyone else has read permissions.

    Now, let's look at the "chmod" command with octal notation. Remember that with the other way of changing file permissions you have to figure out what the current file permissions are, and then figure out what your command should add or remove. Here, with Octal notation, you don't have to worry about how to change the permissions; you just have to figure out what the end result should be. We'll use the same chart as we used above.

    serdman83 @ newstudent05 ~ :) ?> ls -l linux.dat
    -rw-rw-r-- 1 serdman83 serdman83 42 Apr 15 12:12 linux.dat

    chmod arguments             effective permissions
    chmod 660 linux.dat         rw-rw----
    chmod 644 linux.dat         rw-r--r--
    chmod 774 linux.dat         rwxrwxr--
    chmod 666 linux.dat         rw-rw-rw-
    chmod 600 linux.dat         rw-------
    chmod 444 linux.dat         r--r--r--
    chmod 260 linux.dat         -w-rw----
    chmod 655 linux.dat         rw-r-xr-x

    Make sense?
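    Here's the octal version on a scratch file; "stat -c %a" (GNU stat) prints the current permissions back in octal so you can verify each change:

```shell
cd /tmp
touch linux.dat
chmod 750 linux.dat     # rwxr-x---
stat -c %a linux.dat    # prints 750
chmod 644 linux.dat     # rw-r--r--
stat -c %a linux.dat    # prints 644
```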


    Changing Ownership of Files

    Well that's great, we can now work on file permissions and we understand how to interpret long listings of files using the "ls -alh" command. Now let's look at changing ownership of files.

    We know that there are two owners. There’s the actual User that owns a file, and there is the group who owns the file. There always has to be both.

    With the “chown” command, you can either change the owner of one file or directory, or you can add in a “-R” and change all the files and folders recursively (starting with everything in your current working directory). Be very careful, you may have some unintended consequences by using the “-R” argument. Make sure you understand what you’re doing.

    So let's look at some examples.

    Below you see that I have changed the ownership of a file from me to root. See here that I was logged in as Root to do that.


    On this screenshot below I showed the use of the “-R” so that I could change the ownership of the whole “all-labs” directory and all the files and folders below them.
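    The commands behind those screenshots look like this. Note that changing a file's owner requires root, so run this as root (the file and directory names are made up for the example):

```shell
# Run as root; paths are hypothetical
touch /tmp/own-demo.txt
chown root /tmp/own-demo.txt            # change the owner of one file
mkdir -p /tmp/all-labs && touch /tmp/all-labs/lab1.txt
chown -R root /tmp/all-labs             # -R recurses through the directory
ls -l /tmp/own-demo.txt                 # owner column now shows root
```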


    Changing Group Ownership of Files

    Now that we know how to change the ownership of a file, how about changing the group owner of a file? That is done in the exact same way as the “CHOWN” command, but instead of “chown”, we’ll use “CHGRP” (which is short for CHange GRouP).

    Below, I changed the group owner from my personal group, to the Root group.

    And here, I showed how to use the “-R” for recursion.
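    A sketch of the same idea that works without root: a file's owner can chgrp a file to any group they themselves belong to, so this example just reuses your own primary group (the file name is made up):

```shell
cd /tmp
touch grp-demo.txt
chgrp "$(id -gn)" grp-demo.txt   # set the group owner to a group you're in
# chgrp -R root all-labs/        # recursive form; needs root for the root group
ls -l grp-demo.txt
```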


    File System File Information

    Before we go any further, we need to touch on some information that I brought up before. As you can see below, I’ve created a new file by echoing some data into it. The first line started the file, and the three following lines added to it. Then I showed a long listing of the file to show you that information.

    If you notice, my file is 23 bytes in size, which is the Data portion of the file. There is also metadata for the file: the owner, group owner and permissions on the file. You don't see it here, but there is other data about the file too, such as the creation date, modification date and access date. The last piece of info is the file name, commonly referred to as a "Dentry", which is a combination of the file name and the "Inode" that it refers to.

    Inode is a new word here, as well as Dentry. As I said before, a Dentry is a combination of the filename and the Inode. The Inode is the file's metadata and holds a reference to the file's Data. Those three things are what make up a file. I hope I explained that so you can understand. Just remember that a file will always have those three things: Inode, Dentry and Data.

    As I mentioned before, an Inode contains information about what a file is. Remember that everything in Linux is a file; there are just different types of files. Here are the different types that you'll see in Linux:

    Regular File             -      Storing data
    Directories              d      Organizing files
    Symbolic Links           l      Referring to other files
    Character Device Nodes   c      Accessing devices
    Block Device Nodes       b      Accessing devices
    Named Pipes              p      Interprocess communication
    Sockets                  s      Interprocess communication

    You must remember that an Inode carries the File Type (as mentioned above), the owner and group owner of the file referenced, the times about the file (atime (access/read time), mtime (Last Modified time) and ctime (Last time the Inode information was changed)), the file length (measured in bytes) and the total amount of disk space the file uses, and lastly the link count (which we’ll talk about in the next section).


    File System Linking: Difference between Hard and Soft Links

    Now we're going to talk about file linking. Just as Windows and Mac have links, so does Linux. Windows has shortcuts on the desktop (ruined by the Windows 8 UI) which are similar to links in Linux.

    In Linux, there are two different kinds of links: Hard and Soft. Let's dive in and look at the difference and how you can apply them on your Linux box.

    Hard links can be used when the same file needs to appear in two different locations. Let’s say there is a program that needs to read a configuration file which lives in another program’s folder. Instead of keeping two copies of the file up to date and replicating changes in two different spots, you could create a hard link. Every time the configuration file is updated in one location, the change is automatically seen in the other. The other benefit is that one program can reference the file as “program1.conf” while the other program sees it as “other-program.conf”. I know, this is a bad example, but stay with me here.

    So the file is created for the first program in “/etc/new-program/program1.conf”. It’s just a regular file on the system’s hard drive. Let’s pretend that the file I just created in the last section (newfile-test.dat) is this program1.conf file. Now we’ll create a hard link to the file to pretend that the file is in a different location.

    You can create a hard link by using the “ln” command. It’s very similar to the way that the Move command “mv” works. See here how I’ve done it:

    Always remember when making links, the rule of thumb is,

    "ln" <spacebar> real-file-name <spacebar> linked-file-name


    Soft links are very similar to hard links. The difference between them is what happens when the original file is deleted. Let’s look at soft links first, then we’ll talk about deleting them.

    Soft Links can be created very similarly, but the difference is the underlying structure of the link. A soft link, or symbolic link, is like your shortcut on the desktop of your Windows box. When you create a soft, or symbolic, link, you’re just putting a file in the location that you want it, that points to the real location of the file. Let me show you:

    As you can see, I created a file as root in my home folder and then created a symbolic link to that file that is named “linked-newfile.dat”.
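    A minimal sketch of the same idea, using a stand-in for the file names from the example above:

    ```shell
    cd "$(mktemp -d)"
    echo "some data" > newfile.dat

    # ln -s <real-file-name> <linked-file-name>
    ln -s newfile.dat linked-newfile.dat

    # The link shows up with type "l" and an arrow to its target
    ls -l linked-newfile.dat

    # readlink prints the path the symlink points at
    readlink linked-newfile.dat   # prints: newfile.dat
    ```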

    Now that you see how to create hard and soft links, let’s talk about the differences, and why they matter.

    With soft links, if I delete the source (original) file, the link is dead; that’s called a dangling link. The link file still exists, but it points at nothing. That can’t happen with a hard link, because both names refer to the same inode, and the data isn’t freed until the last link to it is removed.

    The other issue is a link that refers to a link: if the first link references the second, and the second refers back to the first, that’s an infinite loop, called a recursive link. Unless you’re trying to wreak havoc on your machine it’s pretty hard to do, but it is possible.
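    You can watch both deletion behaviors with a couple of throwaway files: deleting a symlink’s target leaves a dangling link, while deleting one name of a hard-linked pair leaves the data reachable under the other name.

    ```shell
    cd "$(mktemp -d)"
    echo "data" > original.dat
    ln -s original.dat soft.dat   # soft (symbolic) link
    ln original.dat hard.dat      # hard link

    rm original.dat

    # The soft link now dangles: the link exists, but its target is gone
    [ -L soft.dat ] && [ ! -e soft.dat ] && echo "soft.dat is dangling"

    # The hard link still reaches the data through the shared inode
    cat hard.dat                  # prints: data
    ```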

    Hard Links                                             Soft Links
    ----------                                             ----------
    Directories may not be hard linked.                    Soft links may refer to directories.

    Hard links have no concept of "original"               Soft links have a concept of "referrer" and
    and "copy"; once a hard link has been                  "referred"; removing the "referred" file
    created, all instances are treated equally.            results in a dangling referrer.

    Hard links must refer to files in the                  Soft links may span filesystems (partitions).
    same filesystem.

    Hard links may be shared between                       Soft links may not refer to files outside of
    "chroot"ed directories.                                a "chroot"ed directory.


    Linux File Systems, Disks and Mounting Them


    Before we get into mounting disks, we need to look at how Linux looks at Disks. As we mentioned in the section named, “Other Important Directories”, there is a directory named “dev” at the root of the file system (/dev). That is where you’re going to find all the disks located by default. But that’s not how you access the data on the disk.

    Before we talk about how to access the data on a disk, we need to talk about some other stuff.

    Disks are devices within your computer system, and if you look at the long listing of the /dev directory, you’ll see something interesting.

    As we mentioned above, a hard disk is a “block level device” in Linux. The “sd” prefix originally stood for SCSI disk, and the same driver handles the SATA drives most systems use these days, which is why we see “sd”. If it were an IDE hard drive, you would see “hd” there. A floppy disk would start with “fd”, and a CD-ROM device shows up as “cdrom”. Pretty straightforward.
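    On your own machine you can filter the long listing of /dev down to just the block devices; a leading “b” in the permissions column marks a block device node. (The exact output depends entirely on your hardware, so yours will differ.)

    ```shell
    # Print permissions and name for every block device node in /dev;
    # on a SATA system you would typically see sda, sda1, sda2, and so on
    ls -l /dev | awk '$1 ~ /^b/ {print $1, $NF}'
    ```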

    But why is there “sda” and “sda1” and “sda2” (and so on) in there? Those are all significant in their own way and we’ll cover what all of that stuff means.

    I’m not going to get too granular here, but the main thing to understand is that by default no one but the root account and the “disk” group on your Linux box can do anything in this directory. That’s really important, because access to the data on these devices shouldn’t be given to just anyone. If someone wants that data, they have to find where the disk is mounted and then see whether they are allowed to read or write in the mounted area.

    Before a disk can be mounted, it must have been formatted with a file system…


    File Systems and EXT4


    What’s the big deal about File Systems? Well, the big deal is that without a file system, you wouldn’t be able to store data logically on a disk, you wouldn’t be able to easily recall that data later, and you wouldn’t be easily able to search for data on that disk.

    A file system provides a template of “blocks” where the operating system is allowed to store data. The default file system on most Linux distributions is EXT4. EXT stands for Extended, and the number 4 is the version number, so EXT4 is the fourth extended file system. It supports a lot of options that I’m not going to get super deep into here, but you can read all about it on other websites.

    Essentially, before a disk can be used in Linux, it must have a file system setup on it. In Linux, this is really easy to accomplish. There are many GUI tools out there, such as GParted, but the one I’m going to cover here is the “mkfs” command line toolset. I say toolset because there are actually many “mkfs” programs in the /sbin/ directory.

    You must be logged in as “root” in order to use the “mkfs” programs (remember “su root”); otherwise they won’t work properly and will throw errors you may not be expecting. The programs all live in the “/sbin” directory, and if you recall, /sbin is where all the programs live that only root is allowed to run. Let’s look at the mkfs programs:

    As you can see above, there are many different file systems that Linux is able to make.


    Mounting File Systems and Viewing Mount Points

    Since we have the ability to make filesystems, now let’s mount them!

    As you may or may not know, the “mount” program is used to mount filesystems. But how do you see the partition you’ve mounted? There are no drive letters like in the Windows world.

    Filesystems and partitions are actually quite simple in Linux. Recall that the root of the filesystem is always “/”. Whenever you mount a filesystem, you mount it onto a folder somewhere under that root. On a single Operating System desktop you wouldn’t normally have multiple partitions, but on more advanced systems there can literally be over a dozen, and the end user would never know.

    In my system, I actually separate out many partitions so that I can easily upgrade or migrate Operating Systems. This makes it especially easy when you move your home folder to a new machine. Imagine if the “/home” directory was actually a different hard drive that was automatically mounted when the system boots. If you reinstalled your Linux OS, or even switched to a different one altogether (maybe from Fedora to Ubuntu, or Debian to SUSE Linux), you could keep all the data in your “/home” folder intact while reloading your OS.

    That “/home” folder would be considered a mount point. You can see your mount points by just issuing the “mount” command at the terminal.
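    For example, running it with no arguments prints one line per mounted filesystem (device, mount point, type and options), and you can grep out the one you care about:

    ```shell
    # Print every mounted filesystem, one per line
    mount

    # Pick out a single mount point, e.g. the root filesystem
    mount | grep ' on / '
    ```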

    As you can see above, the mount command gives some good information to the end user. You can see that there is a single hard drive in my machine (it’s a virtual machine, but it’s all the same), and it is “/dev/sda”. On that disk there are two mounted partitions: sda1, mounted at “/boot”; and sda2, mounted at “/” (the root partition).

    There are some other mount points listed here. For instance, the CDROM is mounted at “/media/RHEL_6.1 x86_64 Disk 1”.

    In most Linux distributions CD or DVD Rom devices are mounted in either the “/media” directory, or in the “/mnt” directory. Just from habit, I normally mount devices (DVDs, CDs, USB drives, etc..) in the “/mnt” directory.

    Some people say that it’s easier in Windows to view disk drives through the “My Computer” icon that is on the desktop. In Linux it’s really easy too. The “df” command will tell you everything you want to know about your disk’s free space. Let’s take a look at what that looks like. When I use the “df” command I normally append an “-ah” on the back so that I can see everything in human readable format. But let’s look at both here:
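    Here’s a minimal sketch of both forms:

    ```shell
    # Human-readable sizes for the filesystem holding /
    df -h /

    # -a adds pseudo filesystems (proc, sysfs, tmpfs, ...) to the listing
    df -ah | head -n 10
    ```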

    As you can see, the df command can come in very handy. As I said before, I almost always use “df -ah” because it’s normally the information I’m looking for. Play around with the other options though; you may find them useful.


    System Hardware

    As long as we’re on the subject of hard drives, why don’t we slide right into system hardware? It’s not really filesystem related, but we might as well cover a few things, such as how to find information about the hardware in your computer.

    If you’re on a machine that you’ve never used before, you can find out what hardware is in it with a few different commands, and looking at a couple different log files.

    When a Linux system boots, you’ll often see a bunch of messages scroll past as the system comes up: portions of the hardware starting, drivers being activated, network interfaces being brought up, services starting and many other tasks as well. All of these messages are produced by the kernel, and they are logged to a file called “dmesg” in the “/var/log/” directory. This log is different from most others in that it can only grow to a certain size, and it is wiped clean on every boot, so you only see messages from the most recent boot.

    According to Henry’s blog site, the default size is 32K, which can be changed in a couple ways if you so choose. I don’t particularly see the need for that, but check out his blog if you want more info on that.

    The dmesg log (also referred to as a buffer) can offer a lot of insight into what hardware is installed in your computer. Go ahead and check it out!

    To view the contents of that log you can either “cat /var/log/dmesg” or just issue the “dmesg” command at the command line. Depending on which version of Linux you’re using, you may need to run it as “root” or “sudo” the command.

    steve @ mintdebianvm ~ :) ᛤ>   sudo cat /var/log/dmesg
    [sudo] password for steve:
    [    0.000000] Initializing cgroup subsys cpuset
    [    0.000000] Initializing cgroup subsys cpu
    [    0.000000] Linux version 3.2.0-4-amd64 ( (gcc version 4.6.3 (Debian 4.6.3-12) ) #1 SMP Debian 3.2.32-1
    [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-4-amd64 root=UUID=6336df47-4713-4fe1-8327-93cbc721c8ef ro quiet
    [    0.000000] BIOS-provided physical RAM map:
    [   10.620051] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
    [   10.620051] Bluetooth: BNEP filters: protocol multicast
    [   10.624155] Bluetooth: RFCOMM TTY layer initialized
    [   10.624155] Bluetooth: RFCOMM socket layer initialized
    [   10.624155] Bluetooth: RFCOMM ver 1.11
    [   10.692260] lp: driver loaded but no devices found
    [   10.825230] ppdev: user-space parallel port driver
    [   10.972943] e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
    [   10.976905] ADDRCONF(NETDEV_UP): eth1: link is not ready
    [   10.972943] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready

    As you can see above, the file is pretty long. My dmesg buffer is over 400 lines long. For some Linux boxes that’s short… for others it’s long. It all depends on what the system is doing, and what software and hardware you have installed.


    Another way to see what hardware is in your computer is to look at the Hardware Abstraction Layer. On systems that use HAL there is a process named “hald”, the Hardware Abstraction Layer Daemon. You can query the hal daemon with the “lshal” command, short for “list hal”. Check out the command below and try it on your computer too.

    steve @ mintdebianvm ~ :) ᛤ>   lshal

    Dumping 64 device(s) from the Global Device List:
    udi = '/org/freedesktop/Hal/devices/computer'
      info.addons = {'hald-addon-cpufreq', 'hald-addon-acpi'} (string list)
      info.callouts.add = {'hal-storage-cleanup-all-mountpoints'} (string list)
      info.interfaces = {'org.freedesktop.Hal.Device.SystemPowerManagement'} (string list)
      info.product = 'Computer'  (string)
      info.subsystem = 'unknown'  (string)
      info.udi = '/org/freedesktop/Hal/devices/computer'  (string)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_argnames = {'num_seconds_to_sleep', 'num_seconds_to_sleep', '', '', '', 'enable_power_save'} (string list)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_execpaths = {'hal-system-power-suspend', 'hal-system-power-suspend-hybrid', 'hal-system-power-hibernate', 'hal-system-power-shutdown', 'hal-system-power-reboot', 'hal-system-power-set-power-save'} (string list)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_names = {'Suspend', 'SuspendHybrid', 'Hibernate', 'Shutdown', 'Reboot', 'SetPowerSave'} (string list)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_signatures = {'i', 'i', '', '', '', 'b'} (string list)
      org.freedesktop.Hal.version = '0.5.14'  (string)
      org.freedesktop.Hal.version.major = 0  (0x0)  (int)
      org.freedesktop.Hal.version.micro = 14  (0xe)  (int)
      org.freedesktop.Hal.version.minor = 5  (0x5)  (int)

    steve @ mintdebianvm ~ :) ᛤ>   lshal --help
    lshal version 0.5.14

    usage : lshal [options]

        -m, --monitor        Monitor device list
        -s, --short          short output (print only nonstatic part of udi)
        -l, --long           Long output
        -t, --tree           Tree view
        -u, --show <udi>     Show only the specified device

        -h, --help           Show this information and exit
        -V, --version        Print version number



    You’ll also notice a filesystem mounted on your machine named “/proc”. This is an interesting virtual directory. Like the “dmesg” log, nothing in it survives a reboot: the /proc filesystem is generated by the kernel at runtime to hold information the kernel produces and uses. If you do an “ls -alh” on “/proc”, you’ll notice many folders named with only numbers. Issue the command “ps aux” and you’ll quickly see that those numbered folders correspond directly to the Process ID (PID) of every process running on your computer. Web browsers, terminal sessions, and so on: everything running is issued a PID, and every process has a folder in /proc with information about it.
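    A handy shortcut is /proc/self, a symlink the kernel points at the directory of whichever process reads it; looking inside shows the kind of per-process data each numbered folder holds:

    ```shell
    # /proc/self resolves to this process's own PID directory
    ls -l /proc/self

    # status holds the process name, state, PID, memory usage and more
    grep -E '^(Name|State|Pid):' /proc/self/status

    # cmdline holds the command line, NUL-separated
    tr '\0' ' ' < /proc/self/cmdline; echo
    ```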

    You’ll also notice that there are a ton of files in there too. Let’s examine some of them!

    As you can see above, there are many files in there. I couldn’t fit all of them neatly into a screenshot, but you can look at them on your computer.

    The one I thought you may be interested in is the “uptime” file. As you can see, it reports, in seconds, how long the system has been up and running.
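    The first number in /proc/uptime is seconds since boot (the second is time spent idle, summed across CPUs), so a little awk turns it into something readable:

    ```shell
    # Convert the first field of /proc/uptime into days/hours/minutes
    awk '{printf "up %d days, %d hours, %d minutes\n", \
         $1/86400, ($1%86400)/3600, ($1%3600)/60}' /proc/uptime
    ```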

    Let’s look at a couple more files:

    As you can see here, I am showing the “cpuinfo” and “meminfo” files. Both of them show some good details about the CPU and Memory installed in the system we’re using here.


    Disk and USB Information

    There is also information you can find about Hard Drives and USB devices. We’ll start with USB devices. Issue the “lsusb” command on your computer and look at the output.

    steve @ mintdebianvm ~ :) ᛤ>   lsusb
    Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 002: ID 80ee:0021 VirtualBox USB Tablet

    As you can see from the output, there aren’t many USB devices on my local computer. On my Red Hat test server, there was actually nothing to report, which is why I showed the output from my Debian box.

    Now let’s look at the “/proc/scsi/” folder. Since I don’t know anyone who uses IDE drives anymore, I’m not going to cover those. SATA is pretty much the de facto standard for laptop and desktop systems these days. See below for some of the output.

    There are only a couple of files and subdirectories in “/proc/scsi”, but they are valuable to a system administrator looking to learn about the hard disks in the system.


    PCI Devices and Resources

    As you most likely have noticed, the “lspci” command is much like the “lsusb” command. It lists all of the PCI devices in the system you’re working on. It’s pretty straightforward, so I won’t spend much time here.

    Notice in the screenshot above the devices in my test server.

    Going back to the “/proc” virtual filesystem, there is a file that tracks IRQs, or Interrupt Request Lines. An IRQ is used by the hardware in your computer to get the attention of the CPU. According to Wikipedia, “… on the Intel 8259 family of PICs there are eight interrupt inputs commonly referred to as IRQ0 through IRQ7. In x86 based computer systems that use two of these PICs, the combined set of lines are referred to as IRQ0 through IRQ15. … Newer x86 systems integrate an Advanced Programmable Interrupt Controller (APIC) that conforms to the Intel APIC Architecture. These APICs support a programming interface for up to 255 physical hardware IRQ lines per APIC, with a typical system implementing support for only around 24 total hardware lines.”

    If you’re running a multi-processor system, you’ll notice a column of counters for each processor. My system only has 1 CPU, so I only see one column. You’ll also notice that IRQ 0 (zero) is always the timer. The reason for this is that your CPU needs to time-slice every process in order to get through all the work your system is computing. The timer commonly fires 1000 times per second (the kernel’s HZ setting), though the exact rate depends on how the kernel was built.

    If you want more information about IRQs, Wikipedia has a great write-up on the subject and you can learn more about them there, but for this discussion this is about as much as you need to know.

    The file I was speaking of earlier is “/proc/interrupts”, and if you view it, you’ll be able to see all the IRQs and what they are tied to on your system.
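    A peek at the top of the file: the first column is the IRQ number, there is one counter column per CPU, and the tail of each line names the interrupt controller and the driver that owns the line:

    ```shell
    # IRQ 0 is commonly the timer; the counts are per-CPU since boot
    head -n 5 /proc/interrupts
    ```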

    There is also a file for memory information. Back in the day, RAM was at a premium and hard to come by in large quantities in desktop computers. These days, just about every peripheral in your computer probably comes with its own memory buffer, and Linux needs to know how to handle that memory, which drivers are using it, and how data flows through it.

    As you can see below, that memory is mapped in the “/proc/iomem” file.


    How Filesystems Manage Devices

    As I stated earlier in the File System File Information section, everything is a file. When Linux needs to use a device, it opens a file. When Linux needs to write data to a hard disk, it writes to a file. When Linux needs to print data to your terminal, it writes to a file (stdout). Everything is a file.

    There are virtual consoles in your system as well. These can be reached with “Ctrl” + “Alt” + F# (where # is a number 1-8 by default). So if you press “Ctrl” + “Alt” + “F6”, your screen will turn black and a prompt will appear waiting for you to log in. Your desktop is just another virtual console, one running your display server, which is what draws the GUI; press the matching key combination (often “Ctrl” + “Alt” + “F7”, though the console number varies by distribution) to get back to it.

    Regardless, all of those virtual consoles are actually just files. Strange, maybe, but try to echo some text to “/dev/tty6” and see what happens when you look at virtual console 6.
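    If you’re in a session without virtual consoles (over SSH, say), the very same experiment works against “/dev/stdout”, the device file for your current output stream:

    ```shell
    # Writing to the device file /dev/stdout lands on your terminal,
    # just as writing to /dev/tty6 lands on virtual console 6
    echo "everything is a file" > /dev/stdout
    ```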


    I hope this is a convincing fact to show you that everything is a file. We just “echoed” text into the file “/dev/tty6” and it showed up on the VC6 screen.


    Again, going back to what I said before, everything is a file, just that there are different types of files. Here are the different types of files that you’ll see in Linux:

    Regular File             -      Storing data
    Directories              d      Organizing files
    Symbolic Links           l      Referring to other files
    Character Device Nodes   c      Accessing devices
    Block Device Nodes       b      Accessing devices
    Named Pipes              p      Interprocess communication
    Sockets                  s      Interprocess communication

    The two that we are going to work with now are Block Device Nodes and Character Device Nodes. As you see from the table above, they both deal in accessing devices. But how?

    We’ll cover block devices first because we’re talking about hard drives. You’ll notice that any hard disk in your system, such as “/dev/sda1”, is a block level device. That means information is transferred to and from the underlying device in groups, or blocks. Another important fact about block devices is that the Linux drivers allow random access to the device, as opposed to sequential access. This is a huge benefit: could you imagine if your computer had to read all the data on the drive before being able to pull a file located at the very end?

    As for Character devices, these have to do with things like keyboard input and output, such as the virtual console (or virtual terminal) that we just “wrote” data to in the example above. Another type of Character Device would be a printer.
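    “stat” will tell you which kind of node you’re looking at; “%F” prints the file type recorded in the inode. /dev/null is a character device everywhere, and a disk node such as /dev/sda (if your machine has one) reports as a block device:

    ```shell
    stat -c '%F' /dev/null    # prints: character special file
    stat -c '%F' .            # prints: directory

    # Disk nodes report as "block special file" (skipped if absent)
    stat -c '%F' /dev/sda 2>/dev/null || echo "no /dev/sda here"
    ```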


    We’re getting there… slowly but surely! We’re on the home stretch, so let’s finish this up with the last part of file system management: disk partitioning and encryption!


    More on Partitions

    As I mentioned before, Linux sees hard disks through block devices that you can list in the “/dev” directory.

    To expand on this, let’s look at the screenshot I have for “sda” again:

    As you can see from the screenshot, there is a device referred to as “/dev/sda”. That is one disk in the machine. If there was another, it would be “/dev/sdb”, and then, “/dev/sdc”, and so on.

    The partitions are listed after that. You can see there are multiple partitions on “sda”, and they are “/dev/sda1”, “/dev/sda2”, and “/dev/sda3”. Using the “mount” command you can see that those three partitions are mounted to “/boot”, “/” (root), and “swap”, respectively. We’ll talk about swap space here in a bit.


    Disk Partition Alignment

    Every disk has something called a Master Boot Record, or MBR for short. This tells the disk exactly where certain things are located on the disk, such as the Bootloader and the Partition Table.

    The bootloader only exists on disks that are marked as bootable. It is a small low-level executable that the BIOS transfers control to during the boot cycle; the bootloader then passes control to the partition on which an operating system is present.

    Sixty-four (64) bytes of the MBR are reserved for the partition table. The partition table is like a map, holding the information on where partitions start and stop on the disk. Each table entry is 16 bytes, so 64 bytes holds exactly four entries, which is why disks are only allowed to have 4 primary partitions.

    There’s a way to get more partitions on your disk though, using “Extended Partitions”. This has been around for many years and is a genius way to fit more partitions on a disk. Under the DOS partitioning scheme, you can designate any one of the 4 partitions as an Extended Partition. The Extended Partition can be thought of as a container for further partitions, referred to as “logical partitions”.

    There is a program that you can use to alter or view partition information. That program is “fdisk”. You must be root to run the program because it queries the disks in your machine at a low level that normal users don’t have access to. Many times you’ll see people call the “fdisk” program in one of two ways:


    The reason for the “fdisk -cul” is that “c” disables some old DOS compatibility that isn’t required anymore, and “u” prints the information in sectors rather than cylinders. Back in the day, even in OpenBSD versions 3.6 or 3.8, I remember having to partition disks by specifying the number of cylinders, heads and sectors. These days it’s so much easier: you can specify sizes in a variety of ways, such as K for kilobytes, M for megabytes and G for gigabytes.

    But we’re not even at that part yet. So let’s keep moving!

    In the output of the last screenshot you can see a lot of information. You can see the total size of “sda” is 6442MB. You can see that there are three partitions on “sda”. You can see that there is a second disk in the system (sdb) that is just about 1GB in size and it has 7 partitions.
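    If you don’t have a spare disk to experiment on, fdisk will happily read a plain file, so you can try the listing form safely against a scratch image (the file name here is made up):

    ```shell
    cd "$(mktemp -d)"

    # An empty 8 MB file stands in for a disk
    dd if=/dev/zero of=disk.img bs=1M count=8 status=none

    # -l only lists; nothing on the "disk" is modified
    fdisk -l disk.img
    ```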


    Making New Partitions

    With “fdisk” you can also specify new partitions. I’ll do my best in describing this…

    To start the “fdisk” utility, you need to call “fdisk” with a few arguments. See my screenshot below:

    Now that we’re inside the fdisk editor, you can do a lot of damage if you’re not careful, so… be careful!

    As you can see, I told fdisk that I want to edit the disk “/dev/sdb”. The first thing I want to do is look at the partition table.

    So press “p” and then Enter to show the partition table.

    In this case, I don’t want any of these partitions on here, so I’m going to delete them all. Let’s see what that looks like:

    As you can see, now we have disk “sdb” with no defined partitions on it.

    Now that we have an empty disk, let’s create some new partitions to see what that looks like.

    As you can see from that screenshot, I chose “n” for new partition, then “p” for primary partition, then “1” for the first partition on the disk. Then I specified “+200M” to say I want the partition to be 200 megabytes in size. After that, I printed the partition table again for you to see the new partition.
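    Since fdisk just reads keystrokes, the same n / p / 1 / size sequence can be scripted; this sketch runs it against a scratch image file rather than a real disk (and uses +2M because the image is tiny):

    ```shell
    cd "$(mktemp -d)"
    dd if=/dev/zero of=disk.img bs=1M count=8 status=none

    # n = new, p = primary, 1 = partition number, blank = default first
    # sector, +2M = size, w = write the table and exit
    printf 'n\np\n1\n\n+2M\nw\n' | fdisk disk.img > /dev/null

    # The new partition now shows up in the listing as disk.img1
    fdisk -l disk.img
    ```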


    Making New File Systems

    Now that we’ve got new partitions we can go back to our discussion on File Systems and EXT4. So just to clarify, now that we actually have a partition, you still can’t store data on there yet. Well… you could, but you wouldn’t be able to retrieve it very easily. You have to give your new partition a valid file system. The one that is pretty much the Linux standard these days is the EXT4 filesystem, so that’s the one I’ll show you how to use.

    The EXT4 file system is the successor to the EXT3 and EXT2 file systems. Those file systems used what is referred to as a “block mapping scheme”; EXT3 added journaling on top of EXT2, and EXT4 replaces block mapping with “extents” while bringing a long list of other add-ons, improvements and scalability work.

    EXT4 also supports huge file sizes (around 16TB) and huge file system sizes (up to 1 exabyte; an exabyte is 1024 petabytes, a petabyte is 1024 terabytes, and a terabyte is 1024 gigabytes).

    As we spoke about before, there are many things a file system provides. The first thing we spoke of was the structure of the file system. There is the root of the file system located at “/”, and every folder, file, and device is located below that. And since Linux looks at everything as a file, we can also recall that every file has a number of attributes including an “Inode”, a “Dentry” and the data. We covered these words in a previous section, but you really should know and understand their meaning, so let’s cover them again:

    • Inode: a place to store all of the information, or metadata, about a file; well, at least most of it. The Inode doesn’t store the actual data portion of a file or the file name. It does, on the other hand, store the file permissions, the user and group ownership data, and the three timestamps recording when the file was created, modified and so forth.
    • Dentry: is the section that stores the file name of a file. It also stores information about what folder structure is associated with a file, such as “/usr/bin/”.
    • Data: is pretty straightforward. It is the actual data associated with a file, such as a configuration text file, a LibreOffice document, or any other user file.

    Anyway, now we need to create our file system on our new partition. See below how to do that:
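    On the real disk from the fdisk example this would just be “mkfs.ext4 /dev/sdb1” as root. To keep the sketch safe to run anywhere, it formats an image file instead; the “-F” flag tells mkfs to proceed even though the target isn’t a block device:

    ```shell
    cd "$(mktemp -d)"

    # A sparse 64 MB file stands in for the new partition
    truncate -s 64M part.img

    # -F: accept a regular file; -q: quiet
    mkfs.ext4 -F -q part.img

    # The ext superblock magic 0xEF53 now sits at byte offset 1080
    dd if=part.img bs=1 skip=1080 count=2 status=none | od -An -tx1
    ```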


    Just an update. I’m not finished with this blog, but I felt there was enough starter information here to help people get going with Linux. Enjoy and keep coming back for more info as I’ll be adding to this and releasing new blogs all the time!!!





    SAMBA 4 Released! Let’s get installing!

    So, as many of you have heard, SAMBA 4 was finally released… and holy crap, it’s the closest LDAP service I’ve ever seen to the real Active Directory. As well it should be too, I mean, Microsoft actually helped work on it! This release of SAMBA is huge. It’s really going to change the game of LDAP, file sharing between Linux/Unix and Windows, and authentication. You can read the news release from the SAMBA team HERE or visit their website HERE

    What is really huge about it all is that you can setup a SAMBA 4 server to take over, literally, all functions of a Windows AD Domain Controller. It can process authentication requests, hand out Group Policies, process MSRPC communications and more. Think about if you could replace most of your Windows AD DCs with free software. How much will that save you in cost?

    So naturally, I’m in the process of getting it up and running. I figure I just got my home systems to authenticate to Active Directory, why not replace one of the Domain Controllers with a SAMBA Domain Controller? So, I’m basing this on a Debian 6 machine. I figure that’s the best place I can put it since I plan on it being around for a while. Why do I plan on it being around for a while? Because rebuilding AD from scratch sucks! And with Debian being a rolling-release operating system, I’ll never have to reinstall the OS on the next release! Pretty damn convenient if you ask me.

    So I downloaded the small .iso installer file from Debian, provisioned a new VM, and installed Debian 6. It really doesn’t take too long, and when it’s done you don’t have to update it; it’s totally patched and ready to go from the start. It’s so easy to work with!

    After getting the OS up and running, I had some housekeeping to do:

    sudo apt-get install gcc make python-dev linux-headers-2.6.32-5-all

    Then I was able to install VMware Tools. You’d have to do the same thing on a VirtualBox system, and you’ll need that stuff for installing SAMBA anyways, so you might as well just install this stuff now and get it over with.

    So now we need to actually get the SAMBA 4 code, which we can do in two different ways. I’m sure SAMBA 4 will start showing up in distribution repositories before long, so how you get it is up to you. The two options I recommend are GIT and WGET, which are outlined here:

    My Debian 6 machine didn’t have GIT installed so a simple “sudo apt-get install git” solved the issue.

    cd ~
    mkdir samba4
    cd samba4
    git clone git:// samba-master


    cd ~
    mkdir samba4
    cd samba4

    From here it’s simple. If you downloaded the tarball, just extract it like this:

    tar -zxvf samba-4.0.0.tar.gz

    Now, whether you used the git or the tar method, you’re in the same place.
    Just enter your “samba-master” or “samba-4.0.0” directory to continue.

    From here we’re going to compile everything, starting with the configure script, then the build, then the install:

    ./configure
    make
    sudo make install clean

    During the configure, make and install you’ll see a ton of scrollback. I set my scrollback to “Unlimited” in my terminal so that I can go back through it if there are issues. I forgot at first that “make install clean” needs to be run as root, so you can sudo that command:

    steve@ncis-samba:~/samba/samba-master$ sudo !!
    sudo make install clean
    [sudo] password for steve:
    WAF_MAKE=1 python ./buildtools/bin/waf install
    ./buildtools/wafsamba/ DeprecationWarning: the md5 module is deprecated; use hashlib instead
      import md5
    Waf: Entering directory `/home/steve/samba/samba-master/bin'
    * creating /usr/local/samba/etc
    * creating /usr/local/samba/private
    * creating /usr/local/samba/var
    * creating /usr/local/samba/private
    * creating /usr/local/samba/var/lib
    * creating /usr/local/samba/var/locks
    * creating /usr/local/samba/var/cache
    * creating /usr/local/samba/var/lock
    * creating /usr/local/samba/var/run
    * creating /usr/local/samba/var/run
        Selected embedded Heimdal build
    Checking project rules ...
    Project rules pass
    (scrollback omitted)
    Waf: Leaving directory `/home/steve/samba/samba-master/bin'
    'install' finished successfully (1m42.653s)
    WAF_MAKE=1 python ./buildtools/bin/waf clean
    ./buildtools/wafsamba/ DeprecationWarning: the md5 module is deprecated; use hashlib instead
      import md5
        Selected embedded Heimdal build
    'clean' finished successfully (0.765s)

    And we’re done! Well, at least for installing this software.

    You now have SAMBA 4 installed on a Debian 6 System! 🙂

    steve@ncis-samba:~/samba/samba-master$ ls -alh /usr/local/samba/
    total 40K
    drwxr-sr-x 10 root staff 4.0K Dec 15 13:18 .
    drwxrwsr-x 11 root staff 4.0K Dec 15 13:18 ..
    drwxr-sr-x  2 root staff 4.0K Dec 15 13:20 bin
    drwxr-sr-x  2 root staff 4.0K Dec 15 13:18 etc
    drwxr-sr-x  7 root staff 4.0K Dec 15 13:18 include
    drwxr-sr-x 14 root staff 4.0K Dec 15 13:19 lib
    drwxr-sr-x  2 root staff 4.0K Dec 15 13:18 private
    drwxr-sr-x  2 root staff 4.0K Dec 15 13:20 sbin
    drwxr-sr-x  7 root staff 4.0K Dec 15 13:20 share
    drwxr-sr-x  7 root staff 4.0K Dec 15 13:18 var

    You probably want to make SAMBA start when your server boots, right? Well, let’s get that going.

    The people over at SAMBA have made this super easy with a ready-made init script. So let’s get some wget action going on this.

    Just use wget like this:

    wget -O /etc/init.d/samba4

    If that link doesn’t work, I’ve posted the script on my site, here:

    From here, you just need to make sure this script is executable:

    chmod 755 /etc/init.d/samba4
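    If you’re curious what those permission bits mean, 755 is rwx for the owner and r-x for group and world — exactly what an init script needs. A quick sketch on a throwaway file (not the real script) shows the resulting mode:

```shell
# chmod 755 = rwxr-xr-x: owner can read/write/execute, everyone else
# can read/execute. Demonstrated on a scratch file for illustration.
f=$(mktemp)
chmod 755 "$f"
stat -c '%a %A' "$f"   # prints: 755 -rwxr-xr-x
rm -f "$f"
```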

    Now you can add this to your init scripts:

    update-rc.d samba4 defaults

    As for configuring SAMBA 4, that’ll be in my next blog. If it’s anything like setting up and configuring a Microsoft AD Domain Controller (and I suspect it’ll be quite a bit more involved than that), the next blog will be pretty long…




    Open Source: Managing Debian and Ubuntu Linux with Active Directory

    I talked about this in my last blog post: We had a need for authentication on our Linux/Unix systems to be done by Active Directory. So my co-worker and I set off on a mission to fulfill this request. We’d tried some software that wasn’t free, heard about some other software that wasn’t free, and then it struck us: “Why pay?”

    All the work had previously been done for us in the Open Source community… why not leverage them directly? So this is my homage to the Open Source community. I’m going to try to give back by writing this blog about my trials and tribulations in setting up this functionality. I’ll forewarn you, this blog entry is very long and gets into a lot of detail, but I assure you, at the end of the day, this works!

    My testbed here is my home network. I’m running a 2008 Server with AD installed. Nothing special, very vanilla, no crazy GPOs to deal with, no delegations to worry about and I’ve secured the environment fairly well (IMHO). There are virtually no extra roles, services or features installed other than a base install of AD Services, but I do have Exchange Server 2010 installed, so the schema has been extended for that. But it shouldn’t affect your environment if you aren’t running Exchange.

    I want to get one last statement in here: I am by no means a Linux or Unix expert, but I can troubleshoot and read. The way I have this set up is the way I figured out how to do it, and the best I can say is that it works, it’s secure, and it doesn’t take long to do. I’ve done a bunch of research and I’m going to attempt to regurgitate that knowledge back into this blog as best I can. If you know how to do something better here, please contact me at my LinkedIn page 🙂 .

    So let’s get down to brass tacks here… I have some Debian-based systems (Linux Mint 13, Debian 6 and two Ubuntu 10.04 servers), a Red Hat server (RHEL 6), an Oracle Enterprise Linux 6 server, 3 Windows Server 2008 domain controllers, an Exchange 2010 server and some other systems on my home network. I wanted to extend my AD capabilities by getting my Debian-based systems to authenticate to my 2008 Domain Controllers (DCs).

    To start, you’ll need to know a couple pieces of information. You’ll need to know which DC is holding the PDC FSMO role. The easiest way to do that is to log onto a DC, fire up AD Users and Computers, right-click on the domain name and then click on Operations Masters. In the window that appears, click on the PDC tab and document the FQDN of the server that currently holds that role.

    Operations Master

    After you identify this system, the next best thing to do is create a DNS entry pointing to your PDC Server. This way if you ever need to decommission your current PDC server, you can just change the DNS record and not have to go back to all your Linux systems to update the system they authenticate to.

    From here, everything you’re going to do, aside from creating new AD users and security groups, will be done at the Linux command line. There are a couple of conf files that we need to configure after installing some software on each of the systems. In one of my future blog posts, I’m (hopefully) going to go over using Chef to distribute configuration files.

    This whole process isn’t all that difficult as long as you have a decent understanding of the services and subsystems that you’re relying on. Here they are:

    • Pluggable Authentication Modules (PAM)
    • Server Message Block (SMB, Samba)
    • WinBIND (part of Samba)
    • Kerberos 5 (By MIT, with Microsoft compatibility hacks)

    So, let’s get some software installed. Below is the EXACT command line that I used on my Ubuntu servers (10.04).

    sudo apt-get install krb5-user libkrb53 krb5-config winbind samba ntp ntpdate nss-updatedb libnss-db libpam-ccreds libnss-ldap ldap-utils


    After installing that software, you’ll want to stop all the services while you configure them:

    sudo /etc/init.d/samba stop
    sudo /etc/init.d/winbind stop
    sudo /etc/init.d/ntp-server stop


    Each server in a Kerberos authentication realm must be assigned a Fully Qualified Domain Name (FQDN) that is both forward- and reverse-resolvable.

    Note: Active Directory depends heavily on DNS, so it is likely that the Active Directory Domain Controller is also running the Microsoft DNS server package. If this is the case, verify that each server has a FQDN assigned to it before performing the tests outlined in this section.

    If the server already has an FQDN assigned to it, test forward and reverse look-up with the following commands:

    nslookup <fqdn of server>
    nslookup <ip address of server>

    The output of the first command should contain the IP address of the server. The output of the second command should contain the FQDN of the server. If this is not the case, Kerberos authentication will not function properly. Next, we’ll be configuring the Kerberos config file, which is located at /etc/krb5.conf. Here’s what mine looks like (make sure to read the comments I put in there):

    [logging]
    default = FILE:/var/log/krb5.log
    kdc = FILE:/var/log/krb5kdc.log

    [libdefaults]
    default_realm = ERDMANOR.COM #Kerberos is CASE sensitive; this must be all UPPERCASE!
    krb4_convert = true
    krb4_get_tickets = false

    [realms]
    ERDMANOR.COM = {
        kdc = #You really only need 1 kerberos domain controller
        kdc = #but in my network there are three, so I listed
        kdc = #all of them in here.
        admin_server = #This should be set to the DC that holds the PDC Role
        default_domain = #
    }
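    Since Kerberos realm names are case-sensitive and, by convention, just the DNS domain in uppercase, here’s a quick sketch for deriving the realm string (the domain is the one from this post):

```shell
# Derive the (uppercase) Kerberos realm from the DNS domain name.
domain="erdmanor.com"
realm=$(echo "$domain" | tr '[:lower:]' '[:upper:]')
echo "default_realm = $realm"   # prints: default_realm = ERDMANOR.COM
```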



    Active Directory, for as long as I can remember, has been time-sensitive to about +/- 5 minutes. You can adjust that window by editing your domain policies (Group Policies (GPOs)), but there’s no real need to do that. Anything outside that window of time and your Domain Controllers will deny any Kerberos ticket requests. This is why you need to make sure to set up your NTP daemon to point at your domain controllers. I recommend setting it up with a DNS name, but you can get by with an IP address. The reason is, if the PDC ever changes, you don’t need to go back to all your old machines and update conf files. Run this command: “sudo nano /etc/ntp.conf”

    # /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

    driftfile /var/lib/ntp/ntp.drift
    statistics loopstats peerstats clockstats
    filegen loopstats file loopstats type day enable
    filegen peerstats file peerstats type day enable
    filegen clockstats file clockstats type day enable

    # Specify one or more NTP servers.

    server #insert your PDC here
    server #secondary DC
    server #third DC
    server #fall back to Ubuntu's NTP
    server #
    server #
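    To illustrate the five-minute window mentioned above, here’s a small sketch of the check the KDC effectively performs (the DC offset below is made up for the example):

```shell
# Kerberos rejects requests when client/DC clock skew exceeds 300 seconds.
max_skew=300
client=$(date +%s)
dc=$(( client + 120 ))      # pretend the DC's clock runs 2 minutes ahead
skew=$(( dc - client ))
[ "$skew" -lt 0 ] && skew=$(( -skew ))
if [ "$skew" -le "$max_skew" ]; then
    echo "within window (${skew}s)"
else
    echo "ticket request would be rejected (${skew}s)"
fi
```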


    So, we’re on our way here. It goes without saying, you’re probably getting a DHCP address from a Domain Controller if you’re already on a Windows network. If you’re setting up a server with a static address, then make sure to set up your DNS nameservers in your /etc/resolv.conf file so that you’re getting DNS from your PDC and any other Domain Controllers which host DNS. I DON’T recommend using your “/etc/hosts” file for this.


    So let’s get to testing! From the command line, issue this command:

    kinit -p username@MYDOMAIN.COM
    #obviously changing to your username and domain name on your network.
    #Notice the UPPERCASE spelling of MYDOMAIN.COM?

    After that command is entered you should be getting prompted for your DOMAIN password. From here just make sure that you’re not getting any errors (which you shouldn’t). If you’re looking to verify that you have a valid ticket, then issue this command:

    klist -e

    Now that we have Kerberos and NTP working properly, we can move on to the next portion of authentication: PAM. If you don’t know anything about PAM, you can safely skip ahead to the configuration portion of this part. But for those of you wanting more of an understanding, here you go; the source I got this information from is VERY good info. Also, verify that your “/etc/skel/” directory is set up properly. You can get creative with this and have some pretty neat options rolled out to all your users if you prefer.

    #I took out all the #comments for this blog, but I HIGHLY recommend that you leave them in!

    So here is what my PAM configuration looks like in /etc/pam.d/:

    # /etc/pam.d/common-account - authorization settings common to all services
    session required skel=/etc/skel/ umask=0022 #VERY IMPORTANT!
    account [success=3 new_authtok_reqd=done default=ignore]
    account [success=2 new_authtok_reqd=done default=ignore]
    account [success=1 default=ignore]
    account requisite
    account required
    account required minimum_uid=1000


    # /etc/pam.d/common-auth - authentication settings common to all services
    # here are the per-package modules (the "Primary" block)
    session required skel=/etc/skel/ umask=0022
    auth [success=6 default=ignore] minimum_uid=1000
    auth [success=5 default=ignore] nullok_secure try_first_pass
    auth [success=4 default=ignore] krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
    auth [success=3 default=ignore] use_first_pass
    auth [success=2 default=ignore] minimum_uid=1000 action=validate use_first_pass
    auth [default=ignore] minimum_uid=1000 action=update
    auth requisite
    auth required
    auth optional minimum_uid=1000 action=store
    auth optional
    auth optional


    # /etc/pam.d/common-password - password-related modules common to all services
    password [success=4 default=ignore] minimum_uid=1000
    password [success=3 default=ignore] obscure use_authtok try_first_pass sha512
    password [success=2 default=ignore] use_authtok try_first_pass
    password [success=1 user_unknown=ignore default=die] use_authtok try_first_pass
    password requisite
    password required
    password optional


    # /etc/pam.d/common-session - session-related modules common to all services
    session [default=1]
    session requisite
    session required
    session optional
    session required skel=/etc/skel/ umask=0022
    session optional minimum_uid=1000
    session required
    session optional
    session optional
    session optional
    session optional nox11


    # /etc/pam.d/common-session-noninteractive - session-related modules
    # common to all non-interactive services
    session [default=1]
    session requisite
    session required
    session optional
    session optional minimum_uid=1000
    session required
    session optional
    session optional
    session optional


    This should be everything you need for PAM to work properly. Now we need to work on Samba. The Samba config is stored at “/etc/samba/smb.conf”. Again, I stripped my Samba config down and made a backup of the original. I don’t want my end users sharing data between themselves; I want them using corporate file shares where I know that the data is backed up. Also, I want them using print servers, not hosting printers from their machines. So this smb.conf is pretty short compared to the original. If you visit the Samba website, you’ll see that they want people to keep this file short and simple; according to the Samba Team, the longer this file is, the more it impacts the performance of the system. Please heed the warnings in your smb.conf as well as the notes I post below:

    # NOTE: Whenever you modify this file you should run the command
    # "testparm" to check that you have not made any basic syntactic
    # errors.
    #======================= Global Settings =======================


    [global]
    security = ads
    realm = MYDOMAIN.COM #Must be UPPER case
    password server = #PDC that we mentioned earlier
    workgroup = MYDOMAIN #This is the NetBIOS name of your Domain
    idmap uid = 10000-20000
    idmap gid = 10000-20000
    winbind enum users = yes
    winbind enum groups = yes
    template homedir = /home/MYDOMAIN/%U #Don't forget to update this directory!
    template shell = /bin/bash #You can use whatever shell you'd like
    client use spnego = yes
    client ntlmv2 auth = yes
    encrypt passwords = yes
    winbind use default domain = yes
    restrict anonymous = 2

    server string = %h server (Samba, Ubuntu)
    dns proxy = no
    log file = /var/log/samba/log.%m
    max log size = 1000
    syslog only = yes
    syslog = 4
    panic action = /usr/share/samba/panic-action %d
    encrypt passwords = true
    passdb backend = tdbsam
    obey pam restrictions = yes
    unix password sync = yes
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    pam password change = yes
    map to guest = bad user
    domain logons = no #Extremely important that this is NO.
    usershare allow guests = yes



    Next we’ll be setting up the “/etc/nsswitch.conf” file. This file does a few things to help communications with your LDAP server (AD in this case) as well as tell your local Linux system where to look for password information.

    When fiddling with /etc/nsswitch.conf, it is best to turn the Name Services Caching Daemon off or you will be confused by cached results. Turn it on afterwards.

    /etc/init.d/nscd stop

    Now edit the nsswitch.conf file:

    # /etc/nsswitch.conf
    passwd: files winbind
    group: files winbind
    shadow: compat
    hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
    networks: files
    protocols: db files
    services: db files
    ethers: db files
    rpc: db files
    netgroup: nis

    And Turn back on your service:

    /etc/init.d/nscd start


    Assuming that all goes well and Kerberos, Winbind and Samba are set up properly, you should be able to join your Linux system to the domain. Due to restrictions in the NetBIOS protocol, the hostname must contain no more than 15 characters. If you see a STATUS_BUFFER_OVERFLOW message in the winbind log, odds are the hostname is invalid. Now would also be a good time to clear whatever cache files, if any, Winbind had previously generated. The Winbind cache is located in /var/lib/samba/. Back up this directory to /var/lib/samba.bak/ and delete all the files in the original. Now you can issue this command:

    sudo net ads join -S MYDOMAIN.COM -U {domain-admin-user}

    A couple of things here.
    First, you may need to change MYDOMAIN.COM to KERBEROS.MYDOMAIN.COM; if it doesn’t work the first way, try the second. Second, {domain-admin-user} MUST be a Domain Admin account in Active Directory, otherwise the join will fail.
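    And since the 15-character NetBIOS limit mentioned above bites people right here, a tiny helper to check a hostname before you attempt the join (the function name and hostnames are just examples):

```shell
# NetBIOS names are limited to 15 characters; longer hostnames can make
# the domain join fail in confusing ways.
check_netbios_len() {
    name="$1"
    if [ "${#name}" -le 15 ]; then
        echo "OK: $name (${#name} chars)"
    else
        echo "TOO LONG for NetBIOS: $name (${#name} chars)"
    fi
}
check_netbios_len "mintdebianvm"
check_netbios_len "this-hostname-is-way-too-long"
```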

    Now, I’ve gotten mixed results here… My Mint 12 and 13 boxes joined and I actually got a “Domain Joined!” message in the shell.

    My Debian 6 machine threw an error:

    steve @ mintdebianvm ~ :) ᛤ>   sudo net ads join -S ERDMANOR.COM -U administrator
    [sudo] password for steve:
    Enter administrator's password:
    kinit succeeded but ads_sasl_spnego_krb5_bind failed: Server not found in Kerberos database
    Failed to join domain: failed to connect to AD: Server not found in Kerberos database

    I haven’t had much time to look into why this is happening, but I can assure you the system joined the domain, the computer account was created in AD and I’m able to SSH to this machine with domain creds… If anyone knows why this is happening, PLEASE contact me! Thanks!


    After your join to the domain is successful, you can start up your services:

    sudo /etc/init.d/samba start
    sudo /etc/init.d/winbind start



    From this point, you should be able to test some queries against the domain:

    getent passwd
    getent shadow
    getent group

    At this point, you should be able to resolve users and groups from the Windows Active Directory domain using getent passwd and getent group. If these commands don’t display your Windows accounts, try to resolve them using wbinfo -u and wbinfo -g. These commands query the Winbind service directly, bypassing the name service switch. If you can resolve users and groups with wbinfo, go back and make sure you configured /etc/nsswitch.conf properly.
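    The reason getent is the right test: it walks the backends listed in /etc/nsswitch.conf in order, so local accounts resolve from “files” and domain accounts from “winbind”. You can see the mechanism with an account that exists on any Linux box:

```shell
# getent consults nsswitch; the root account resolves via the 'files'
# backend (i.e. /etc/passwd), just as domain users resolve via winbind.
getent passwd root | cut -d: -f1,3   # prints: root:0
```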


    Now with EVERYTHING set up properly, you *should* be able to fire up an SSH session to your Linux box and log in with AD credentials. BUT! Your domain users are NOT going to be able to “sudo” any commands. For the sake of security, you don’t want ALL your domain users to be able to sudo commands, so what I did is create a domain security group; mine is named “linux-sudo”. Then I added only the users I want to be able to sudo commands to that group. Then I edited my “sudoers” file to include the domain security group “linux-sudo”. So make sure to edit your “/etc/sudoers” file and add this line:

    %linux-sudo     ALL=(ALL:ALL) ALL

    Now, I’m able to log into my Debian, Mint and Ubuntu Linux systems with Domain Credentials! 🙂

    EDIT: If you’re looking for information regarding this entire process on a Red Hat system (RHEL 5 or 6), please refer to this guide:

    Here are all the sites that I used in the making of this blog:


    Open Source can save you millions: Part 1, the intro…

    After dealing with some vendors in the last couple of years, I’ve come to realize one major theme keeps rearing its ugly head: vendor salespeople will tell you anything to get you to buy their product or service, regardless of whether their product/service is the best solution at the best price out there.

    Now, wait just a minute. I’m not going to demonize salesmen or be some hippie tree hugger and say, “don’t buy commercial products, man!”. Some companies and products are pretty damn good. Some are definitely not. Some are ridiculously expensive; some are not. But how do you know which ones to actually spend money on if your company, or your personal outlook on life, is telling you to just listen to a vendor and buy his products? When was the last time you went to your grey beards and asked them if they have a solution to your problem?

    Well, I’m not a grey beard, but I am a big proponent of the “DIY” projects. I try to do things around my house all the time, and that includes my home network. I also carry that philosophy into work.

    This is a multi-part blog that is going to attempt to outline why I’d rather spend $100,000/yr on a Salary for a good worker than to spend that same amount on some appliance to install in the Data Center. Here we’ll be talking about replacing products from companies like CA, Centrify and others with some already built-in modules in your Linux/Unix environments that many people don’t even know they have. We’ll talk about that topic in the next blog though, because I really want to focus on the fact that good Security and IT products can be difficult to come by. And sometimes you have a solution to your problem inside your organization already, but don’t know it yet. Don’t automatically think that if there is a problem, your solution is to buy another product or service from your vendor supply chain. Stop throwing money at the solution hoping it will work out!

    Here’s what I started with. There was a large need to get all of our Linux/Unix environment to authenticate to Active Directory (AD). Just like the VAST majority of companies out there, we are largely a Microsoft shop. News flash: almost everyone is. And that’s because AD is the best at what it does; no one comes close. Same for Microsoft Exchange; I beg you to tell me who makes a product that comes anywhere close to what Exchange does. Regardless, we need to auth to AD from Linux/Unix, and the costs surrounding 3rd-party vendors are ridiculous. Now I know people need to make money, but over $100 grand every couple of years for software and support is insane for such a simple task as this. I talked to a co-worker and he led me down the path of, “Why pay to do it when you can do it for, well, basically free?”

    Free is a relative term, right? I mean, “there is no such thing as a free lunch.” So you’re paying my salary, and the salary of a Linux/Unix admin, and whoever else, but weren’t you already paying those salaries? And how much does it cost your company to have a (most likely well paid) Linux/Unix admin sitting around all day doing account provisioning, password resets and setting up users with the specific access they need? Shouldn’t your account provisioning team be doing that? The costs of that are pretty high. According to a Gartner study it could cost up to $600,000/yr just resetting passwords on 300 Linux/Unix systems. Now, that number is pretty high. They base it on $17 per password reset X 300 servers X 30 accounts per server, which is $153,000 per round; at four rounds a year, that’s $612 grand.

    Whether or not you’re doing that many password resets is irrelevant. Let’s say a password reset costs $10 in time, and let’s say you’re resetting 50 passwords a week. You’re still spending over $25,000 a year on password resets! And that doesn’t even account for user account management, managing the rest of your server fleet, managing all the “passwd” and “shadow” files on those servers, etc… So in reality, are you going to spend $125 grand on a solution to save $25 grand? I don’t think so. But how about spending $0 to save $25 grand? 🙂
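    For what it’s worth, the arithmetic behind both estimates checks out; here it is spelled out:

```shell
# Gartner-style estimate: $17/reset x 300 servers x 30 accounts, 4x a year.
gartner_round=$(( 17 * 300 * 30 ))
gartner_year=$(( gartner_round * 4 ))
# The more modest estimate: $10/reset x 50 resets/week x 52 weeks.
modest_year=$(( 10 * 50 * 52 ))
echo "per round: \$$gartner_round"   # prints: per round: $153000
echo "per year:  \$$gartner_year"    # prints: per year:  $612000
echo "modest:    \$$modest_year"     # prints: modest:    $26000
```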

    So, at the end of the day, all I’m trying to convey here is that you need to rely on your employees. If you give them the tools to succeed, allow them the latitude to innovate and treat your business like a small business, I promise you that you’ll get cost savings and better service.


    Linuxy Stuff: DavMail

    So I actually have a few things I’m working on here, but I’ll focus this on just one topic. In talking with a coworker a couple of weeks ago, he introduced me to some great software that acts as a proxy to Microsoft Exchange. I’ve tested it with Exchange 2010, but I’m sure it works with previous releases as well. The name of the software is DavMail and it works pretty damn well.

    I do hate POP mail, since you can only sync the Inbox folder. So if your already existing Exchange account has multiple folders setup and rules moving mail around, have fun with that. For sanity, I separate my mail quite a bit. For projects and certain people I get a lot of mail from, I create folders. For people or departments I don’t get much mail from, I make folders for them. It makes searching and archiving much easier.

    So after I tested out the POP connector, I promptly switched to IMAP (not that I like IMAP any better, but it can sync multiple folders). The sync still isn’t blazing fast, and it’s not a “push” service either. Your system will check every 10 minutes (that setting is configurable) and allow your mail client to download the mail from DavMail. The initial sync of every folder is quite lengthy, but once everything is set up it’s pretty nice after that.

    The main shining point here is for people who use mail clients and phones that don’t support Exchange integration: Evolution, Thunderbird, some older Android phones, etc… That’s what I’m using it for (Thunderbird). I have multiple VMs that I’m in all day long, and I can’t keep switching back and forth with my Windows VM running Office 2010. So when I’m in my Linux Mint VM, I can still get mail updates while I’m working.

    At the end of the day, I’ll be honest, I’m not sure how happy I’d be exposing this to the Internet to publish for mobile phones. By all means, try connecting your phone directly to the Exchange server first. There’s no need to throw more middleware out there and open up more inbound ports to your organization (even if you’re just a home user).


    Linux Apache2: Mod_rewrite for WordPress

    So I’ve been having all kinds of issues getting WordPress “permalinks” working. I could’ve sworn that I had my “.htaccess” file set up properly, my WordPress install seemed to be working just fine, and everything else on the server worked. So what to do?




    First off, if you’re like me, you already installed Apache like this:

    apt-get install apache2


    You should already have Apache’s mod_rewrite module on your box. If so, it will be found in “/usr/lib/apache2/modules”


    Now, go into your “mods-enabled” directory and create a rewrite file (on Debian/Ubuntu, running “sudo a2enmod rewrite” accomplishes the same thing):

    cd /etc/apache2/mods-enabled
    sudo touch rewrite.load
    sudo nano rewrite.load


    Now paste this following line, then save and close this file:

    LoadModule rewrite_module /usr/lib/apache2/modules/mod_rewrite.so


    Now we need to make sure that our Apache config is setup properly:

    sudo nano /etc/apache2/sites-available/default


    Find the following:

    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Order allow,deny
    allow from all


    Now, change the “AllowOverride” from “None” to “All”

    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all


    and finally restart Apache:

    /etc/init.d/apache2 restart


    Now you can go into your WordPress administration area and change your “Permalinks” to be whatever you’d like them to be! 🙂
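    For reference (since the post never shows it), this is the standard rule block WordPress writes into “.htaccess” once permalinks are enabled — assuming WordPress lives in the web root; adjust RewriteBase if it’s in a subdirectory:

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```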




    Debian Minimal Install: The base for all operations

    This blog is really just a place holder for many blogs to be written in the future. In some of the future “How-To” blogs I plan on writing, I’m going to need to ensure that we start with a perfectly clean install of Debian. So from here we’ll start from a completely fresh install of a Debian 6 (Squeeze) OS.


    For this you’ll need the newest version of Virtual Box installed on your machine. You’ll also want to download the Small Debian ISO from Debian’s Download page.


    Let’s start with getting your Debian server built and running. Start with getting a Virtual Machine up and running. We’ll start with the basics of provisioning a Virtual Machine in Virtual Box:
    Name your Server


    Allocate some RAM to it:


    Create a Virtual Hard Drive:


    I normally stay with Virtual Box VDI disk images:


    Dynamic Allocation is sufficient:


    Store it in your preferred location. I store mine on a separate Solid State Disk:


    After Completing that, right click on your new virtual machine, and click on “Settings…”


    Get rid of your floppy drive and make sure your RAM and CPU are setup properly:


    Add the Debian ISO to the CD Rom Drive:


    The easiest thing to do is bridge the network adapter to a physical wired Ethernet port.


    Go ahead and start your Virtual Machine, and when you get to the boot screen just press “Enter”:

    Select your language:


    Keyboard Layout:

    Configure the hostname of your new Debian server:

    Setup the Root password:

    Setup your name and user account, password, etc…:

    What Time zone are you in?

    Just for the sake of simplicity, use the whole disk:

    Use the Virtual disk you just made:

    Again, for simplicity, all files in the same partition.

    Finish partitioning:

    I like using MIT’s mirror, but choose whatever one you want:

    You shouldn’t have a proxy, but if you do, fill it out here:

    I normally don’t participate in anonymous surveys, but you can if you want:


    Setup the GRUB boot loader:

    Finish the install, hit enter and watch your new system boot up!

    Watch until you get to the login prompt.

    Go ahead and log into your machine with the ROOT user, and the password you setup earlier.

    So let’s get a static address on this thing by editing this file: /etc/network/interfaces

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    #allow-hotplug eth0
    #iface eth0 inet dhcp
    auto eth0
    iface eth0 inet static
        # example addresses -- substitute values that fit your network
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1

    And you can restart networking with this:

    /etc/init.d/networking restart


    You’ll probably want to sudo from this user, so if that’s the case:

    apt-get install sudo


    After that software installs, you can edit your sudoers file like this:

    # nano /etc/sudoers


    Be careful when editing the sudoers file: if you break it, you can lock yourself out of sudo entirely, which is why visudo (it syntax-checks before saving) is the safer way to open it. The edit itself is easy. Just copy the line where root is and paste it right below, then change the name root to your username. Like this:

    # User privilege specification
    root ALL=(ALL) ALL
    steve ALL=(ALL) ALL
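

    That “copy the root line and change the name” step is mechanical enough to script. Here’s a sketch of the idea run against a scratch file — never sed your real /etc/sudoers directly; use visudo for that:

```shell
# Demonstrate the sudoers edit against a throwaway copy, not the real file.
# "steve" matches the user account created during the install.
demo=/tmp/sudoers.demo
printf '# User privilege specification\nroot ALL=(ALL) ALL\n' > "$demo"
# Copy the root line, substituting our username, and append it below.
newline=$(sed -n 's/^root /steve /p' "$demo")
echo "$newline" >> "$demo"
cat "$demo"
```
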



    I like to spice up the environment a little bit. Personalize it, ya know?

    So, what I do is edit the ~/.bashrc file and add in some code.

    nano ~/.bashrc


    Then you can add in some code that will make your life a bunch easier:
    (if there is already code in your .bashrc file, just append this to the bottom of the file!)

    #                               #
    #       BashRC File created by  #
    #           Steve Erdman        #
    #                               #
    #                               #
    #       Edited on Dec 13 2012   #
    #                               #


    #[Color Prompt]
    RED="\[\033[0;31m\]"
    GREEN="\[\033[0;32m\]"
    YELLOW="\[\033[0;33m\]"
    LCYAN="\[\033[1;36m\]"
    NORMAL="\[\033[0m\]"
    RESET="\[\017\]"

    #[Good Command]
    SMILEY="${GREEN}:)${NORMAL}"

    #[Bad Command]
    FROWNY="${RED}:(${NORMAL}"

    #[Command Judge]
    SELECT="if [ \$? = 0 ]; then echo \"${SMILEY}\"; else echo \"${FROWNY}\"; fi"

    #[Working PS1 output]
    PS1="${RESET}${LCYAN}\u ${RED}@ ${LCYAN}\h ${YELLOW}~ \`${SELECT}\` ${YELLOW}ᛤ> ${GREEN} ${NORMAL} "


    alias ll="ls -alh"
    alias ..='cd ..'
    alias ...='cd ../..'
    alias dfah='df -ah'
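

    The SMILEY/FROWNY trick above hinges on one thing: $? holds the exit status of the last command, and zero means success. Stripped of the color codes, the logic boils down to this:

```shell
# Minimal, color-free version of the prompt's exit-status "judge".
SMILEY=":)"
FROWNY=":("
judge() {
    # $1 stands in for $? here, so the logic is easy to exercise directly.
    if [ "$1" -eq 0 ]; then echo "$SMILEY"; else echo "$FROWNY"; fi
}
true;  judge $?     # prints :)
false; judge $?     # prints :(
```
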



    Next we’ll get the SSH Server installed so we can get some remote access to this server from our Linux Desktop.

    apt-get install ssh openssh-server openssh-client


    When that’s done, test out connecting from your local machine to this virtual host using:

    ssh steve@{server-IP-Address}

    Now we can setup SSH keys on this system so that you can easily log in from your main Linux Desktop machine.


    So, on your local machine (NOT THE SERVER!), navigate to your home folder. From there, cd into your .ssh directory and we’ll create your SSH keys.

    cd ~/.ssh/
    ssh-keygen -t rsa
    {save as default file, press enter}
    {enter your own password and hit enter}
    {confirm your password}


    Once this is done, we’ll copy your public key over so the server keeps you authenticated:

    cat ~/.ssh/id_rsa.pub | ssh steve@{server-IP-Address} "cat - >> ~/.ssh/authorized_keys"
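

    Under the hood, that one-liner is just appending a line of text to a file on the server and nothing more. Here are the mechanics, simulated locally in a throwaway directory with a fake key string so nothing touches a real server:

```shell
# Simulate the server-side half of the key install in a scratch directory.
fake_home=/tmp/keydemo
mkdir -p "$fake_home/.ssh"
chmod 700 "$fake_home/.ssh"
# A fake public key standing in for the real contents of ~/.ssh/id_rsa.pub.
pubkey="ssh-rsa AAAAB3NzaFAKEKEYDATA steve@desktop"
# Append the key line...
echo "$pubkey" >> "$fake_home/.ssh/authorized_keys"
# ...and tighten permissions, since sshd refuses loosely-permitted key files.
chmod 600 "$fake_home/.ssh/authorized_keys"
cat "$fake_home/.ssh/authorized_keys"
```

    As a shortcut, ssh-copy-id steve@{server-IP-Address} does all of this for you in one shot.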


    And now you can test your new ssh keys by doing this:

    ssh steve@{server-IP-Address}


    I know this Blog is kinda dumb, but you’d be surprised how much of the future Blogs will be based off of this point.


    If I ever start a blog saying, “From a fresh Debian Install…” you’ll know you should start here! 🙂




