AT&T u-Verse Static IP work around with pfSense

First off, I’d like to give AT&T an honorable mention (sarcasm) for using the worst P.O.S. garbage DSL modems on the planet: 2WIRE. These things are ridiculous. You’d think that if a provider were able to route a /28 subnet to your home/business, they’d be able to properly manage that subnet through their “firewall”, or whatever you want to call it. The way this normally works is by routing a network range to your device. But AT&T and 2WIRE ensure that every public static IP address you have must carry a unique MAC address and look like a different device altogether. This is asinine.

So, with the help of my business partner, we’ve come up with a solution for getting a set of static IP addresses to work so that you can host services on AT&T u-Verse. We accomplished this through a free, open-source operating system named “pfSense”. I’m sure there are other systems out there we could have used, or we could have just done it in Linux, but pfSense is really robust and has a nice interface. So that’s what we went with.

Additionally, I’m sure not everyone and their mother has an HP DL380 running in their basement, but… welcome to the Erdmanor. I have a DL380 in my basement. So what we’ve done is virtualize the firewall. We’re running pfSense in a virtual machine on the DL380, which is running ESXi 5.5. I know ESXi 6.0 has been out for a few months now, but to be honest, I’m just too damn lazy to upgrade my box.

Anyways, here’s how we configured the virtual firewall. In ESX, we provisioned the system with 8 network adapters, a 10GB HDD, 2GB RAM, and 1 virtual CPU. From there we gave the VM access to the three different network segments (DMZ, Internal, Outside) and created the interfaces within pfSense. Then we programmed the AT&T gateway to use the external addresses it had provided, making sure that the proper interfaces and MAC addresses lined up between the ESX server, the AT&T gateway and the pfSense console. Also, in the AT&T gateway, we set the system to DMZplus Mode, which you can read about in the screenshots below.

pfSense1

pfSense2

pfSense3

att-config0

att-config1

att-config2

att-config3



Now that our AT&T gateway is properly forwarding External IP traffic to the proper interfaces on our pfSense firewall, we can go through and create all the inbound NATs, firewall rules and network security that we wish to have.

If you have any further questions on how to set this up, just ask!

Thanks!






Debian Backups, the Command Line Way…

I’ve been wanting to write a blog on this for a long time, since I’ve actually had this backup method running in my environment for years. It’s super easy to set up, and while (thank god) I’ve never had to do a full restore, I have been able to go back and recover individual files from my backups. For the environment, you’ll need at least one Linux box that needs backing up, and at least one NAS or other file storage server running an SSH server. I perform all my backups to online disk storage based on FreeNAS. There are plenty of NAS environments out there, and I’m not saying FreeNAS is the best or the worst, but I like it and it works for me. It works extremely well with Linux, Windows and Mac OS X.

There are two parts to this:

  1. manual backups
  2. automated backups

Let’s start with the manual backups; once the manual backup is working, we can easily turn it into a script and run it from cron.


    First, we need to specify the directories we don’t want to back up, in a file that is accessible to root. Let’s list the directories in “/” first.

    steve @ steve-G75VX ~ :) ##   ll /
    total 18M
    drwxr-xr-x  25 root   root 4.0K Oct 22 14:54 ./
    drwxr-xr-x  25 root   root 4.0K Oct 22 14:54 ../
    drwxr-xr-x   2 root   root 4.0K Aug 14 02:03 bin/
    drwxr-xr-x   4 root   root 3.0K Oct  3 11:39 boot/
    drwxrwxr-x   2 root   root 4.0K May 21 11:52 cdrom/
    -rw-------   1 root   root  18M Oct  3 11:40 core
    drwxr-xr-x  24 root   root 4.8K Oct 31 12:38 dev/
    drwxr-xr-x 148 root   root  12K Oct 27 20:37 etc/
    drwxr-xr-x   3 root   root 4.0K May 21 11:53 home/
    lrwxrwxrwx   1 root   root   33 Aug 14 02:06 initrd.img -> boot/initrd.img-3.19.0-25-generic
    lrwxrwxrwx   1 root   root   33 Jul 10 08:56 initrd.img.old -> boot/initrd.img-3.19.0-22-generic
    drwxr-xr-x  26 root   root 4.0K Oct 13 13:41 lib/
    drwxr-xr-x   2 root   root 4.0K May 21 12:41 lib32/
    drwxr-xr-x   2 root   root 4.0K Apr 22  2015 lib64/
    drwx------   2 root   root  16K May 21 11:47 lost+found/
    drwxr-xr-x   3 root   root 4.0K May 21 12:01 media/
    drwxr-xr-x   2 root   root 4.0K Apr 17  2015 mnt/
    drwxr-xr-x   6 root   root 4.0K Oct 20 11:28 opt/
    dr-xr-xr-x 283 root   root    0 Oct 21 20:30 proc/
    drwx------   4 root   root 4.0K Oct 27 16:57 root/
    drwxr-xr-x  30 root   root 1.1K Oct 27 20:50 run/
    drwxr-xr-x   2 root   root  12K Aug 14 02:03 sbin/
    drwxr-xr-x   2 root   root 4.0K Apr 22  2015 srv/
    dr-xr-xr-x  13 root   root    0 Oct 22 14:55 sys/
    drwxrwxrwx   2 nobody root 4.0K Oct 22 17:55 tftp/
    drwxrwxrwt  18 root   root 4.0K Nov  1 15:17 tmp/
    drwxr-xr-x  11 root   root 4.0K May 21 12:41 usr/
    drwxr-xr-x  13 root   root 4.0K Apr 22  2015 var/
    lrwxrwxrwx   1 root   root   30 Aug 14 02:06 vmlinuz -> boot/vmlinuz-3.19.0-25-generic
    lrwxrwxrwx   1 root   root   30 Jul 10 08:56 vmlinuz.old -> boot/vmlinuz-3.19.0-22-generic


    So, based on this, we’ll exclude like this:

    steve @ steve-G75VX ~ :) ##   sudo mkdir /backups
    [sudo] password for steve:
    steve @ steve-G75VX ~ :) ##   sudo touch /backups/exclude.list
    steve @ steve-G75VX ~ :) ##   sudo nano /backups/exclude.list
    steve @ steve-G75VX ~ :) ##  

    /cdrom
    /dev
    /lost+found
    /proc
    /run
    /sys
    /tmp

    (Ctrl+x to quit, then y to save)
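If you’d rather skip the editor entirely, a heredoc writes the same list in one shot. Shown here against a scratch directory created with mktemp so it’s safe to try; on the real system you’d write to /backups/exclude.list with sudo.

```shell
# Scratch directory standing in for /backups while experimenting
demo=$(mktemp -d)

cat > "$demo/exclude.list" <<'EOF'
/cdrom
/dev
/lost+found
/proc
/run
/sys
/tmp
EOF

cat "$demo/exclude.list"
```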


    Now that we have our directory and exclude list set up, we need to make sure rsync is installed on our system.

    steve @ steve-G75VX ~ :) ##   sudo apt-get update
    ...
    ...
    Fetched 1,743 kB in 21s (79.7 kB/s)
    Reading package lists... Done
    steve @ steve-G75VX ~ :) ##   sudo apt-get install rsync
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    rsync is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 38 not upgraded.
    steve @ steve-G75VX ~ :) ##


    Now that we have rsync installed and our backup exclusions defined, let’s get our backups started.

    First, edit your .bashrc file in your home directory and add this line:

    alias backupall='sudo rsync -athvz --delete --exclude-from=/backups/exclude.list / steve@1.1.1.1:/mnt/Backups/laptop/'


    “What does all this do?” you might ask… well, it’s quite simple really.

    First, we create an alias for your shell named “backupall”, because we’ll be performing full system backups from here.

    Next, we call “rsync” to run as root, and ask it to run with the switches -a, -t, -h, -v and -z.

  • -a = run in archive mode, which equals -rlptgoD (no -H, -A, -X)
  • -t = preserves modification times on your files (already implied by -a, but harmless to state explicitly)
  • -h = outputs numbers in a human-readable format
  • -v = runs verbosely
  • -z = compresses file data during the transfer
  • And lastly, “--delete” means (quoting http://linux.die.net/man/1/rsync): “This tells rsync to delete extraneous files from the receiving side (ones that aren’t on the sending side), but only for the directories that are being synchronized. You must have asked rsync to send the whole directory (e.g. "dir" or "dir/") without using a wildcard for the directory’s contents (e.g. "dir/*") since the wildcard is expanded by the shell and rsync thus gets a request to transfer individual files, not the files’ parent directory. Files that are excluded from the transfer are also excluded from being deleted unless you use the --delete-excluded option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section).”

    Next is the “/”, the source: we’re backing up everything from the root of the filesystem.

    Lastly, we specify the destination. Since we’re doing rsync over SSH, the destination uses the same user@host:path form you’d use with scp.


    Now test-run your backup. I’ve run mine before, so my update is pretty quick. But the first run is going to back up your whole system, so expect it to take a while.

    steve @ steve-G75VX ~ :( ᛤ>   backupall
    steve@1.1.1.1's password:
    sending incremental file list
    ./
    var/lib/mysql/blog/wp_AnalyticStats.MYD
    var/lib/mysql/blog/wp_AnalyticStats.MYI
    var/lib/mysql/blog/wp_options.MYD
    var/lib/mysql/blog/wp_options.MYI
    var/lib/mysql/blog/wp_postmeta.MYD
    var/lib/mysql/blog/wp_postmeta.MYI
    var/lib/sudo/steve/0
    var/log/auth.log
    var/log/apache2/access.log
    var/log/apache2/error.log

    sent 1.09M bytes  received 50.77K bytes  58.56K bytes/sec
    total size is 1.91G  speedup is 1673.17
    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
    steve @ steve-G75VX ~ :( ᛤ>



    Now we need to create our script, and make it executable.

    root @ steve-G75VX ~ :) ##   nano /backups/backupall
    root @ steve-G75VX ~ :) ##   chmod +x /backups/backupall
    root @ steve-G75VX ~ :) ##   ll /backups/backupall
    -rwxr-xr-x 1 root root 96 Nov  1 17:02 /backups/backupall*
    root @ steve-G75VX ~ :) ##


    I added these two lines to the backup file (a shebang, then the full-system rsync command, using the exclude list we created earlier):

    #!/bin/bash
    sudo rsync -athvz --delete --exclude-from=/backups/exclude.list / steve@1.1.1.1:/mnt/Backups/laptop/



    This looks pretty good! (The exit code 23 warning just means some files could not be transferred, usually in-use or permission-restricted files, which is common when backing up a live system.) Now that we have a full backup of our machine, let’s get this set up in cron.

    steve @ steve-G75VX ~ :) ##   sudo su
    root @ steve-G75VX ~ :) ##   crontab -l
    no crontab for root
    root @ steve-G75VX ~ :( ##   crontab -e
    no crontab for root - using an empty one

    Select an editor.  To change later, run 'select-editor'.
      1. /bin/ed
      2. /bin/nano        <---- easiest
      3. /usr/bin/vim.tiny

    Choose 1-3 [2]: 2
    crontab: installing new crontab
    root @ steve-G75VX ~ :) ##


    The line that I added to CRON was this:

    0 3 * * * /backups/backupall >/dev/null 2>&1


    This basically states that every day at 3am, this script will be run.
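For reference, the five fields of a crontab line break down like this (a comment-only fragment for reading, not running):

```shell
# ┌───────── minute        (0 = on the hour)
# │ ┌─────── hour          (3 = 3 a.m.)
# │ │ ┌───── day of month  (* = every day)
# │ │ │ ┌─── month         (* = every month)
# │ │ │ │ ┌─ day of week   (* = every day of the week)
# │ │ │ │ │
  0 3 * * * /backups/backupall >/dev/null 2>&1
```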


    From here we need to make sure our local system can perform password-less logon to the SSH server. To do that we’ll be working off of a prior blog I wrote on SSH Keys, here: Using SSH Keys to simplify logins to remote systems.

    You’ll want to test that your system can SSH to your remote system without entering a password. As long as that works, we’re good to go!
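The key setup from that post boils down to generating a passphrase-less key pair and copying the public half to the remote box. Here’s the generation step, run against a scratch directory so it won’t touch your real ~/.ssh; the copy step needs your actual NAS, so it’s shown commented out (1.1.1.1 is this post’s example address):

```shell
keydir=$(mktemp -d)

# Passphrase-less key (-N '') so cron can use it unattended
ssh-keygen -q -t rsa -b 4096 -N '' -f "$keydir/id_rsa"

ls "$keydir"                      # id_rsa and id_rsa.pub

# On the real system (writes to ~/.ssh and the remote authorized_keys):
# ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
# ssh-copy-id steve@1.1.1.1
# ssh steve@1.1.1.1 true          # should return with no password prompt
```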

    That’s it! It’s that simple!



    I have run into issues on some machines where SSH keys don’t work. I haven’t had the time to troubleshoot why, so I worked out a different way to make backups run without using SSH keys. The downside is that this is MUCH less secure, and I really don’t recommend running it in a production setting. But for home or non-business use, you’re probably just fine.

    So to do this, we’re going to use the “sshpass” package. It’s out there for Debian and Ubuntu, so I’m sure it’s available for other Linux/Unix systems as well.

    root @ steve-G75VX ~ :) ##   sudo apt-get install sshpass
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    The following NEW packages will be installed:
      sshpass
    0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
    Need to get 10.5 kB of archives.
    After this operation, 56.3 kB of additional disk space will be used.
    Get:1 http://us.archive.ubuntu.com/ubuntu/ vivid/universe sshpass amd64 1.05-1 [10.5 kB]
    Fetched 10.5 kB in 0s (65.3 kB/s)  
    Selecting previously unselected package sshpass.
    (Reading database ... 258807 files and directories currently installed.)
    Preparing to unpack .../sshpass_1.05-1_amd64.deb ...
    Unpacking sshpass (1.05-1) ...
    Processing triggers for man-db (2.7.0.2-5) ...
    Setting up sshpass (1.05-1) ...
    root @ steve-G75VX ~ :) ##


    Go ahead and test logging into your NAS box, or any box really, with this. The idea is that, when you’re scripting you need to logon to remote systems without a password. If you can’t use SSH keys, then this is your next best bet. Create a file in “root’s” home dir and name it whatever you want. I named mine, “backup.dat”. It must contain only the password you use to log into your remote machine, on one line, all by itself.

    root @ steve-G75VX ~ :) ##   nano ~/backup.dat
    root @ steve-G75VX ~ :) ##   chmod 600 backup.dat


    You call “sshpass” with -f and the password file, then the path to your “ssh” binary, -p and the port number (22 is the SSH default), followed by the login you use (make sure it’s in the format “user@machine-ip”).

    root @ steve-G75VX ~ :) ##   sshpass -f /root/backup.dat /usr/bin/ssh -p 22 steve@1.1.1.1
    Last login: Sun Nov  1 17:22:08 2015 from 1.1.1.2
    FreeBSD 9.2-RELEASE (FREENAS.amd64) #0 r+2315ea3: Fri Dec 20 12:48:50 PST 2013

        FreeNAS (c) 2009-2013, The FreeNAS Development Team
        All rights reserved.
        FreeNAS is released under the modified BSD license.

        For more information, documentation, help or support, go here:
        http://freenas.org
    Welcome to FreeNAS
    [steve@freenas ~]$ exit
    logout
    Connection to 1.1.1.1 closed.
    root @ steve-G75VX ~ :) ##


    Okay, now that we’ve tested this and know it’s working, let’s modify our script to use “sshpass”.

    root @ steve-G75VX ~ :) ##   /usr/bin/rsync -athvz --delete --rsh="/usr/bin/sshpass -f /root/backup.dat ssh -o StrictHostKeyChecking=no -l YourUserN@me" /home/steve steve@1.1.1.1:/mnt/Backups/laptop/


    Now test to make sure the script is working (as soon as you see the incremental file list being sent, you know it’s working properly):

    root @ steve-G75VX ~ :) ##   /usr/bin/rsync -athvz --delete --rsh="/usr/bin/sshpass -f /root/backup.dat ssh -o StrictHostKeyChecking=no -l steve" /home/steve steve@1.1.1.1:/mnt/Backups/laptop
    sending incremental file list
    ^Crsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(632) [sender=3.1.1]
    root @ steve-G75VX ~ :) ##
    root @ steve-G75VX ~ :) ##
    root @ steve-G75VX ~ :) ##   /backups/backupall
    sending incremental file list
    steve/.cache/google-chrome/Default/Cache/
    ^Crsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(632) [sender=3.1.1]
    root @ steve-G75VX ~ :( ##

    Success!







    http://linux.die.net/man/1/rsync
    https://www.debian-administration.org/article/56/Command_scheduling_with_cron


    Bash Shell Customizing

    I’ve had a request for a blog on how to customize the bash shell. I’ll put more into this in the future, but for now, here is the actual code from my .bashrc file.

    Basically, I like to have my command line environment customized to my liking, just like any other user/administrator. So what I’ve done here is added some color to my shell, as well as added some nice, helpful and easy to remember aliases that really save time in typing.

    Here is a screenshot of what my shell looks like:

    Screenshot

    #PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games"
    PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    # mint-fortune - If you like the fortunes, keep this on, otherwise delete it.
    # you will need to have Mint Fortunes installed on your system for this to work
    /usr/bin/mint-fortune

    #------------------------------------------------------------------------------------------------------
    #------------------------------------------------------------------------------------------------------


    #[Color Prompt] This adds color prompt to your shell.
    #    I've gone through and figured out a whole bunch
    #    of colors so you can go ahead and customize to
    #    your heart's content.

    force_color_prompt=yes

    #[Variables]
    RESET="\[\017\]"
    NORMAL="\[\033[;m\]"
    LGREEN="\[\033[1;32m\]"
    LGREEN0="\[\033[0;32m\]"
    LBLUE="\[\033[1;34m\]"
    LCYAN="\[\033[1;36m\]"
    LRED="\[\033[1;31m\]"
    LPURPLE="\[\033[1;35m\]"
    BLACK="\[\033[0;30m\]"
    BLUE="\[\033[0;34m\]"
    GREEN="\[\033[0;32m\]"
    CYAN="\[\033[0;36m\]"
    PURPLE="\[\033[0;35m\]"
    BROWN="\[\033[0;33m\]"
    LGRAY="\[\033[0;37m\]"
    DGREY="\[\033[01;30m\]"
    RED="\[\033[0;31m\]"
    YELLOW="\[\033[01;33m\]"
    WHITE="\[\033[01;37m\]"


    #[Good Command]
    SMILEY="${GREEN}:)${NORMAL}"

    #[Bad Command]
    FROWNY="${RED}:(${NORMAL}"

    #[Command Judge]
    SELECT="if [ \$? = 0 ]; then echo \"${SMILEY}\"; else echo \"${FROWNY}\"; fi"

    #[Working PS1 output]
    PS1="${RESET}${LCYAN}\u ${RED}@ ${LCYAN}\h: ${YELLOW}\w\a~ \`${SELECT}\` ${YELLOW}\$ ${GREEN} ${NORMAL} "


    #------------------------------------------------------------------------------------------------------
    #------------------------------------------------------------------------------------------------------


    #[Aliases]
    alias du="du -bchsS"
    alias ll="ls -alhF --color=auto"
    alias ..='cd ..'
    alias ...='cd ../..'
    alias dfah='df -ah'
    alias mount='mount |column -t'
    alias now='date +"%T"'
    alias nowdate='date +"%d-%m-%Y"'
    alias vlspci='sudo lspci -vvnn'
    alias vi=vim
    alias disks='sudo blkid && sudo fdisk -l'

    alias svi='sudo vi'
    alias vis='vim "+set si"'
    alias edit='vim'
    alias ports='netstat -tulanp'
    alias apt-get="sudo apt-get"
    alias updatey="sudo apt-get --yes"
    alias update='sudo apt-get update && sudo apt-get upgrade'
    alias meminfo='free -m -l -t'
    alias psmem='ps auxf | sort -nr -k 4'
    alias psmem10='ps auxf | sort -nr -k 4 | head -10'
    alias pscpu='ps auxf | sort -nr -k 3'
    alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
    alias cpuinfo='lscpu'
    ##alias cpuinfo='less /proc/cpuinfo' ##
    alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'
    alias reboot='sudo /sbin/reboot'
    alias poweroff='sudo /sbin/poweroff'
    alias halt='sudo /sbin/halt'
    alias shutdown='sudo /sbin/shutdown'
    alias tftpstuff='sudo chmod 777 /tftp/* && sudo chown root:root /tftp/*'


    #------------------------------------------------------------------------------------------------------
    #------------------------------------------------------------------------------------------------------

    #[Backups] This section is where I have my backups defined.
    #    For more information, please check out my "Backups"
    #    blog. You can find it here:
    #    http://www.erdmanor.com/blog/debian-backups-command-line-way/

    alias backupall='sudo rsync -athvz --delete --exclude-from=/backups/exclude.list / /backups/computername/path/to/save/backups'


    Backing up Cisco Configurations for Routers, Switches and Firewalls

    I will add more about this when I have time. Until then, you should be able to just install python, paramiko and pexpect and run this script as-is (obviously changing the variables).

    This should give you all the software you need:

    sudo apt-get update
    sudo apt-get install python python-pexpect python-paramiko

    I plan on GREATLY increasing the abilities of this script: adding additional functionality, plus a bash script that can parse the configs and perform much deeper backups of ASAs.

    I have not tested this on Routers and Switches. I can tell you that the production 5520 HA Pair that I ran this script against was running “Cisco Adaptive Security Appliance Software Version 8.4(2)160”. Theoretically, I would believe that this would work with all 8.4 code and up, including the 9.x versions that are out as of the writing of this blog.

    Here you go! A fully scripted interrogation of a Cisco ASA 5520 that can be set up to run from a cron job.

    #!/usr/bin/python
    import paramiko, pexpect, hashlib, StringIO, re, getpass, os, time, ConfigParser, sys, datetime, cmd, argparse



    ### DEFINE VARIABLES

    currentdate="10-16-2014"
    hostnamesfile='vpnhosts'
    asahost="192.168.222.1"
    tacacsuser='testuser'
    userpass='Password1'
    enpass='Password2'
    currentipaddress="192.168.222.1"
    currenthostname="TESTASA"


    #dummy=sys.argv[0]
    #currentdate=sys.argv[1]
    #currentipaddress=sys.argv[2]
    #tacacsuser=sys.argv[3]
    #userpass=sys.argv[4]
    #enpass=sys.argv[5]
    #currenthostname=sys.argv[6]

    parser = argparse.ArgumentParser(description='Get "show version" from a Cisco ASA.')
    parser.add_argument('-u', '--user',     default='cisco', help='user name to login with (default=cisco)')
    parser.add_argument('-p', '--password', default='cisco', help='password to login with (default=cisco)')
    parser.add_argument('-e', '--enable',   default='cisco', help='password for enable (default=cisco)')
    parser.add_argument('-d', '--device',   default=asahost, help='device to login to (default=192.168.120.160)')
    args = parser.parse_args()

       


    #python vpnbackup.py $currentdate $currentipaddress $tacacsuser $userpass $enpass $currenthostname



    def asaLogin():
        # start ssh
        child = pexpect.spawn('ssh '+tacacsuser+'@'+asahost)

        # increase the pexpect read buffer so large outputs aren't truncated
        child.maxread = 9999999

        # expect the password prompt and send the login password
        child.expect('.*assword:.*')
        child.sendline(userpass)
        # expect the user-mode prompt and enter enable mode
        child.expect('.*>.*')
        child.sendline('enable')
        # expect the password prompt and send the enable password
        child.expect('.*assword:.*')
        child.sendline(enpass)
        # expect the enable-mode prompt
        child.expect('#.*', timeout=10)
        # disable paging so command output comes back in one piece
        child.sendline('terminal pager 0')
        child.expect('#.*', timeout=10)
        # create the output directories
        createDir()
        # collect each piece of device state
        showVersion(child)
        showRun(child)
        showCryptoIsakmp(child)
        dirDisk0(child)
        showInterfaces(child)
        showRoute(child)
        showVpnSessionDetail(child)
        showWebVpnSessions(child)
        showAnyConnectSessions(child)
        # exit and close the ssh session
        child.sendline('exit')
        child.close()
       
       
    def createDir():
        if not os.path.exists(currentdate):
            os.makedirs(currentdate)
        if not os.path.exists(currentdate+"/"+currenthostname):
            os.makedirs(currentdate+"/"+currenthostname)
       
       
       
    def showVersion(child):
        # open a dated file for this command's output
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"sh-ver.txt", 'w')
        # log everything read from the session to that file
        child.logfile_read = fout
        # send the command and wait for the enable prompt
        child.sendline('show version')
        child.expect(".*# ", timeout=50)
        fout.close()


    def showRun(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"sh-run.txt", 'w')
        child.logfile_read = fout
        # dump the complete running configuration
        child.sendline('more system:running-config')
        child.expect(".*# ", timeout=999)
        fout.close()


    def showCryptoIsakmp(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"cryptoisakmp.txt", 'w')
        child.logfile_read = fout
        child.sendline('show crypto isakmp sa')
        child.expect(".*# ", timeout=50)
        fout.close()


    def dirDisk0(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"dirdisk0.txt", 'w')
        child.logfile_read = fout
        child.sendline('dir disk0:')
        child.expect(".*# ", timeout=75)
        fout.close()


    def showInterfaces(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"interfaces.txt", 'w')
        child.logfile_read = fout
        child.sendline('show interface')
        child.expect(".*# ", timeout=100)
        fout.close()


    def showRoute(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"show-route.txt", 'w')
        child.logfile_read = fout
        child.sendline('show route')
        child.expect(".*# ", timeout=300)
        fout.close()


    def showVpnSessionDetail(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"vpnsession.txt", 'w')
        child.logfile_read = fout
        child.sendline('show vpn-sessiondb detail')
        child.expect(".*# ", timeout=50)
        fout.close()


    def showWebVpnSessions(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"webvpns.txt", 'w')
        child.logfile_read = fout
        child.sendline('show vpn-sessiondb webvpn')
        child.expect(".*# ", timeout=200)
        fout.close()


    def showAnyConnectSessions(child):
        fout = file(currentdate+"/"+currenthostname+"/"+currenthostname+datetime.datetime.now().strftime("%m-%d-%Y")+"anyconnectvpns.txt", 'w')
        child.logfile_read = fout
        child.sendline('show vpn-sessiondb anyconnect')
        child.expect(".*# ", timeout=999)
        fout.close()




    def main():
        #Nothing has been executed yet
        #executing asaLogin function
        asaLogin()
        #Finished running parTest\n\n Now exiting
       

    main()
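To schedule the script the same way as the backups earlier in this blog, a root crontab entry along these lines would work. The path and time here are examples ("vpnbackup.py" is the name the commented-out invocation above uses); the cd matters because the script creates its output directories relative to the current directory:

```shell
# Run the ASA interrogation every night at 2:30 a.m.
30 2 * * * cd /opt/asa-backup && /usr/bin/python vpnbackup.py >/dev/null 2>&1
```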

    Here are all the websites that have provided help to me writing these scripts:
    http://www.802101.com/2014/06/automated-asa-ios-and-nx-os-backups.html
    http://yourlinuxguy.com/?p=300
    http://content.hccfl.edu/pollock/Unix/FindCmd.htm
    http://paulgporter.net/2012/12/08/30/
    http://paklids.blogspot.com/2012/01/securely-backup-cisco-firewall-asa-fwsm.html
    http://ubuntuforums.org/archive/index.php/t-106287.html
    http://stackoverflow.com/questions/12604468/find-and-delete-txt-files-in-bash
    http://stackoverflow.com/questions/9806944/grep-only-text-files
    http://unix.stackexchange.com/questions/132417/prompt-user-to-login-as-root-when-running-a-shell-script
    http://stackoverflow.com/questions/6961389/exception-handling-in-shell-scripting
    http://stackoverflow.com/questions/7140817/python-ssh-into-cisco-device-and-run-show-commands
    http://pastebin.com/qGRdQwpa
    http://blog.pythonicneteng.com/2012/11/pexpect-module.html
    https://pynet.twb-tech.com/blog/python/paramiko-ssh-part1.html
    http://twistedmatrix.com/pipermail/twisted-python/2007-July/015793.html
    http://www.lag.net/paramiko/
    http://www.lag.net/paramiko/docs/
    http://stackoverflow.com/questions/25127406/paramiko-2-tier-cisco-ssh
    http://rtomaszewski.blogspot.com/2012/08/problem-runing-ssh-or-scp-from-python.html
    http://www.copyandwaste.com/posts/view/pexpect-python-and-managing-devices-tratto/
    http://askubuntu.com/questions/344407/how-to-read-complete-line-in-for-loop-with-spaces
    http://stackoverflow.com/questions/10463216/python-pexpect-timeout-falls-into-traceback-and-exists
    http://stackoverflow.com/questions/21055943/pxssh-connecting-to-an-ssh-proxy-timeout-exceeded-in-read-nonblocking
    http://www.pennington.net/tutorial/pexpect_001/pexpect_tutorial.pdf
    https://github.com/npug/asa-capture/blob/master/asa-capture.py
    http://stackoverflow.com/questions/26227791/ssh-with-subprocess-popen


    Creating a basic monitoring server for network devices

    I’ve recently been working more and more with network device management. So, to help with up-time monitoring, interface statistics, bandwidth utilization, and alerting, I’ve been building up a server with some great Open Source tools. My clients love it because it costs virtually nothing to run these machines, and it helps keep the network running smoothly when we know what is going on within the network.

    One thing I haven’t been able to do yet is SYSLOG monitoring with the ability to generate email alerts off of specific SYSLOG messages. That’s in the work, and I’ll be adding that information into this blog as soon as I get it up and running properly.

    I’m using Debian 7.6 for the operating system, mainly because it’s very stable, very small, and doesn’t update as frequently (which makes it easier to manage). You can follow a basic install of the OS from here: Debian Minimal Install. That will get you up and running, and we’ll take it from there.

    Okay, now that you have an OS running, go ahead and open up a command prompt and log in as your user account or “root”, then “sudo su”.

    Now we will update apt:

    apt-get update

     

    From here, let’s get LAMP installed and running so our web services will run properly.

    apt-get install apache2
    apt-get install mysql-server
    apt-get install php5 php-pear php5-mysql

     

Now that we have that all set up, let’s secure MySQL a bit:

    mysql_secure_installation

     

    When you run through this, make sure to answer these questions:

    root@testmonitor:/root# mysql_secure_installation




    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
          SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!


    In order to log into MySQL to secure it, we'll need the current
    password for the root user.  If you've just installed MySQL, and
    you haven't set the root password yet, the password will be blank,
    so you should just press enter here.

    Enter current password for root (enter for none):
    OK, successfully used password, moving on...

    Setting the root password ensures that nobody can log into the MySQL
    root user without the proper authorisation.

    You already have a root password set, so you can safely answer 'n'.

    Change the root password? [Y/n] n
     ... skipping.

    By default, a MySQL installation has an anonymous user, allowing anyone
    to log into MySQL without having to have a user account created for
    them.  This is intended only for testing, and to make the installation
    go a bit smoother.  You should remove them before moving into a
    production environment.

    Remove anonymous users? [Y/n] y
     ... Success!

    Normally, root should only be allowed to connect from 'localhost'.  This
    ensures that someone cannot guess at the root password from the network.

    Disallow root login remotely? [Y/n] y
     ... Success!

    By default, MySQL comes with a database named 'test' that anyone can
    access.  This is also intended only for testing, and should be removed
    before moving into a production environment.

    Remove test database and access to it? [Y/n] y
     - Dropping test database...
    ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
     ... Failed!  Not critical, keep moving...
     - Removing privileges on test database...
     ... Success!

    Reloading the privilege tables will ensure that all changes made so far
    will take effect immediately.

    Reload privilege tables now? [Y/n] y
     ... Success!

    Cleaning up...



    All done!  If you've completed all of the above steps, your MySQL
    installation should now be secure.

    Thanks for using MySQL!

     
     

Let’s test the server and make sure it’s working properly. Using nano, create the file “info.php” in the “www” directory:

    nano /var/www/info.php

     

    Add in the following lines:

    <?php
    phpinfo();
    ?>

     

    Now, open a web browser and type in the server’s IP address and the info page:

    http://192.168.0.101/info.php
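If you’re working from a headless box, the same check can be done from the shell (the IP is the example server’s address from above; substitute your own):

```shell
# Fetch the test page and confirm PHP actually rendered it
curl -s http://192.168.0.101/info.php | grep -m1 -o 'PHP Version'
```

If that prints “PHP Version”, Apache and PHP are talking to each other.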

     

     

    Now let’s get Cacti installed.

    apt-get install cacti cacti-spine

    Make sure to let the installer know that you’re using Apache2 as your HTTP server.

    Also, you’ll need to let the installer “Configure database for cacti with dbconfig-common”. Say yes!

After apt is done installing your software, you’ll have to finish the install from a web browser.

    http://192.168.0.101/cacti/install/

     

    After answering a couple very easy questions, you’ll be finished and presented with a login screen.

    The default credentials for cacti are “admin:admin”

    From there you can log in and start populating your server with all the devices that you want to monitor. It’s that easy.

     

     

     

     

Now, let’s get Nagios installed. Again, it’s really easy. I just install everything Nagios-related (don’t forget the asterisk after nagios):

    apt-get install nagios*

    This is what it will look like:

    root@debiantest:/root# apt-get install nagios*
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    Note, selecting 'nagios-nrpe-plugin' for regex 'nagios*'
    Note, selecting 'nagios-nrpe-doc' for regex 'nagios*'
    Note, selecting 'nagios-plugins-basic' for regex 'nagios*'
    Note, selecting 'check-mk-config-nagios3' for regex 'nagios*'
    Note, selecting 'nagios2' for regex 'nagios*'
    Note, selecting 'nagios3' for regex 'nagios*'
    Note, selecting 'nagios-snmp-plugins' for regex 'nagios*'
    Note, selecting 'uwsgi-plugin-nagios' for regex 'nagios*'
    Note, selecting 'ndoutils-nagios3-mysql' for regex 'nagios*'
    Note, selecting 'nagios-plugins' for regex 'nagios*'
    Note, selecting 'gosa-plugin-nagios-schema' for regex 'nagios*'
    Note, selecting 'nagios-nrpe-server' for regex 'nagios*'
    Note, selecting 'nagios-plugin-check-multi' for regex 'nagios*'
    Note, selecting 'nagios-plugins-openstack' for regex 'nagios*'
    Note, selecting 'libnagios-plugin-perl' for regex 'nagios*'
    Note, selecting 'nagios-images' for regex 'nagios*'
    Note, selecting 'pnp4nagios-bin' for regex 'nagios*'
    Note, selecting 'nagios3-core' for regex 'nagios*'
    Note, selecting 'libnagios-object-perl' for regex 'nagios*'
    Note, selecting 'nagios-plugins-common' for regex 'nagios*'
    Note, selecting 'nagiosgrapher' for regex 'nagios*'
    Note, selecting 'nagios' for regex 'nagios*'
    Note, selecting 'nagios3-dbg' for regex 'nagios*'
    Note, selecting 'nagios3-cgi' for regex 'nagios*'
    Note, selecting 'nagios3-common' for regex 'nagios*'
    Note, selecting 'nagios3-doc' for regex 'nagios*'
    Note, selecting 'pnp4nagios' for regex 'nagios*'
    Note, selecting 'pnp4nagios-web' for regex 'nagios*'
    Note, selecting 'ndoutils-nagios2-mysql' for regex 'nagios*'
    Note, selecting 'nagios-plugins-contrib' for regex 'nagios*'
    Note, selecting 'ndoutils-nagios3' for regex 'nagios*'
    Note, selecting 'nagios-plugins-standard' for regex 'nagios*'
    Note, selecting 'gosa-plugin-nagios' for regex 'nagios*'
    The following extra packages will be installed:
      autopoint dbus fonts-droid fonts-liberation fping freeipmi-common freeipmi-tools gettext ghostscript git git-man gosa gsfonts imagemagick-common libavahi-client3 libavahi-common-data libavahi-common3 libc-client2007e
      libcalendar-simple-perl libclass-accessor-perl libclass-load-perl libclass-singleton-perl libconfig-tiny-perl libcroco3 libcrypt-smbhash-perl libcups2 libcupsimage2 libcurl3 libcurl3-gnutls libdata-optlist-perl libdate-manip-perl
      libdatetime-locale-perl libdatetime-perl libdatetime-timezone-perl libdbus-1-3 libdigest-hmac-perl libdigest-md4-perl libencode-locale-perl liberror-perl libfile-listing-perl libfont-afm-perl libfpdf-tpl-php libfpdi-php
      libfreeipmi12 libgd-gd2-perl libgd2-xpm libgettextpo0 libgomp1 libgs9 libgs9-common libhtml-form-perl libhtml-format-perl libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl libhttp-cookies-perl libhttp-daemon-perl
      libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl libice6 libijs-0.35 libio-pty-perl libio-socket-ip-perl libio-socket-ssl-perl libipc-run-perl libipmiconsole2 libipmidetect0 libjansson4 libjasper1 libjbig0 libjbig2dec0
      libjpeg8 libjs-jquery-ui libkohana2-php liblcms2-2 liblist-moreutils-perl liblqr-1-0 libltdl7 liblwp-mediatypes-perl liblwp-protocol-https-perl liblwp-useragent-determined-perl libmagickcore5 libmagickwand5 libmail-imapclient-perl
      libmailtools-perl libmath-calc-units-perl libmath-round-perl libmcrypt4 libmemcached10 libmodule-implementation-perl libmodule-runtime-perl libnet-dns-perl libnet-http-perl libnet-ip-perl libnet-libidn-perl libnet-smtp-tls-perl
      libnet-snmp-perl libnet-ssleay-perl libodbc1 libpackage-deprecationmanager-perl libpackage-stash-perl libpackage-stash-xs-perl libpaper-utils libpaper1 libparams-classify-perl libparams-util-perl libparams-validate-perl
      libparse-recdescent-perl libpgm-5.1-0 libpq5 libradiusclient-ng2 libreadonly-perl libreadonly-xs-perl librecode0 librrds-perl librtmp0 libruby1.9.1 libslp1 libsm6 libsocket-perl libssh2-1 libsub-install-perl libsub-name-perl
      libsystemd-login0 libtalloc2 libtdb1 libtiff4 libtimedate-perl libtry-tiny-perl libunistring0 liburi-perl libwbclient0 libwww-perl libwww-robotrules-perl libxpm4 libxt6 libyaml-0-2 libyaml-syck-perl libzmq1 mlock ndoutils-common
      perlmagick php-fpdf php5-curl php5-gd php5-imagick php5-imap php5-ldap php5-mcrypt php5-recode poppler-data python-httplib2 python-keystoneclient python-pkg-resources python-prettytable qstat rsync ruby ruby1.9.1 samba-common
      samba-common-bin slapd smarty3 smbclient ttf-liberation uwsgi-core x11-common
    Suggested packages:
      dbus-x11 freeipmi-ipmidetect freeipmi-bmc-watchdog gettext-doc ghostscript-cups ghostscript-x hpijs git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb gosa-si-server
      cyrus21-imapd postfix-ldap gosa-schema php5-suhosin php-apc uw-mailutils cups-common libgd-tools libdata-dump-perl libjasper-runtime libjs-jquery-ui-docs libkohana2-modules-php liblcms2-utils libcrypt-ssleay-perl
      libmagickcore5-extra libauthen-sasl-perl libmcrypt-dev mcrypt libio-socket-inet6-perl libcrypt-des-perl libmyodbc odbc-postgresql tdsodbc unixodbc-bin libscalar-number-perl slpd openslp-doc libauthen-ntlm-perl backuppc perl-doc
      cciss-vol-status expect ndoutils-doc imagemagick-doc ttf2pt1 rrdcached libgearman-client-perl libcrypt-rijndael-perl poppler-utils fonts-japanese-mincho fonts-ipafont-mincho fonts-japanese-gothic fonts-ipafont-gothic
      fonts-arphic-ukai fonts-arphic-uming fonts-unfonts-core python-distribute python-distribute-doc ri ruby-dev ruby1.9.1-examples ri1.9.1 graphviz ruby1.9.1-dev ruby-switch ldap-utils cifs-utils nginx-full cherokee libapache2-mod-uwsgi
      libapache2-mod-ruwsgi uwsgi-plugins-all uwsgi-extra
    The following NEW packages will be installed:
      autopoint check-mk-config-nagios3 dbus fonts-droid fonts-liberation fping freeipmi-common freeipmi-tools gettext ghostscript git git-man gosa gosa-plugin-nagios gosa-plugin-nagios-schema gsfonts imagemagick-common libavahi-client3
      libavahi-common-data libavahi-common3 libc-client2007e libcalendar-simple-perl libclass-accessor-perl libclass-load-perl libclass-singleton-perl libconfig-tiny-perl libcroco3 libcrypt-smbhash-perl libcups2 libcupsimage2 libcurl3
      libcurl3-gnutls libdata-optlist-perl libdate-manip-perl libdatetime-locale-perl libdatetime-perl libdatetime-timezone-perl libdbus-1-3 libdigest-hmac-perl libdigest-md4-perl libencode-locale-perl liberror-perl libfile-listing-perl
      libfont-afm-perl libfpdf-tpl-php libfpdi-php libfreeipmi12 libgd-gd2-perl libgd2-xpm libgettextpo0 libgomp1 libgs9 libgs9-common libhtml-form-perl libhtml-format-perl libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl
      libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl libice6 libijs-0.35 libio-pty-perl libio-socket-ip-perl libio-socket-ssl-perl libipc-run-perl libipmiconsole2 libipmidetect0
      libjansson4 libjasper1 libjbig0 libjbig2dec0 libjpeg8 libjs-jquery-ui libkohana2-php liblcms2-2 liblist-moreutils-perl liblqr-1-0 libltdl7 liblwp-mediatypes-perl liblwp-protocol-https-perl liblwp-useragent-determined-perl
      libmagickcore5 libmagickwand5 libmail-imapclient-perl libmailtools-perl libmath-calc-units-perl libmath-round-perl libmcrypt4 libmemcached10 libmodule-implementation-perl libmodule-runtime-perl libnagios-object-perl
      libnagios-plugin-perl libnet-dns-perl libnet-http-perl libnet-ip-perl libnet-libidn-perl libnet-smtp-tls-perl libnet-snmp-perl libnet-ssleay-perl libodbc1 libpackage-deprecationmanager-perl libpackage-stash-perl
      libpackage-stash-xs-perl libpaper-utils libpaper1 libparams-classify-perl libparams-util-perl libparams-validate-perl libparse-recdescent-perl libpgm-5.1-0 libpq5 libradiusclient-ng2 libreadonly-perl libreadonly-xs-perl librecode0
      librrds-perl librtmp0 libruby1.9.1 libslp1 libsm6 libsocket-perl libssh2-1 libsub-install-perl libsub-name-perl libsystemd-login0 libtalloc2 libtdb1 libtiff4 libtimedate-perl libtry-tiny-perl libunistring0 liburi-perl libwbclient0
      libwww-perl libwww-robotrules-perl libxpm4 libxt6 libyaml-0-2 libyaml-syck-perl libzmq1 mlock nagios-images nagios-nrpe-plugin nagios-nrpe-server nagios-plugin-check-multi nagios-plugins nagios-plugins-basic nagios-plugins-common
      nagios-plugins-contrib nagios-plugins-openstack nagios-plugins-standard nagios-snmp-plugins nagios3 nagios3-cgi nagios3-common nagios3-core nagios3-dbg nagios3-doc nagiosgrapher ndoutils-common ndoutils-nagios3-mysql perlmagick
      php-fpdf php5-curl php5-gd php5-imagick php5-imap php5-ldap php5-mcrypt php5-recode pnp4nagios pnp4nagios-bin pnp4nagios-web poppler-data python-httplib2 python-keystoneclient python-pkg-resources python-prettytable qstat rsync ruby
      ruby1.9.1 samba-common samba-common-bin slapd smarty3 smbclient ttf-liberation uwsgi-core uwsgi-plugin-nagios x11-common
    0 upgraded, 196 newly installed, 0 to remove and 0 not upgraded.
    Need to get 81.9 MB of archives.
    After this operation, 272 MB of additional disk space will be used.
    Do you want to continue [Y/n]?

     

     

Now to test, just log in at http://your-server-ip/nagios3/

    You’ll have to look up tutorials on configuring Nagios and Cacti. Of the two, Cacti is much easier because it’s all web based. But Nagios isn’t too difficult once you get used to playing around with config files.
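To give you a head start on those config files, here’s the shape of a minimal host/service definition. The hostname, alias, and IP below are made up; on Debian, the nagios3 package reads any “.cfg” file you drop into “/etc/nagios3/conf.d/”:

```
define host {
    use        generic-host     ; template shipped with nagios3-common
    host_name  core-switch      ; hypothetical device name
    alias      Core Switch
    address    192.168.0.2      ; hypothetical address
}

define service {
    use                  generic-service
    host_name            core-switch
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}
```

After adding a file like that, “nagios3 -v /etc/nagios3/nagios.cfg” will verify the whole configuration before you restart the service.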

One last thing I did was set up a landing page that points at the services. To do that, just edit the index.html file in your www folder like this:

    root@testdebian:/etc/nagios3/conf.d/hosts# cat /var/www/index.html
    <html><body><h1>TEST Monitoring Server</h1>
    <p>This is the landing page for the TEST Monitoring server.</p>
    <p>&nbsp;</p>
    <p>Please use the following links to access services:</p>
    <p><a href="/nagios3"> 1. Nagios</a></p>
    <p><a href="/cacti"> 2. Cacti</a></p>
    </body></html>
    root@testdebian:/etc/nagios3/conf.d/hosts#

Now you can browse to the IP address and get an easy-to-use page that will forward you to whichever service you want!

    Let me know if you have any questions!


    Creating a Reverse Proxy with Apache2

Sometimes there is a need to host multiple websites from one server, or from one external IP address. Whatever your reason or need is, in this tutorial I’ll go through what I did to set up an Apache server to forward requests.

In my setup here, I have a Debian Wheezy server in my DMZ, and five web servers in my tier 2 DMZ. My objective is to host all these servers from one IP address, and introduce some security.

I found a ton of info out there on setting up Apache as a reverse proxy, but none of it really spelled out exactly what to do and what the results would be. Some of it came close, but it wasn’t what I was looking for. So I took a bunch of what I saw others doing, modified it to fit my needs, and am reporting back to you. I hope this helps.

Let’s get started.

You’ll want a base install of Debian Wheezy, which you can find at www.debian.org. After you download that, just follow my install guide if you need it: Debian Minimal Install: The base for all operations

    As I stated before, I have a bunch of web servers in my tier 2 DMZ, and a Debian box in my Internet facing DMZ. It is my intention that the web servers never actually communicate with the end users. I want my end users to talk to my Debian box, the Debian box to sanitize and optimize the web request, and then forward that request on to the web server. The web server will receive the request from the Debian box, process it, and send back all the necessary data to the Debian server, which will in turn reply to the end user who originally made the request.

    It sounds complicated to some people, but in reality it’s pretty simple, and the reverse proxy is transparent to the end user. Most people out there don’t even realize that many sites out there utilize this type of technology.

    My Debian server needs some software, so I installed these packages:

    sudo apt-get install apache2 libapache2-mod-evasive libapache2-mod-auth-openid libapache2-mod-geoip
    libapache2-mod-proxy-html libapache2-mod-spamhaus libapache2-mod-vhost-hash-alias libapache2-modsecurity

    From here you’ll want to get into the Apache directory.

    cd /etc/apache2

    Let’s get going with editing the main Apache config file. These are just recommendations, so you’ll want to tweak these for what ever is best for your environment.

    sudo vim apache2.conf

    I modified my connections for performance reasons. The default is 100.

    # MaxKeepAliveRequests: The maximum number of requests to allow
    # during a persistent connection. Set to 0 to allow an unlimited amount.
    # We recommend you leave this number high, for maximum performance.
    #
    MaxKeepAliveRequests 500

Also, what security engineer out there doesn’t know that without logs you have no proof that anything happened? We’ll cover log rotation and retention in another blog, but for now, I set my logging to “notice”. The default was “warn”.

    # LogLevel: Control the number of messages logged to the error_log.
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    #
    LogLevel notice

    Perfect. Now, you may want to tweak your server a little differently, but for now this is all we need for here.

    Now let’s get into some security hardening of the server.

    sudo vim /etc/apache2/conf.d/security

    We do have security in mind, so let’s not divulge any information that we don’t need to. Set “ServerTokens Prod”

    # ServerTokens
    # This directive configures what you return as the Server HTTP response
    # Header. The default is 'Full' which sends information about the OS-Type
    # and compiled in modules.
    # Set to one of:  Full | OS | Minimal | Minor | Major | Prod
    # where Full conveys the most information, and Prod the least.
    #
    #ServerTokens Minimal
    #ServerTokens OS
    #ServerTokens Full
    ServerTokens Prod

    Now let’s set “ServerSignature Off”

    # Optionally add a line containing the server version and virtual host
    # name to server-generated pages (internal error documents, FTP directory
    # listings, mod_status and mod_info output etc., but not CGI generated
    # documents or custom error documents).
    # Set to "EMail" to also include a mailto: link to the ServerAdmin.
    # Set to one of:  On | Off | EMail
    #
ServerSignature Off
#ServerSignature On

    And lastly, go ahead and uncomment these three lines in your config. We’ll configure “mod_headers” later.

    Header set X-Content-Type-Options: "nosniff"

    Header set X-XSS-Protection: "1; mode=block"

    Header set X-Frame-Options: "sameorigin"

Sweet, looking good. Go ahead and save that, and we can get “mod_headers” activated. First, I’d like to point out that you can see which modules you already have enabled by using the “a2dismod” program. Simply enter the command and it will ask which modules you’d like to disable; obviously, if a module shows up in that list, it’s already enabled. Just hit “Ctrl+C” to exit the program.

To enable a module in Apache, you first need to make sure it’s installed, then you can just use the program “a2enmod”… like this:

    sudo a2enmod headers

Now that we’ve enabled “mod_headers”, let’s verify we have the other necessary modules enabled as well.

    steve @ reverseproxy ~ :) ᛤ>   a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    cache
    Enabling module cache.
    Could not create /etc/apache2/mods-enabled/cache.load: Permission denied
    steve @ reverseproxy ~ :( ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    cache
    Enabling module cache.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    proxy_ajp
    Considering dependency proxy for proxy_ajp:
    Module proxy already enabled
    Enabling module proxy_ajp.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    proxy_balancer
    Considering dependency proxy for proxy_balancer:
    Module proxy already enabled
    Enabling module proxy_balancer.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    proxy_connect
    Considering dependency proxy for proxy_connect:
    Module proxy already enabled
    Enabling module proxy_connect.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    proxy_ftp
    Considering dependency proxy for proxy_ftp:
    Module proxy already enabled
    Enabling module proxy_ftp.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    proxy_http
    Considering dependency proxy for proxy_http:
    Module proxy already enabled
    Enabling module proxy_http.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    rewrite
    Enabling module rewrite.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    vhost_alias
    Enabling module vhost_alias.
    To activate the new configuration, you need to run:
      service apache2 restart
    steve @ reverseproxy ~ :) ᛤ>   sudo a2enmod
    Which module(s) do you want to enable (wildcards ok)?
    vhost_hash_alias
    Enabling module vhost_hash_alias.
    To activate the new configuration, you need to run:
      service apache2 restart

    Here is a list of the Modules I just enabled:
    cache proxy_ajp proxy_balancer proxy_connect proxy_ftp proxy_http rewrite vhost_alias vhost_hash_alias
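As a side note, “a2enmod” also accepts module names as arguments, so if you’d rather skip the interactive prompts, the whole list above can be enabled in one shot:

```shell
# Enable all the modules from the list above non-interactively
sudo a2enmod cache proxy_ajp proxy_balancer proxy_connect proxy_ftp proxy_http rewrite vhost_alias vhost_hash_alias
```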

    Now let’s just restart Apache, and keep going.

    steve @ reverseproxy ~ :) ᛤ>   sudo service apache2 restart
    [ ok ] Restarting web server: apache2 ... waiting .
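With Apache back up, you can confirm the security headers from the earlier step are actually being sent (substitute your server’s address for the placeholder):

```shell
# Inspect the response headers for the three we configured
curl -sI http://your-server-ip/ | grep -iE 'x-(content-type-options|xss-protection|frame-options)'
```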

Perfect, moving right along… Now what we need to do is set up a new file in the “/etc/apache2/sites-available” directory. I named mine “reverseproxy”, as that makes it easy to figure out what it is.

    Now, to correctly setup your reverse proxy, this server should not be hosting ANY websites. This is a proxy server, not a web host. So go ahead and delete the config sym link for the default website. We don’t want to host that.

    sudo rm /etc/apache2/sites-enabled/000-default

    Now we can edit our “reverseproxy” file.

    sudo vim /etc/apache2/sites-available/reverseproxy

    #enter this code into your file

    <VirtualHost *:80>
      ServerName yoursite.info
      ServerAlias www.yoursite.info yoursite.info
      ServerAdmin info@yoursite.info
      ProxyPreserveHost On
      ProxyPass / http://www.yoursite.info/
      ProxyPassReverse / http://www.yoursite.info/
      <Proxy *>
            Order allow,deny
            Allow from all
      </Proxy>
      ErrorLog /var/log/apache2/yoursite.info.log
      CustomLog /var/log/apache2/yoursite.info.log combined
    </VirtualHost>



    <VirtualHost *:80>
      ServerName anothersite.com
      ServerAlias anothersite.com www.anothersite.com
      ServerAdmin info@anothersite.com
      ProxyPreserveHost On
      ProxyPass / http://www.anothersite.com/
      ProxyPassReverse / http://www.anothersite.com/
      <Proxy *>
            Order allow,deny
            Allow from all
      </Proxy>
      ErrorLog /var/log/apache2/anothersite.com.log
      CustomLog /var/log/apache2/anothersite.com.log combined
    </VirtualHost>




    <VirtualHost *:80>
      ServerName thirdsite.cc
      ServerAlias thirdsite.cc www.thirdsite.cc
      ServerAdmin info@thirdsite.cc
      ProxyPreserveHost On
      ProxyPass / http://www.thirdsite.cc/
      ProxyPassReverse / http://www.thirdsite.cc/
      <Proxy *>
            Order allow,deny
            Allow from all
      </Proxy>
      ErrorLog /var/log/apache2/thirdsite.cc.log
      CustomLog /var/log/apache2/thirdsite.cc.log combined
    </VirtualHost>

    Awesome, now save that file and we can get it enabled. Just like setting up new modules, we’re going to sym-link our new file to the “sites-enabled” folder.

    sudo ln -s /etc/apache2/sites-available/reverseproxy /etc/apache2/sites-enabled
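Before reloading, it doesn’t hurt to let Apache parse the new file first; on Debian the helper is “apache2ctl”:

```shell
# Prints "Syntax OK" when the vhost file parses cleanly
sudo apache2ctl configtest
```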

Now we can just reload the Apache server (no restart required) so that it picks up the new settings.

    sudo service apache2 reload

Now we need to edit the /etc/hosts file so that our reverse proxy server knows where to send site traffic in our DMZ. So let’s do that:

    127.0.0.1       localhost
    127.0.1.1       reverseproxy.internal.dmz  reverseproxy
    192.168.0.26   www.thirdsite.cc
    192.168.0.26   thirdsite.cc
    192.168.0.26   www.anothersite.com
    192.168.0.26   anothersite.com
    192.168.0.65   www.yoursite.info
    192.168.0.65   yoursite.info

    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    Sweet, all done!
    Now you can test from a computer that all your sites are working. They *should* be! 🙂

    I’ll work on a blog eventually to show how to enable mod_security with this setup so that we can sanitize user interaction with our site. Our visitors are probably good people, but attackers and skiddies are always out there trying to damage stuff.

    Thanks for reading!!

    References:
    http://ubuntuguide.org/wiki/Apache2_reverse_proxies
    http://www.raskas.be/blog/2006/04/21/reverse-proxy-of-virtual-hosts-with-apache-2/
    http://www.askapache.com/hosting/reverse-proxy-apache.html
    http://www.integratedwebsystems.com/2010/06/multiple-web-servers-over-a-single-ip-using-apache-as-a-reverse-proxy/
    http://httpd.apache.org/docs/current/vhosts/examples.html
    http://geek-gogie.blogspot.com/2013/01/using-reverse-proxy-in-apache-to-allow.html
    http://www.ducea.com/2006/05/30/managing-apache2-modules-the-debian-way/
    http://www.akadia.com/services/apache_redirect.html
    http://unixhelp.ed.ac.uk/manual/mod/mod_proxy.html
    https://httpd.apache.org/docs/2.2/vhosts/
    https://httpd.apache.org/docs/2.2/vhosts/name-based.html
    https://httpd.apache.org/docs/2.2/vhosts/examples.html
    https://httpd.apache.org/docs/2.2/vhosts/mass.html
    https://httpd.apache.org/docs/2.2/vhosts/details.html


    Intro to Linux: File Systems, Permissions, and Hardware Fundamentals

Hello again everyone. So, for the past few years now I’ve really been getting more and more into working with Linux. I know that’s a broad statement… Linux is on just about every device you see these days: mobile phones, computers, laptops, tablets, supercomputers, refrigerators, cars, custom motorcycles… everything! And how many different distros are there? Hundreds!

I won’t start any debates on how good or bad Linux is as a whole, or how Linux rates as an overall Operating System… but I will go into how to use it, understand it, and operate it. This is the first of many blogs I’ll be posting about how to use Linux, and we’ll start here with the file system. The reason we’ll start with the file system is that it’s really the basis for everything you’ll do in Linux. I say that because Linux treats everything as a file. Devices, files, folders… everything is a file. And (pretty much) everything can be referenced from the command line.

    It was a toss up for me on whether to start with this or my next blog (the Bash shell). I mean, literally everything you do in Linux requires the file system or Bash, or both, to complete any task. But we’ll start here and build up from there.

     
     
     


     
     

    Before we really get going, I’ll need you to start the “Terminal” program on your Linux machine. If you’re on Red Hat (or one of its derivatives) it may look something like this:

     
     
     
     
     

    What is a File System?

We won’t touch on any Operating System other than Linux here. Strictly speaking, the Linux kernel really is the Operating System, but we’ll cover that in my third blog (Understanding the Linux Kernel and Processes). For now, just realize that the Linux kernel is the underlying Operating System that allows data to be pulled from the local hard drive, which hosts a file system, and run as a process.

    From here on out when I refer to the Linux file system, I’ll be talking mainly about the EXT3 and EXT4 file systems. Don’t worry about what that means for now, we’ll cover that later.

The base directory in Linux is referred to as the “root” directory of the file system and is generally written as a simple forward slash: “/”. Every file and folder from there down is referred to as part of the “directory tree”.

    To see all the objects in any directory, you can use the Bash shell command “ls”, which is short for “list”. This is what it looks like when you list the contents of my current Red Hat home directory:

This command was run with no “arguments”; it’s just a simple command asking for a listing of the files and folders in my “home” folder.

This next picture is a screenshot of what the exact same folder looks like with three command line arguments: the ‘a’ is for “all” (including hidden files), the ‘l’ is for a “long” listing, and the ‘h’ says we want the sizes in a “human” readable format.
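If you want to follow along without my home directory, the same flags can be tried on a scratch directory (the path here is just an example):

```shell
# Make a scratch directory with one hidden and one regular file,
# then list it with all three flags described above
mkdir -p /tmp/ls-demo
touch /tmp/ls-demo/.hidden /tmp/ls-demo/visible.txt
ls -alh /tmp/ls-demo
```

Notice that “.hidden” only shows up because of the ‘a’ flag.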

As you can see, there is a lot more information here, all of it stored by the file system. We’ll start with the easy parts of this output, looking at the file “.bash_logout” for this example.

    -rw-r--r--  1 serdman83 serdman83  176 Jan 27  2011 .bash_logout

The first serdman83 is my username, and the second one is my group. These represent the user and group that own the file. It is very important to understand that the user and group are two very different things. The username is tied directly to me, as you can probably figure out, and the group is my primary group. Your primary group is normally the same as your username (except in rare cases). We’ll talk more about that in another blog.

The number “176” is dynamic; it’s the size of the file in bytes, and this file is quite small. The date shows the file was last modified on Jan 27, 2011, and obviously, the file name is “.bash_logout”. We’ll talk more about this file later, but try to remember what this “long” listing means.
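You can reproduce that permission pattern yourself and read the same fields back with “stat” (GNU coreutils, as found on Red Hat and Debian; the scratch path is arbitrary):

```shell
# Create (or truncate) a scratch file and give it the same mode as above
: > /tmp/perm-demo.txt
chmod 644 /tmp/perm-demo.txt
# Print permissions, owner, group, size in bytes, and name;
# the first field will read -rw-r--r--, matching the listing above
stat -c '%A %U %G %s %n' /tmp/perm-demo.txt
```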

    All of this information, plus more, is all stored in the file system.

     
     
     

    Terminals

What is a terminal? Wikipedia puts it this way: “The Linux console is a system console support in the Linux kernel. The Linux console was the first functionality of the kernel, developed as early as in 1991 (see history of Linux). On PC architecture, it is common to use VGA-compatible video hardware. Implementations of computer graphics on Linux are excluded from the scope of this article. Linux console, like pure text mode, uses monospace fonts.”

    What’s all this mean? You’ve already seen a Linux terminal above. The Linux terminal is what you see when you’re working with the command line. It’s how you open files, view directories, run programs, etc…

    But how do you use a terminal? Well, I’m going to cover that for you real quick here. It’s not going to be horribly in-depth, but we’ll do a 5 second intro.

    The first and most important thing to remember is that if you get stuck in a terminal that you’ve made a bunch of changes to and it’s not working right, you can always just enter the “reset” command to return it to normal behavior.

    There are also Control Sequences that can be passed to the Bash shell. These are almost always entered with a key combination that includes the “Ctrl” (Control) key. We’ll cover a handful of the more popular control sequences that you’ll find yourself using.

    Ctrl + C = Probably the most used control sequence you’ll use. This will terminate almost any program that is currently running. There are programs that are setup to ignore this sequence, but just remember that the vast majority of programs do not.

    Ctrl + D = Signals end-of-input (EOF). If you’ve entered a command that is still waiting for more input, try a Ctrl+D to tell it your input is complete. Just be aware that in an interactive shell, Ctrl+D on an empty line will close the shell itself.

    Ctrl + H = If, for some reason, your backspace key isn’t working, you can use the old Ctrl+H for single character backspace.

    Ctrl + J = This command is an alternative to using the RETURN key. It’s just another way to perform a line feed.

    Ctrl + L = This does the same thing as the “clear” command. It will clear or refresh the screen for you.

    Ctrl + U = There are some commands that you can type into the Bash Terminal that can be very long. If for instance you realize you don’t need that command anymore, you can Ctrl+U to erase the whole line.

    Ctrl + Z = This sequence is used for suspending a program. We’ll be talking about this later. If you suspend a program, you haven’t terminated it, it’s still running in the background.

     
     
     

    Navigating the Filesystem

    The Linux filesystem is actually a really simple concept. Every shell, or terminal, has a current working directory (cwd), which is just a fancy way of saying “where am I right now?” Wouldn’t it be nice to see where you’re at, though? Well, you’re in luck, because Linux has come a long way. While traditional versions of Linux were totally command line driven, modern versions rival Windows and OSX in spectacular fashion. Without getting into what a window manager is in too great of detail, there are versions that are just as easy to work with as Windows XP, Windows 7 and Apple’s OSX.

    In various versions of Linux, there are, I would say, three main desktop environments: Gnome (and various forks of the original Gnome), KDE, and Unity (only because of Ubuntu). There are many others out there, like XFCE, Enlightenment, Fluxbox, and LXDE, but we won’t be getting into those in these blogs. I personally like Gnome, probably because that’s what I started with so many years ago.

    Here’s what some of them look like:
    Gnome 2.28
    KDE 4.x
    Ubuntu Unity
    MATE

    But in staying with the command line for the time being, you can view the directory tree quite easily. Below, I have shown what the output of the “tree” command looks like. For a text based output it shows you quite nicely what your directory structure looks like from your Current Working Directory. In this case my “cwd” was my home folder.

    From the command line I can see that the “Desktop”, “Downloads”, “Documents”, and other folders are in my cwd. To go into those folders I can just “cd”, or Change Directory, and tell the Bash Shell, my Terminal, to go there. Like this:

    In there, I’ve introduced two new commands, “cd” and “pwd”, and I’ve shown that Linux is CaSe SeNsItIvE… If I had typed, “cd documents”, the command would have failed because there is no “documents” folder. The folder is named, “Documents”. The cd command (a shell built-in rather than a separate program) tells the Bash Shell that I want to move into a new directory, and “Documents/” is the directory I want to move into. “Documents/” is the command line ‘argument’ that I provided to “cd”. You can issue the “cd” command with no arguments as well; it will take you from wherever you are back to your home directory. The “pwd” command is a program whose sole purpose is to “P”rint the “W”orking “D”irectory. The “pwd” command takes no arguments.
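Since the screenshot may not come through, here’s a minimal sketch of the same idea, run in a throwaway directory so nothing real gets touched (the Documents/ folder is created just for the demo):

```shell
cd "$(mktemp -d)"   # work in a temporary directory for this sketch
mkdir Documents

pwd                 # prints the current working directory (the temp dir)
cd Documents        # move into Documents/ using a relative path
pwd                 # the path now ends in /Documents
cd                  # with no argument, cd takes you back to your home directory
pwd                 # prints your home directory, e.g. /home/serdman83
```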

    This also brings to mind the relative and absolute paths that you can see in that screenshot. There is an absolute path, “/home/serdman83/Documents”, and a relative path, “Documents/”. The absolute path states exactly where an object is located starting from the root of the system. The root of the system, as in all Linux systems, is “/”. So my Documents directory is located at “/home/serdman83/Documents”. Because I was already in my home directory, “/home/serdman83/”, I can tell the shell to move into the “Documents/” directory because, relative to my “cwd”, “Documents” is a sub folder.

     
     
     

    Other Important Directories

    Let’s talk about other important directories in the filesystem. We’ll start with the Root of the system, which is represented by a simple “/”. Root, not the username but the directory, holds the entire system. Everything you see will always come from the root directory because there is no directory higher than the root directory. If you do a listing of the root directory, this is very similar to what you’ll see:

    As you can see there are a lot of folders in here. Let’s talk about some of them. The first ones we’ll talk about are the “/bin” and “/sbin” directories. These directories are special because they hold almost all of the programs that run on a computer. The “/bin” folder holds the programs that normal computer users use, such as “ls”, “pwd”, etc… We’ll definitely cover more later. The “/bin” folder is supplemented by the “/usr/bin” folder, which holds other programs that normal users can run. Any program that runs with no elevated rights can be put in these two folders. On the other hand, the “/sbin” folder holds programs that only the root user can run. It is supplemented by the “/usr/sbin” folder, which holds many programs as well.

    To make this easier, just remember that the “/bin” and “/usr/bin” folders hold programs that generally any user should be able to run. And the “/sbin” and the “/usr/sbin” folders hold programs that require elevated rights (such as the Root account) to run.

    The next folder is the “/boot” folder. Generally, you’ll almost never go into this directory and store files. This directory holds information for booting the machine. In every Red Hat and Debian based machine I’m aware of, this folder holds the information for the Linux Kernel, the RAM Drive and some other configuration files such as the Grub boot loader.

    “/dev” is the next directory we’ll talk about. Similar to “/boot”, you’ll never save anything in this folder either. The purpose of this directory is to hold a special file for every single device that is attached to your computer. We’ll talk much more about this in another blog.

    The next directory is “/etc”. This directory holds all the configuration data for all the programs and software that are installed on your Linux machine. You’ll most likely use this folder frequently if you’re planning on making changes to the way any software runs on your computer. Every single thing that runs or is installed in Linux can be modified with a configuration file. Windows is tied to the Windows Registry and the “C:\Windows\” directory for everything, while Linux uses the “/etc” and “/var” directories. There is no Linux Registry for security and stability reasons, but there are plenty of configuration files that offer the same functionality. We’ll be touching on this much more in the future blogs here.

    Quickly, I’ll touch on the “/lib” directory, which holds all the library files on your computer. Any software that requires extra software libraries will be calling some file that resides in here.

    The “/mnt” and “/media” folders are similar just because when Linux mounts a folder, network share, local USB drive, or CD/DVD drive, you’ll most likely find it in one of these two folders. If you’re virtualizing your Linux install, and you’re sharing folders with your host machine, those folders will appear in one of these two directories as well.

    Next is “/tmp”. Just as you would expect, this directory is for temporary files. On many distributions, any file that is put in here has a life span of about 10 days. More accurately, if a file hasn’t been accessed in 10 days or more, that file will be deleted. So if I create a file today, and don’t touch it for 10 days, Linux will automatically clean it up. Along with each user’s personal home directory, this is one of the few places that every user has rights to write to. By default, most other folders in a Linux system can only be written to by the Root user.

    The last directory we’ll cover here is the “/var” directory. If you’re hosting a web server, it’ll be in here. Your system mail is delivered here. Many things happen in this directory. You’ll find that many configuration files are also in here, but there are also log files, news group information (if it’s setup), ftp files that your machine is hosting and many other things too. We’ll talk much more on this in other blogs.

     
     
     
    So you’re thinking, “dude, this is so boring, when are we going to get to the fun stuff?” And here’s my answer: “We’re there, you just don’t know it yet.”

    All of this stuff is the core building blocks of Linux. If you understand this stuff at a good level, you’ll be so much better off using Linux in the long run.

     
     
     

    How to Manage Files and Directories

    We’ll touch first on redirection of output. The thing to remember here is that output in Linux defaults to the console. To redirect the output you use the “>” (greater than) sign; its counterpart, the “<” (less than) sign, redirects a file into a command’s input. Here you can see that I’ve run the “pwd” command to print my working directory. I’ve then redirected my output to the pwd.txt file. Then I used “cat” (short for concatenate) to print the pwd.txt file back out to the screen. While this doesn’t seem to be that important, you’ll surely find it useful down the road.
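In case the screenshot is missing, here’s a sketch of the same redirection steps in a throwaway directory:

```shell
cd "$(mktemp -d)"    # throwaway directory for the demo

pwd > pwd.txt        # ">" sends the command's output into a file instead of the screen
cat pwd.txt          # cat reads the file back out to the terminal

wc -l < pwd.txt      # "<" goes the other way: it feeds the file in as a command's input
```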

     

    We can now try to copy that file to a new directory. But first let’s create the new directory, then we’ll copy the file into it.

     

    Let’s cover copying directories while we’re talking about copying.

     

    As you can see, I listed my “newdir/”, then I copied my “newdir/” to another folder named, “newdir2/” and then I listed my current working directory recursively.

    Now that we have two copies of the same “pwd.txt” we can delete one of them. So let’s go over how to remove directories too. In order to remove a file you use the “rm” command.
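The whole create/copy/remove sequence described above can be sketched like this (again in a throwaway directory):

```shell
cd "$(mktemp -d)"
pwd > pwd.txt           # the file from the redirection example

mkdir newdir            # create the new directory first
cp pwd.txt newdir/      # then copy the file into it
cp -r newdir newdir2    # directories need -r (recursive) to be copied
ls -R                   # recursive listing: pwd.txt now exists in both copies

rm newdir2/pwd.txt      # rm removes a file
rmdir newdir2           # rmdir removes a directory, but only once it is empty
```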

    What if you wanted to move a file instead of copying it? How about renaming a file? Well, in the shell you don’t need a separate rename command; you just move a file to a new name. Like this.

    Here you’ll see that I first Changed Directory (cd) into my “newdir/” directory, then I listed in Long format, all the files in human readable format of that directory. Then I moved the pwd.txt file into a new name (newpwd.txt). Following that I moved the newpwd.txt file back up one directory (to my home directory). Lastly, I showed what tab complete does by changing directories back to my home folder and issuing the “ls -alh” command again. But this time when I issued the “ls -alh” command, I typed the word “new” behind it, and pressed tab twice.
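The moves described above, sketched as commands you can run (the directory and file names match the example):

```shell
cd "$(mktemp -d)"
mkdir newdir
pwd > newdir/pwd.txt

cd newdir
ls -alh                 # the file before renaming
mv pwd.txt newpwd.txt   # "renaming" is just moving the file to a new name
mv newpwd.txt ..        # ".." is the parent directory: move the file up one level
cd ..
ls -alh                 # newpwd.txt is now back in the starting directory
```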

     

    I hope I didn’t move too fast through that last screenshot. Changing directories backwards is easy because every directory contains a special entry, two periods (“..”), that means “the directory above this one”. And Tab complete is extremely useful because it will attempt to complete whatever it is you’re typing. Try it in almost any command, at almost any time. You’ll find it very useful.

     

    Don’t get too hasty in moving files around though. Be absolutely sure that what you are doing is exactly right. In Linux, there is no “undo” function. If you move a file to a directory that contains a file with the same name you can overwrite, or “clobber”, the original file with the one you moved.

     
     
     

    File Globbing and File Names

    Unlike Windows, Linux file names can contain just about every character on the keyboard. If you wrap a file name in single quotes (‘ ’) you can use any of these characters in a name: ‘!@#$%^&*()_+-=\|][}{:;?><,.~`’. In doing that, though, you can cause a nightmare for developers and users of files with those characters in them. So while you can technically use those characters, I really recommend NOT doing so.

    One special character that I want to touch on here is the period (.). The reason why is that, like Windows and Mac, there can be files that are “hidden”. In Linux, you can’t really hide a file. There are no Alternate Data Streams (ADS) like in Windows’ NTFS, so an “ls -alh” will show you every file in a folder. But if you want to “hide” a file from a regular “ls” command you can start the file name with a period. Files like “.bash_history” and “.bashrc” and folders like “.ssh/” (all of these should be in your home directory) are not visible with a plain “ls” command.

    Now we’ll talk about file globbing. This is a really simple concept, but you need to understand the ramifications of what you’re doing. By using the asterisk (*), you can specify many files at the same time. And while we’re at it, let’s look at “tab-complete” again, since the two are easy to confuse.

    Let’s see a screenshot of tab complete, then a screenshot of file globbing:

    So as you can see in the first picture, the tab complete helps because I know there is a folder that starts with “lab” but I’m not sure exactly what it is. So if I type “lab” and then hit the “Tab” key twice I can see what other files and folders start with the letters “lab”.

    The file globbing was nice because I wanted to move all the files and folders that start with “lab” into a folder called “all-labs”. I was able to do this as you can see.

    File globbing is also nice to use when you have a folder with a ton of files in it and you’re looking for all the files that end in “.conf”. So to find them all, you could issue this command:

    ls -alh *.conf
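To make the “all-labs” example above concrete, here’s a sketch with some made-up lab files (the names are just placeholders):

```shell
cd "$(mktemp -d)"
touch lab1 lab2 lab3 notes.txt   # a few sample files for the demo
mkdir all-labs

mv lab* all-labs/    # the * glob expands to every name starting with "lab"
ls all-labs/         # lab1  lab2  lab3
ls                   # only all-labs/ and notes.txt are left behind
```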

     
     
     

    File Ownership

    Before we get too much further, we need to cover how a file is managed by permissions as well as ownership.

    Let’s first talk about Linux Users. All of the users for a system reside in the “/etc/passwd” file and in modern Linux (and UNIX) systems their passwords are managed in the “/etc/shadow” file. We’ll talk more about both of these files later, but you should at least know that these two files are extremely important.

    As you well know, with any computer system, you log on with a username. The /etc/passwd file holds all the information about the user. As you can see from this screenshot there is a standard format to the file as well.

    As you can see in the above screenshot, there are 7 columns in every single line item, and they are separated by colons (:). Let’s review these fields real quick.

    Field 1 is your username. Pretty straightforward.
    Field 2 is your password. But it isn’t stored here. Remember, it’s in the /etc/shadow file, and the “x” designates that.
    Field 3 is your user ID. When your account is created, you’re assigned a unique number. While it can be changed, it’s highly advisable not to.
    Field 4 is your primary group ID. This is normally the same number as your user ID, but it can be different for special circumstances.
    Field 5 is the GECOS field. It’s a legacy field kept for backward compatibility; normally it just holds the user’s full name.
    Field 6 is your home folder. It tells the Operating System where your home folder is located. For the VAST MAJORITY of the time, your folder will be a sub-directory of the “/home/” folder.
    Field 7 is the shell that you’re assigned. Most of the time it’s the bash shell, but on other systems it can be others. We’ll talk about shells later.

    As a NOTE on Field 7, if you see that a user or service has the shell “/sbin/nologin”, that user’s account is basically disabled.
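To make those fields concrete, here’s a sketch that pulls individual fields out of root’s line with “cut” (-d sets the delimiter, -f picks the field number):

```shell
grep '^root:' /etc/passwd                 # root's full 7-field line
grep '^root:' /etc/passwd | cut -d: -f3   # field 3: root's UID, which is always 0
grep '^root:' /etc/passwd | cut -d: -f6   # field 6: root's home directory
grep '^root:' /etc/passwd | cut -d: -f7   # field 7: root's login shell
```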

     

    As for the password field (field 2), whenever you change your password, the password is stored encrypted in the “shadow” file. You can change your password with the “passwd” command. See here:

    I cheated a bit, because I’m lazy and don’t feel like changing my password, but you would be prompted for your current password, then your new password, then your new password again (just to make sure that you didn’t fat finger it).

     

    I just mentioned service accounts. There are actually three different types of accounts: Normal User, System (or service) User and the Root User. They are all different, and the Root user in particular has more privileges than any other user on the machine.

    Normal user accounts and groups usually have UID and GID numbers starting at 500 (1000 on newer distributions), service accounts are usually below that, and the Root user account is ALWAYS 0 (zero).

     
     
     

    Groups in Linux

    I mentioned groups and Group IDs above because part of the file permissions includes group permissions. Your user account will always be part of at least one group: your primary group. We talked about your primary group in the last section, but now we’ll get into the secondary groups.

    All the users on the Linux system you’re working on have the option of being placed into secondary groups, which are controlled by the “/etc/group” file. This file looks fairly similar to the “/etc/passwd” file, but it plays an entirely different role. Let’s look at the “/etc/group” file and dig into what it does.

    As you can see above, there are a lot of groups on the system. In total on my test box, you can see 106 groups defined. The file itself, like the “/etc/passwd” file, is comprised of many fields. While the “/etc/passwd” file has 7 fields, the “/etc/group” file only has 4.

    Field 1 is the group name.
    Field 2 is the group password. This field is rarely ever used. It is normally filled with an “x” just as a placeholder.
    Field 3 is the Group ID, or GID. It’s always a whole integer value.
    Field 4 is the list of users who have this group as one of their secondary groups. Make sure this field is always a clean comma-separated list of real usernames, with no trailing comma.

    Overall, this is really all you need to know about Linux groups. It’s pretty easy, you’re either in a group, or you’re not.

    So what if you’re not in a group that you want to be in? Let’s say you want to be part of the “Motorcycle” group. First, if you have the password for the Root account, you need to log in as root, and then you can use the “usermod” and “groupmod” programs to modify your information.

    The “usermod” program is very powerful. We’ll only touch on what it can do for groups here; we’ll cover the rest of it as we move forward.

    The “usermod -g” will change the primary group membership for the user you’re changing (remember, the primary group is stored in the “/etc/passwd” file). The “usermod -G” will take a comma-separated list of group names and overwrite the secondary group memberships for whatever user you’re referencing. And lastly, “usermod -aG” will take a comma-separated list of groups and APPEND them to the already existing secondary groups for the user you are changing (the “-a” only works in combination with “-G”).

    Not to get too in depth on this, we’ll move forward, but we’ll be back to this later.
     
     
     

    File Owners

    Now that you know what is needed about Users and Groups, let’s talk about file ownership.

    If you look at a file with a long listing, you’ll see that same information I showed you before:

    serdman83 @ newstudent05 ~ :) ?> ls -alh
    total 62244
    drwxr-xr-x 11 serdman83 serdman83     4096 2013-04-22 12:27 ./
    drwxr-xr-x  3 serdman83 serdman83     4096 2011-12-15 14:47 ../
    -rw-------  1 serdman83 serdman83    24309 2013-06-14 17:23 .bash_history
    -rw-r--r--  1 serdman83 serdman83      220 2011-12-15 14:47 .bash_logout
    -rw-r--r--  1 serdman83 serdman83     3860 2012-11-09 15:16 .bashrc
    drwx------  2 serdman83 serdman83     4096 2011-12-15 16:02 .cache/

    In this listing you see my username twice. But the entry in the 4th column is actually not my username; it’s my primary group name.

    Before we get to my username, let’s look at the columns that are there.

    The first column is the file permissions. It specifies what the owner, group and other permissions are for the file. We’ll cover this more in a minute.
    The Second column is the number of hard links. We’ll get into file linking in a little bit as it is also very important.
    The third column is the file owner. This output shows that I am the file owner.
    The fourth column shows the group owner. In this case, my group is the owner of this file, but it could be changed to some other group.
    The fifth column is the size of the file in bytes.
    The Sixth and Seventh columns are the date and time the file was last modified.
    The Eighth and final column is the file name.

     
     
     

    User and Group Information

    We’re going to cover some commands here that will help you down the road for system administration. First off, we’ll discuss information about the “whoami” command.

    It’s pretty easy to figure out what it does. You issue the command “whoami” to the command line and Linux will tell you who you are.

     

    So what if you know who you are, but you want to know what information there is about your user account? Or someone else’s account?

    This is where the “id” command comes into play. The “id” command has more arguments than this, but we’ll cover 4 of the most useful ones here.

    -g will tell you the primary group for a user.
    -G will tell you all the groups a user is part of.
    -u will tell you the user’s UID number.
    -n will tell you the user’s username or group name instead of just printing out the UIDs and/or GIDs.

    Let’s see some examples:
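In place of the screenshot, here’s a sketch of these commands (your own output will differ depending on your username and groups):

```shell
whoami    # prints your username
id        # your UID, primary GID, and every group, by number and by name
id -u     # just your numeric UID
id -un    # -n converts the UID to the username (same output as whoami)
id -gn    # your primary group, by name
id -Gn    # every group you belong to, by name
```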

     

    So how do you know who to look up if you don’t want to look through all the user and group information held in the “/etc/passwd” and “/etc/group” files?

    That’s easily done by just finding other users that are logged into a computer. You can do that with 3 other commands. Those commands are “users”, “w” and “who”.

    The “users” command will output all the users logged into the system at the moment the command was issued.

     

    Don’t let it deceive you if you see the same user logged on more than once. A user shows up once for every shell (terminal) they have open, so one person can appear several times.

    Next is the “w” command. As you can see below, it’s much more detailed than the “users” command. It also has a nice header to tell you what each of the columns are telling you. In addition to that, it tells you system up time, what users are currently logged in and it tells you the load averages on the CPU for the last minute, 5 minutes and 15 minutes.

     

    The next command is the “who” command. It’s slightly different than the “w” command, but is equally important.

     

    As you can see from the screenshot I’ve provided, there are multiple columns, but this time no header.

    The first column is the username for who is logged in.
    The second column is the terminal that they’ve logged in to.
    The third and fourth columns are the date and time that the user logged in.

     
     
     

    Logging in as a Different User

    The last thing I want to cover here is logging in as a different user. We’re straying away from actually talking about the file system, but I did bring up a couple things regarding the “root” account so I feel it’s only fair that I tell you how to log in as root (if you don’t already know).

    It’s real easy actually. See below.

     

    You can do that for any account you know the password for. You can “su” and then any account name you know is on the system.

     
     
     

    File Permissions

    Now that we’ve covered user identities, file ownership, groups, and all that stuff, let’s get back to the file system and file permissions.

    There are two ways to control file system permissions for a file. The first way is what I call generic permissions (you may also see them called symbolic), because you’re using letters to map permissions. The other way is with Octal permissions. This is where you use numbers to modify the file permissions.

    Let’s start by discussing what we’re doing here. Below is a folder called “newdir” and a file called “newpwd.txt”.

    drwxrwxr-x. 2 serdman83 serdman83 4.0K May 13 17:14 newdir/
    -rw-rw-r--. 1 serdman83 serdman83   16 May 13 16:59 newpwd.txt

    Let’s look at the file permissions before we look at the folder permissions.

    The file permissions are “-rw-rw-r--”.

    You will always see these 10 spaces filled with some characters.
    The first character is a hyphen (-). The reason why is that it’s a file. You notice on the directory it’s a “d” (for directory).
    The next three characters are “rw-”. These are the permissions associated with the owner of the file. It means the owner (a user) is allowed to read and write to the file, but can’t execute or run the file.
    The next three characters are also “rw-”. These permissions are associated with the group of the file. This means that the group that owns the file is allowed to read and write to the file, but again, can’t execute it.
    The next three characters are “r--”. These permissions are for everyone else. This means that anyone else is allowed to read the file, but can’t write (or change) or execute the file.

    I need to cover that first column better so that you know what you’re looking at here. Below is a table of the possible characters that you’ll see in the first character’s position.

    Regular File             -      Storing data
    Directories              d      Organizing files
    Symbolic Links           l      Referring to other files
    Character Device Nodes   c      Accessing devices
    Block Device Nodes       b      Accessing devices
    Named Pipes              p      Interprocess communication
    Sockets                  s      Interprocess communication

     

    After that first character you will always, always, always have the options of read, write and execute, for each of the Owner, Group and Other of a file.

    Let’s say that you want only yourself to be able to read and modify a file; the permissions would look like this, “-rw-------”

    Let’s say you want you and the group to be able to read and modify a file, but nobody else… the permissions would look like this, “-rw-rw----”

    Here’s a graphic I found at Oracle’s website and then doctored up for understanding this.

     

    You’re probably wondering why I have the “421 421 421” and the “7 5 0” on there too.

    The reason why is binary: within each group of three characters, the positions are worth 4, 2 and 1 (binary place values, read right to left, are 1, 2, 4). The Read position is worth 4, the Write position is worth 2 and the Execute bit is worth 1. If you add up the values that are present in a file’s permissions, you’ll end up with a value between 0 (no permissions, shown as “---”) and 7 (all permissions, “rwx”) for each of the three groups.

     
     
     

    Using CHMOD to Change File Permissions

    So that’s great, now that we understand what the permissions look like after they’re set, you’re probably wondering how to change them.

    This is where CHMOD comes into play.

    As I said before, “chmod” (a program whose name stands for “CHange MODe”) takes different types of arguments. The first type is what I call generic. Personally, I never use this. I always use the second type of argument, Octal. But let’s look at what we have here:

    u    user
    g    group
    o    other
    a    all
    +    add
    -    remove
    =    set
    r    read
    w    write
    x    execute

    Now that we know what types of abilities we have, let’s test this stuff out and change some permissions.

    Below, you see a whole list of commands you can run with “chmod”, with the effective permissions at the end. We’re working with an imaginary file named “linux.dat”. Each line starts over from the file’s original permissions shown below, so compare each result against that starting point.

    serdman83 @ newstudent05 ~ :) ?> ls -l linux.dat
    -rw-rw-r-- 1 serdman83 serdman83 42 Apr 15 12:12 linux.dat

    chmod arguments                       result of command                     effective permissions
    chmod o-r linux.dat         remove readability for others                   rw-rw----
    chmod g-w linux.dat         remove writability for group                    rw-r--r--
    chmod ug+x linux.dat        add executability for user and group            rwxrwxr--
    chmod o+w linux.dat         add writability for other                       rw-rw-rw-
    chmod go-rwx linux.dat      remove readability, writability,
                                and executability for group and other           rw-------
    chmod a-w linux.dat         remove writability for all                      r--r--r--
    chmod uo-r linux.dat        remove readability for user and other           -w-rw----
    chmod go=rx linux.dat       set readability and executability but no
                                writability for group and other                 rw-r-xr-x

     

    I hope you can see from this output that you can effectively change permissions for any file using this technique. Test it out on your own and see what you can do!
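Here’s a runnable sketch of two of those rows. Note that, unlike the chart (where every row starts from the original permissions), changes made this way stack on top of each other:

```shell
cd "$(mktemp -d)"
touch linux.dat
chmod 664 linux.dat            # start from the rw-rw-r-- used in the chart

chmod o-r linux.dat            # take read away from other
ls -l linux.dat                # now -rw-rw----

chmod ug+x linux.dat           # give user and group execute (stacks on the last change)
ls -l linux.dat                # now -rwxrwx---
```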

     

    Now let’s talk about Octal permissions. I think Octal is easier, but maybe that’s because I use it all the time, and I rarely ever use the other method.

    With Octal you can specify permissions for entire folders of files as well as individual files. I believe it’s more powerful and easier to script with octal notation. As we saw in the graphic before, you have User, Group and Other permissions. Let’s look at that again:

     

    750 isn’t actually seven hundred and fifty. It’s 7-5-0. The 7 means that the owner user is allowed to Read, Write and Execute the file. The 5 means that everyone in the group that owns the file is allowed to Read and Execute the file. And if you’re not the owner or in the owner group, you aren’t allowed to do anything with the file.

    664 would mean that the owner has read and write permissions, the group has read and write permissions and everyone else has read permissions.

    Now, let’s look at the “chmod” command with octal notation. Remember that with the other way of changing file permissions you have to calculate what the current file permissions are, and then figure out what your command should add or remove. Here, with Octal notation, you don’t have to worry about how to change the permissions, you just have to figure out what the end result should be. We’ll use the same chart as we used above.

    serdman83 @ newstudent05 ~ :) ?> ls -l linux.dat
    -rw-rw-r-- 1 serdman83 serdman83 42 Apr 15 12:12 linux.dat

    chmod arguments             effective permissions
    chmod 660 linux.dat         rw-rw----
    chmod 644 linux.dat         rw-r--r--
    chmod 774 linux.dat         rwxrwxr--
    chmod 666 linux.dat         rw-rw-rw-
    chmod 600 linux.dat         rw-------
    chmod 444 linux.dat         r--r--r--
    chmod 260 linux.dat         -w-rw----
    chmod 655 linux.dat         rw-r-xr-x

    Make sense?
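    If you want to double check your octal math, the “stat” command can print a file’s mode in octal for you. Here’s a minimal sketch using GNU stat (the flags differ on BSD/macOS) and a throwaway file name I made up for the example:

```shell
# "perms-demo.dat" is just a throwaway file for this demo.
touch perms-demo.dat
chmod 640 perms-demo.dat

# GNU stat can print the octal mode (%a) next to the file name (%n):
stat -c '%a %n' perms-demo.dat    # prints: 640 perms-demo.dat

# And the symbolic form comes from a long listing:
ls -l perms-demo.dat              # starts with: -rw-r-----

rm perms-demo.dat
```

    This makes a nice sanity check while you’re learning: set a mode in octal, then read it back both ways.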

     
     
     

    Changing Ownership of Files

    Well that’s great, we can now work on file permissions and we understand how to interpret long listings of files using the “ls -alh” command. Now let’s look at changing ownership of files.

    We know that there are two owners. There’s the actual User that owns a file, and there is the group who owns the file. There always has to be both.

    With the “chown” command, you can either change the owner of one file or directory, or you can add in a “-R” and change all the files and folders recursively (starting with everything in your current working directory). Be very careful, you may have some unintended consequences by using the “-R” argument. Make sure you understand what you’re doing.

    So let’s look at some examples.

    Below you see that I have changed the ownership of a file from me to root. Note that I was logged in as root to do that.

     

    On the screenshot below I showed the use of “-R” so that I could change the ownership of the whole “all-labs” directory and all the files and folders below it.
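    In case the screenshots don’t come through, here’s a sketch of the same commands. The file and directory names (“linux.dat”, “all-labs”, “serdman83”) are carried over from the examples above; you’d substitute your own:

```shell
# As root (or via sudo), give linux.dat to root -- the change the screenshot shows:
sudo chown root linux.dat          # change just the owning user
sudo chown root:root linux.dat     # change user and group in one shot

# And recursively, for the whole "all-labs" tree:
sudo chown -R serdman83:serdman83 all-labs/
```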

     
     
     

    Changing Group Ownership of Files

    Now that we know how to change the owner of a file, how about changing the group owner of a file? That is done in exactly the same way as the “chown” command, but instead of “chown”, we’ll use “chgrp” (which is short for CHange GRouP).

    Below, I changed the group owner from my personal group, to the Root group.

    And here, I showed how to use the “-R” for recursion.
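    A sketch of those two commands, again reusing the file and directory names from the earlier examples:

```shell
sudo chgrp root linux.dat       # change only the group owner of one file
sudo chgrp -R root all-labs/    # the whole tree, recursively

# chown can do the same job if you lead with a colon:
sudo chown :root linux.dat
```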

     
     
     

    File System File Information

    Before we go any further, we need to touch on some information that I brought up before. As you can see below, I’ve created a new file by echoing some data into it. The first line started the file, and the three following lines added to it. Then I showed a long listing of the file to show you that information.

    If you notice, my file is 23 bytes in size, which is the Data portion of the file. There is also metadata for the file: the owner, group owner and permissions on the file. You don’t see it here, but there is other data about the file too, such as the creation date, modification date and access (read) date. The last piece of info is the file name; the file name, combined with the “Inode” it refers to, is commonly referred to as a “Dentry”.

    Inode is a new word here, as well as Dentry. As I said before, a Dentry is a combination of the filename and the Inode. The Inode is the file’s metadata and holds a reference to the file’s Data. Those three things are what make up a file. I hope I explained that so you can understand. Just remember that a file will always have those three things: Inode, Dentry and Data.

    As I mentioned before, an Inode contains information about what a file is, and everything in Linux is a file; there are just different types of files. Here are the different types of files that you’ll see in Linux:

    Regular File             -      Storing data
    Directories              d      Organizing files
    Symbolic Links           l      Referring to other files
    Character Device Nodes   c      Accessing devices
    Block Device Nodes       b      Accessing devices
    Named Pipes              p      Interprocess communication
    Sockets                  s      Interprocess communication

    You must remember that an Inode carries: the File Type (as listed above); the owner and group owner of the file referenced; the times associated with the file (atime (access/read time), mtime (last modified time) and ctime (last time the Inode information was changed)); the file length (measured in bytes) and the total amount of disk space the file uses; and lastly the link count (which we’ll talk about in the next section).

     
     
     

    File System Linking: Difference between Hard and Soft Links

    Now we’re going to talk about file linking. Just as Windows and Mac have links, so does Linux. Windows has shortcuts on the desktop (ruined by the Windows 8 UI) which are similar to links in Linux.

    In Linux, there are two different kinds of links: Hard and Soft. Let’s dive in and look at the difference and how you can apply them in your Linux box.

    Hard Links can be used when the same file needs to be located in two different locations. Let’s say there is a program that runs that needs to see a configuration file that is located in another folder for a different program. Instead of keeping the file up to date and replicating the changes in two different spots, you could create a hard link. Every time the configuration file is updated in one location, the changes are automatically seen in the other. The other benefit to this is if the config file is referenced by one program as “program1.conf” and the other program sees the file as “other-program.conf”. I know, this is a bad example, but stay with me here.

    So the file is created for the first program in “/etc/new-program/program1.conf”. It’s just a regular file on the system’s hard drive. Let’s pretend that the file I just created in the last section (newfile-test.dat) is this program1.conf file. Now we’ll create a hard link to the file to pretend that the file is in a different location.

    You can create a hard link by using the “ln” command. It’s very similar to the way that the Move command “mv” works. See here how I’ve done it:

    Always remember when making links, the rule of thumb is,

    "ln" <spacebar> real-file-name <spacebar> linked-file-name

     

    Soft links are very similar to hard links. The difference between them is what happens when they are deleted. Let’s look at Soft Links first, then we’ll talk about deleting them.

    Soft Links can be created very similarly, but the difference is the underlying structure of the link. A soft link, or symbolic link, is like your shortcut on the desktop of your Windows box. When you create a soft, or symbolic, link, you’re just putting a file in the location that you want it, that points to the real location of the file. Let me show you:
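    A sketch of creating one, using a local stand-in for the file created above (the argument order is the same rule of thumb as with hard links: real file first, link name second):

```shell
touch newfile-test.dat                        # stand-in for the file made above
ln -s newfile-test.dat linked-newfile.dat     # real-file-name, then linked-file-name

# The leading "l" type and the "->" arrow mark it as a symbolic link:
ls -l linked-newfile.dat

rm newfile-test.dat linked-newfile.dat
```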

    As you can see, I created a file as root in my home folder and then created a symbolic link to that file that is named “linked-newfile.dat”.

    Now that you see how to create hard and soft links, let’s talk about the differences, and why they matter.

    With soft links, if I delete the source (original) file, then the link is dead. That is called a dangling link. The linked file still exists, but it’s not linked to anything. That can’t happen with a hard link, because the file’s data isn’t actually removed until every hard link to it has been deleted.
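    You can watch that difference happen for yourself. A sketch with throwaway file names:

```shell
echo "some data" > original.dat
ln original.dat hard.dat        # hard link to the file
ln -s original.dat soft.dat     # soft (symbolic) link to the file

rm original.dat                 # delete the "original"

cat hard.dat                    # still prints "some data" -- the data survives
cat soft.dat                    # fails: "No such file or directory" (dangling)
```

    The hard link keeps the data alive because the inode’s link count is still above zero; the soft link was only ever a pointer to a name that no longer exists.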

    The other issue is, let’s say I create a link to a link, where the first link references the second and the second refers back to the first. That’s an infinite loop and it’s called a recursive link. Unless you’re trying to wreak havoc on your machine, it’s pretty hard to do, but it is possible.

    Hard Links                                            Soft Links
    Directories may not be hard linked.                   Soft links may refer to directories.

    Hard links have no concept of "original" and          Soft links have a concept of "referrer" and
    "copy". Once a hard link has been created, all        "referred". Removing the "referred" file results
    instances are treated equally.                        in a dangling referrer.

    Hard links must refer to files in the same            Soft links may span filesystems (partitions).
    filesystem.

    Hard links may be shared between "chroot"ed           Soft links may not refer to files outside of a
    directories.                                          "chroot"ed directory.

     
     
     

    Linux File Systems, Disks and Mounting Them

     

    Before we get into mounting disks, we need to look at how Linux looks at Disks. As we mentioned in the section named, “Other Important Directories”, there is a directory named “dev” at the root of the file system (/dev). That is where you’re going to find all the disks located by default. But that’s not how you access the data on the disk.

    Before we talk about how to access the data on a disk, we need to talk about some other stuff.

    Disks are devices within your computer system, and if you look at the long listing of the /dev directory, you’ll see something interesting.

    As we mentioned above, there is an object that is a “block level device” in Linux, and a hard disk is one of them. Most systems these days deal in SATA devices, which use the Linux SCSI disk driver, so that’s why we see “sd”. If it were an IDE Hard Drive, you would see “hd” there. If you saw a floppy disk, it would start with an “fd”. And if you saw a CD-Rom device, you would see a “cdrom”. Pretty straightforward.

    But why is there “sda” and “sda1” and “sda2” (and so on) in there? Those are all significant in their own way and we’ll cover what all of that stuff means.

    I’m not going to get too granular here, but I will say that the main thing you need to understand here is that by default no one but the root account and the “disk” group on your Linux box have access to do anything in this folder. That’s really important because accessing data on these devices shouldn’t be allowed to just anyone. If someone wants to access the data, they will have to see where the disk is mounted to and then see if they are allowed to write data to the mounted area.

    Before a disk can be mounted, it must have been formatted with a file system…

     
     
     

    File Systems and EXT4

     

    What’s the big deal about File Systems? Well, the big deal is that without a file system, you wouldn’t be able to store data logically on a disk, you wouldn’t be able to easily recall that data later, and you wouldn’t be easily able to search for data on that disk.

    A file system provides a template of “blocks” where the Operating System is allowed to store data. The default file system on Linux is the EXT4 File System. EXT stands for Extended, and the number 4 is the version number. So EXT4 is the fourth extended file system. It supports a lot of options that I’m not going to get super deep into here, but you can read all about it on other websites.

    Essentially, before a disk can be used in Linux, it must have a file system setup on it. In Linux, this is really easy to accomplish. There are many GUI tools out there, such as GParted, but the one I’m going to cover here is the “mkfs” command line toolset. I say toolset because there are actually many “mkfs” programs in the /sbin/ directory.

    You must be logged in as “root” in order to use the “mkfs” program (remember “su root”), otherwise it won’t work properly and will throw errors that you may not be expecting. The programs all live in the “/sbin” directory, and if you recall, the /sbin directory is where all the programs live that only root is allowed to run. Let’s look at the mkfs programs:
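    A quick way to see them on your own box; which mkfs.* variants show up depends on which filesystem tools your distribution has installed:

```shell
# List every mkfs program on the system. The exact set varies by distro,
# but you'll typically see mkfs itself plus mkfs.ext2, mkfs.ext3,
# mkfs.ext4, mkfs.vfat, and so on.
ls /sbin/mkfs*
```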

    As you can see above, there are many different file systems that Linux is able to make.

     
     
     

    Mounting File Systems and Viewing Mount Points

    Since we have the ability to make filesystems, now let’s mount them!

    As you may or may not know, the “mount” program is used to mount filesystems. But how do you see the partition you’ve mounted? There are no drive letters like in the Windows world.

    Filesystems and partitions are actually quite simple in Linux. Recall that the root of the filesystem is always in “/”. So whenever you mount a filesystem, you mount it to a folder that is somewhere in the root of your drive. In many single Operating System desktops, you wouldn’t normally have multiple partitions in your filesystem. But in more advanced systems, there could literally be over a dozen partitions, and the end user wouldn’t know.

    In my system, I actually separate out many partitions so that I can easily upgrade or migrate Operating Systems. This makes it especially easy when you move your home folder to a new machine. Imagine if the “/home” directory was actually a different Hard Drive that was automatically mounted upon the system booting. This way if you reinstalled your Linux OS, say to a different one all together (Maybe you were switching from Fedora to Ubuntu, or Debian to SUSE Linux), you could keep all the data in your “home” folder intact while reloading your OS.

    That “/home” folder would be considered a mount point. You can see your mount points by just issuing the “mount” command at the terminal.
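    If you’d like to follow along at your own terminal, a sketch of that:

```shell
mount                  # every mounted filesystem, one line per mount point
mount | grep ' / '     # just the line for the root partition
```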

    As you can see above, the mount command gives some good information to the end user. You can see that there is a single hard drive in my machine (it’s a virtual machine, but it’s all the same), and it is “/dev/sda”. On that disk, you can see that there are two partitions that are mounted: sda1, mount point is “/boot”; sda2, mount point at “/” (root partition).

    There are some other mount points listed here. For instance, the CDROM is mounted at “/media/RHEL_6.1 x86_64 Disk 1”.

    In most Linux distributions CD or DVD Rom devices are mounted in either the “/media” directory, or in the “/mnt” directory. Just from habit, I normally mount devices (DVDs, CDs, USB drives, etc..) in the “/mnt” directory.

    Some people say that it’s easier in Windows to view disk drives through the “My Computer” icon that is on the desktop. In Linux it’s really easy too. The “df” command will tell you everything you want to know about your disk’s free space. Let’s take a look at what that looks like. When I use the “df” command I normally append an “-ah” on the back so that I can see everything in human readable format. But let’s look at both here:
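    A sketch of both invocations for you to try yourself:

```shell
df -h /       # just the root filesystem, in human readable units
df -ah        # everything, including pseudo filesystems like /proc
```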

    As you can see, the df command can come in very handy. Also, as I said before, I almost always use “df -ah” because it’s normally the information I’m looking for. Play around with the other options though, you may find them useful.

     
     
     

    System Hardware

    As long as we’re on the subject of hard drives, why don’t we slide right into system hardware? It’s not really filesystem related, but we might as well cover some things such as… information about the hardware in your computer.

    If you’re on a machine that you’ve never used before, you can find out what hardware is in it with a few different commands, and looking at a couple different log files.

    When a Linux system boots, many times you’ll see a bunch of messages that the system is processing. There are portions of the hardware starting up, drivers being activated, network interfaces being brought up, services starting and many other tasks as well. All of these messages are produced by the Kernel, and they are logged in a file called “dmesg” which is located in the “/var/log/” directory. This file is different from many other logs in that it can only grow to a certain size, and it is wiped clean on every boot, so you can only see logs since the most recent boot.

    According to Henry’s blog site, the default size is 32K, which can be changed in a couple ways if you so choose. I don’t particularly see the need for that, but check out his blog if you want more info on that.

    The dmesg log (also referred to as a buffer) can offer a lot of insight into what hardware is installed in your computer. Go ahead and check it out!

    To view the contents of that log you can either “cat /var/log/dmesg” or you can just issue the “dmesg” command at the command line. Depending on what version of Linux you’re using, you may need to run that as “root” or “sudo” the command.

    steve @ mintdebianvm ~ :) ᛤ>   sudo cat /var/log/dmesg
    [sudo] password for steve:
    [    0.000000] Initializing cgroup subsys cpuset
    [    0.000000] Initializing cgroup subsys cpu
    [    0.000000] Linux version 3.2.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-12) ) #1 SMP Debian 3.2.32-1
    [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-4-amd64 root=UUID=6336df47-4713-4fe1-8327-93cbc721c8ef ro quiet
    [    0.000000] BIOS-provided physical RAM map:
    ...
    ...
    ...
    [   10.620051] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
    [   10.620051] Bluetooth: BNEP filters: protocol multicast
    [   10.624155] Bluetooth: RFCOMM TTY layer initialized
    [   10.624155] Bluetooth: RFCOMM socket layer initialized
    [   10.624155] Bluetooth: RFCOMM ver 1.11
    [   10.692260] lp: driver loaded but no devices found
    [   10.825230] ppdev: user-space parallel port driver
    [   10.972943] e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
    [   10.976905] ADDRCONF(NETDEV_UP): eth1: link is not ready
    [   10.972943] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready

    As you can see above, the file is pretty long. My dmesg buffer is over 400 lines long. For some Linux boxes that’s short… for others it’s long. It all depends on what the system is doing, and what software and hardware you have installed.

     

    Another way to see what hardware is in your computer is to look at the Hardware Abstraction Layer. On Linux boxes of this era there is a process named “hald”, the Hardware Abstraction Layer Daemon (HAL has since been deprecated on newer distributions in favor of udev). You can actually query the “hal daemon” with a command: “lshal”, short for “list hal”. Check out the command below and try it on your computer too.

    steve @ mintdebianvm ~ :) ᛤ>   lshal

    Dumping 64 device(s) from the Global Device List:
    -------------------------------------------------
    udi = '/org/freedesktop/Hal/devices/computer'
      info.addons = {'hald-addon-cpufreq', 'hald-addon-acpi'} (string list)
      info.callouts.add = {'hal-storage-cleanup-all-mountpoints'} (string list)
      info.interfaces = {'org.freedesktop.Hal.Device.SystemPowerManagement'} (string list)
      info.product = 'Computer'  (string)
      info.subsystem = 'unknown'  (string)
      info.udi = '/org/freedesktop/Hal/devices/computer'  (string)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_argnames = {'num_seconds_to_sleep', 'num_seconds_to_sleep', '', '', '', 'enable_power_save'} (string list)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_execpaths = {'hal-system-power-suspend', 'hal-system-power-suspend-hybrid', 'hal-system-power-hibernate', 'hal-system-power-shutdown', 'hal-system-power-reboot', 'hal-system-power-set-power-save'} (string list)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_names = {'Suspend', 'SuspendHybrid', 'Hibernate', 'Shutdown', 'Reboot', 'SetPowerSave'} (string list)
      org.freedesktop.Hal.Device.SystemPowerManagement.method_signatures = {'i', 'i', '', '', '', 'b'} (string list)
      org.freedesktop.Hal.version = '0.5.14'  (string)
      org.freedesktop.Hal.version.major = 0  (0x0)  (int)
      org.freedesktop.Hal.version.micro = 14  (0xe)  (int)
      org.freedesktop.Hal.version.minor = 5  (0x5)  (int)

    steve @ mintdebianvm ~ :) ᛤ>   lshal --help
    lshal version 0.5.14

    usage : lshal [options]

    Options:
        -m, --monitor        Monitor device list
        -s, --short          short output (print only nonstatic part of udi)
        -l, --long           Long output
        -t, --tree           Tree view
        -u, --show <udi>     Show only the specified device

        -h, --help           Show this information and exit
        -V, --version        Print version number

     
     
     

    /Proc

    You’ll also notice a filesystem mounted on your machine named “/proc”. This is an interesting virtual directory. Like the “dmesg” log, it starts fresh on every boot; in fact, it lives entirely in memory and is generated on the fly by the Kernel. The purpose of the /proc filesystem is to hold information generated and used by the Kernel. If you do an “ls -alh” on the “/proc” directory, you’ll notice many folders named with only numbers. You’ll notice quickly that if you issue the command “ps aux”, those numbered folders directly correspond to the Process ID (PID) number of every process running on your computer. Web Browsers, Terminal sessions, etc… everything running is issued a PID, and every process has a folder in /proc with information about that process.

    You’ll also notice that there are a ton of files in there too. Let’s examine some of them!

    As you can see above, there are many files in there. I couldn’t fit all of them neatly into a screenshot, but you can look at them on your computer.

    The one I thought you may be interested in was the “uptime” file. As you can see, the file reports, in seconds, how long the system has been up and running.

    Let’s look at a couple more files:

    As you can see here, I am showing the “cpuinfo” and “meminfo” files. Both of them show some good details about the CPU and Memory installed in the system we’re using here.
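    If you’re following along without the screenshots, here’s a sketch of poking at those same files:

```shell
cat /proc/uptime               # seconds up, then seconds idle (summed across CPUs)
grep MemTotal /proc/meminfo    # total RAM, in kB
head /proc/cpuinfo             # model, flags, and speed of the first CPU
```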

     
     
     

    Disk and USB Information

    There is also information you can find about Hard Drives and USB devices. We’ll start with USB devices. Issue the “lsusb” command on your computer and look at the output.

    steve @ mintdebianvm ~ :) ᛤ>   lsusb
    Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 002: ID 80ee:0021 VirtualBox USB Tablet

    As you can see from the output, there aren’t many USB devices on my local computer. On my Red Hat test server, there was actually nothing to report, which is why I showed the output from my Debian box.

    Now let’s look at the “/proc/scsi/” folder. Since I don’t know anyone who uses IDE drives anymore, I’m not going to cover that. SATA is pretty much the de facto standard for laptop and desktop systems these days. See below for some of the outputs.

    There are only a couple files and sub directories in the “/proc/scsi” directory, but they are valuable to a system administrator looking to learn about the hard disks in the system.

     
     
     

    PCI Devices and Resources

    As you most likely have noticed, the “lspci” command is much like the “lsusb” command. It is a listing of all of the PCI Devices within the system you’re working on. It’s pretty straightforward, so I won’t spend much time here.

    Notice in the screenshot above the devices in my test server.

    Going back to the “/proc” virtual filesystem, there is a file that tracks IRQs, or Interrupt Request Lines. An IRQ is used by the hardware in your computer to get the attention of the CPU. According to WikiPedia, “… on the Intel 8259 family of PICs there are eight interrupt inputs commonly referred to as IRQ0 through IRQ7. In x86 based computer systems that use two of these PICs, the combined set of lines are referred to as IRQ0 through IRQ15. … Newer x86 systems integrate an Advanced Programmable Interrupt Controller (APIC) that conforms to the Intel APIC Architecture. These APICs support a programming interface for up to 255 physical hardware IRQ lines per APIC, with a typical system implementing support for only around 24 total hardware lines.”

    If you’re running a multi-processor system, you’ll notice that there is a column of interrupt counts for each processor. My system only has 1 CPU so I only have one column. You’ll also notice that IRQ 0 (zero) is always running the timer service. The reason for this is that your CPU needs to time slice every process in order to process all the information that your system is computing. The timer typically fires at a rate of 100, 250 or 1000 interrupts per second, depending on how your kernel was configured.

    If you want more information about IRQs, WikiPedia has a great write-up on the subject and you can learn more about them there, but for this discussion this is about as much as you need to know.

    The file I was speaking of earlier was “/proc/interrupts”, and if you view it, you’ll be able to see all the IRQs and what they are tied to on your system.
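    A sketch of peeking at it (the exact rows you’ll see depend on your hardware):

```shell
head -5 /proc/interrupts         # IRQ number, per-CPU counts, controller, device
grep -i timer /proc/interrupts   # the timer interrupt lines
```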

    There is also a file for memory information. Back in the day, RAM was at a premium and hard to come by in large quantities in desktop computers. These days, just about every peripheral in your computer probably comes with its own memory buffer. And Linux needs to know how to handle that memory, what drivers are using memory, and how data will flow through that memory.

    As you can see below, that memory is mapped in the “/proc/iomem” file.

     
     
     

    How Filesystems Manage Devices

    As I stated earlier in the File System File Information section, everything is a file. When Linux needs to use a device, it opens a file. When Linux needs to write data to a hard disk, it writes it to a file. When Linux needs to post data to your terminal, it writes it to a file (stdout). Everything is a file.

    There are virtual consoles in your system as well. These virtual consoles can be accessed by using “Ctrl” + “Alt” + F# (where # is a number 1-8 by default). So if you press “Ctrl” + “Alt” + “F6” your screen will turn black, and then a prompt will appear waiting for you to log in. Then you can press “Ctrl” + “Alt” + “F7” (the exact key varies by distribution) to get back to your desktop, which is just another virtual console but is running a service called “x.org” which is what displays your GUI.

    Regardless, all of those virtual consoles are actually just files. Strange, maybe, but try to echo some text to “/dev/tty6” and see what happens when you look at virtual console 6.
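    A sketch of the experiment. Writing to /dev/tty6 needs root and a machine that actually has virtual consoles, so the second line shows the same “everything is a file” idea against the terminal you’re already sitting at:

```shell
# On a machine with virtual consoles (run as root), this lands on VC6:
#   echo "Hello, VC6" > /dev/tty6

# The same idea, written to the "file" behind your current terminal:
echo "Everything is a file" > /dev/stdout
```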

     

    I hope this is a convincing fact to show you that everything is a file. We just “echoed” text into the file “/dev/tty6” and it showed up on the VC6 screen.

     

    Again, going back to what I said before, everything is a file, just that there are different types of files. Here are the different types of files that you’ll see in Linux:

    Regular File             -      Storing data
    Directories              d      Organizing files
    Symbolic Links           l      Referring to other files
    Character Device Nodes   c      Accessing devices
    Block Device Nodes       b      Accessing devices
    Named Pipes              p      Interprocess communication
    Sockets                  s      Interprocess communication

    The two that we are going to work with now are Block Device Nodes and Character Device Nodes. As you see from the table above, they both deal in accessing devices. But how?

    We’ll cover Block Devices first because we’re talking about hard drives. You’ll notice that any hard disk in your system, such as “/dev/sda1”, is a block level device. That means that information is transferred to and from the device that file is “attached” to in groups, or blocks. Another important fact about block level devices is that the Linux drivers allow for random access to the device, as opposed to sequential access. This is a huge benefit. Could you imagine if your computer had to read all the data on the drive before being able to pull a file located at the very end?

    As for Character devices, these have to do with things like keyboard input and output, such as the virtual console (or virtual terminal) that we just “wrote” data to in the example above. Another type of Character Device would be a printer.

     
     
     

    We’re getting there… slowly but surely! We’re on the home stretch, so let’s finish this up with the last part of file system management, disk partitioning and encryption!!

     
     
     

    More on Partitions

    As I mentioned before, Linux sees hard disks through block devices that you can list in the “/dev” directory.

    To expand on this, let’s look at the screenshot I have for “sda” again:

    As you can see from the screenshot, there is a device referred to as “/dev/sda”. That is one disk in the machine. If there was another, it would be “/dev/sdb”, and then, “/dev/sdc”, and so on.

    The partitions are listed after that. You can see there are multiple partitions on “sda”: “/dev/sda1”, “/dev/sda2”, and “/dev/sda3”. Using the “mount” command you can see that the first two are mounted to “/boot” and “/” (root), while the third is used as swap space. We’ll talk about swap space here in a bit.

     
     
     

    Disk Partition Alignment

    Every disk has something called a Master Boot Record, or MBR for short. This tells the disk exactly where certain things are located on the disk, such as the Bootloader and the Partition Table.

    The Bootloader only exists on disks that are marked as bootable. The Bootloader is a low level executable that the BIOS transfers control to during its boot cycle, and the Bootloader in turn passes control to the partition on which an operating system is present.

    Sixty four (64) bytes of the MBR is reserved for the partition table. The partition table is just like a map, and holds information as to where partitions on the disk start and stop. 64 bytes isn’t a lot of room, which is why there is a limit to how many partitions are allowed to be made on the disk. Disks are only allowed to have 4 primary partitions.

    There’s a way to get more partitions on your disk, though, using “Extended Partitions”. This has been around for many years and is a genius way to fit more partitions on a disk. Under DOS partitioning, you can pick any one of the 4 partitions to be an Extended Partition. This Extended Partition can be thought of as a container for other partitions, which are referred to as “logical partitions”.

    There is a program that you can use to alter or view partition information. That program is “fdisk”. You must be root to run the program because it queries the disks in your machine at a low level that normal users don’t have access to. Many times you’ll see people call the “fdisk” program in one of two ways:

    or:

    The reason for the “fdisk -cul” is that the “c” disables some old DOS compatibility that isn’t required anymore, and the “u” prints out the information in the number of sectors, as opposed to the number of cylinders. Back in the day, and even back in OpenBSD versions 3.6 or 3.8, I remember having to partition disks by specifying the number of Cylinders, Heads and Sectors. These days, it’s so much easier… you can specify size in a variety of ways, such as K for Kilobytes, M for Megabytes and G for Gigabytes.

    But we’re not even at that part yet. So let’s keep moving!

    In the output of the last screenshot you can see a lot of information. You can see the total size of “sda” is 6442MB. You can see that there are three partitions on “sda”. You can see that there is a second disk in the system (sdb) that is just about 1GB in size and it has 7 partitions.

     
     
     

    Making New Partitions

    With “fdisk” you can also specify new partitions. I’ll do my best in describing this…

    To start the “fdisk” utility, you need to call “fdisk” with a few arguments. See my screenshot below:

    Now that we’re inside the fdisk editor, you can do a lot of damage if you’re not careful, so… be careful!

    As you can see, I told fdisk that I want to edit the disk “/dev/sdb”. The first thing I want to do is look at the partition table.

    So press “p” and then “Enter” on the keyboard to show the partition table.

    In this case, I don’t want any of these partitions on here, so I’m going to delete them all. Let’s see what that looks like:

    As you can see, now we have disk “sdb” with no defined partitions on it.

    Now that we have an empty disk, let’s create some new partitions to see what that looks like.

    As you can see from that screenshot, I chose “n” for new partition, then “p” for primary partition, then “1” for the first partition on the disk. Then I specified “+200M” to say I want the partition to be 200 Megabytes in size. After that, I printed the partition table again for you to see the new partition.
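    If you don’t have a spare disk like my “sdb” to practice on, here’s a sketch of the same session run safely against an ordinary file: fdisk is perfectly happy to partition a disk image, and you can feed it the same keystrokes on stdin. The image name and the 20MB size are made up for the example, and fdisk’s prompts can differ slightly between versions:

```shell
# Make a 100MB file to stand in for a real disk.
dd if=/dev/zero of=practice.img bs=1M count=100 2>/dev/null

# Feed fdisk the keystrokes described above: n(ew), p(rimary),
# partition 1, default first sector, +20M for the size, p(rint), w(rite).
printf 'n\np\n1\n\n+20M\np\nw\n' | fdisk practice.img

# The new partition shows up in the listing as practice.img1:
fdisk -l practice.img

rm practice.img
```

    This is a nice way to experiment without any risk of wiping a real partition table.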

     
     
     

    Making New File Systems

    Now that we’ve got new partitions we can go back to our discussion on File Systems and EXT4. So just to clarify, now that we actually have a partition, you still can’t store data on there yet. Well… you could, but you wouldn’t be able to retrieve it very easily. You have to give your new partition a valid file system. The one that is pretty much the Linux standard these days is the EXT4 filesystem, so that’s the one I’ll show you how to use.

    The EXT4 file system is the successor to the EXT3 and EXT2 file systems. Those older file systems used what is referred to as a “block mapping scheme”. EXT4 uses “extents” instead, and adds improved journaling on top of a long list of other add-ons, improvements and scalability gains.

    EXT4 also supports huge file sizes (up to 16TB per file) and huge file system sizes (up to 1 Exabyte). For scale: an Exabyte is 1024PB (Petabytes), a Petabyte is 1024TB (Terabytes), and a Terabyte is 1024GB (Gigabytes).

    As we spoke about before, there are many things a file system provides. The first thing we spoke of was the structure of the file system. There is the root of the file system that is located at “/” and every folder, file, and device is located below that. And since Linux looks at everything as a file, we can also recall that every file has a number of attributes including an “Inode”, a “Dentry” and the data. We covered these words in a previous section, but you really should know and understand the meaning, so lets cover them again:

    • Inode: a location that stores all of the information, or metadata, about a file. Well, at least most of the metadata about a file. The inode doesn’t store the actual data portion of a file or the file name. It does, on the other hand, store the file permissions, the user and group ownership data, and timestamps recording when the file was created, modified, and so forth.
    • Dentry: is the section that stores the file name of a file. It also stores information about what folder structure is associated with a file, such as “/usr/bin/”.
    • Data: is pretty straightforward. It is the actual content associated with a file, such as a configuration text file, a LibreOffice document, or any other user file.
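    You can see the inode side of this split for yourself with “stat”, which dumps a file’s inode metadata (note that the name you pass in comes from the dentry, not from the inode). A quick sketch:

```shell
# Create a file, then dump the metadata stored in its inode.
echo "hello" > demo.txt
stat demo.txt                 # permissions, ownership, timestamps, size, inode number

# Print just the inode number.
stat -c '%i' demo.txt

# Hard links share one inode: same metadata, two dentries (names).
ln demo.txt demo-link.txt
stat -c '%i' demo.txt demo-link.txt
```

    The two hard links print the same inode number, which is exactly the inode/dentry split in action: one set of metadata, two names.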

    Anyway, now we need to create our file system on our new partition. See below how to do that:
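    Here’s a sketch of that step. I’m formatting a plain image file so you can follow along without touching a real disk (and without root); on the real system you’d point mkfs.ext4 at your new partition, e.g. /dev/sdb1:

```shell
# Scratch "partition" as a regular file; -F tells mkfs.ext4 to proceed
# even though the target isn't a block device.
truncate -s 64M part.img
mkfs.ext4 -F -q part.img

# Read back the superblock to confirm the new file system (no mount needed).
dumpe2fs -h part.img
```

    For the real thing it’s just “sudo mkfs.ext4 /dev/sdb1” against the partition we made earlier.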

     
     
     
     
     

    Just an update. I’m not finished with this blog, but I felt there was enough starter information here to help people get going with Linux. Enjoy and keep coming back for more info as I’ll be adding to this and releasing new blogs all the time!!!

     

     
     
     
     
     
     
     
     


    Linux Stuff: How to setup SSH certificates to simplify logins to remote systems

    SSH and Server Certificates

    If you haven’t done this yet, we’re going to make life easy and get SSH certificates set up to make it super easy to SSH from our Linux Desktop.

     

    You’ll want to make sure to install SSH Server and client on both the machines you’re planning on configuring. Most of the time this is done already.

    Debian Based machines:

    apt-get install ssh openssh-server openssh-client

     

    Red Hat Based machines:

    yum install openssh-server openssh-clients

     

    When that’s done test out connecting from your local machine to your remote host using:

    ssh user03@208.28.163.39
    The authenticity of host '208.28.163.39 (208.28.163.39)' can't be established.
    RSA key fingerprint is 69:23:4c:49:35:41:ca:ae:23:3f:69:63:b2:ba:12:3c.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '208.28.163.39' (RSA) to the list of known hosts.
    user03@208.28.163.39's password:
    user03 @ newstudent ~ :) ᛤ>   pwd
    /home/user03
    user03 @ newstudent ~ :) ᛤ>   exit
    logout
    Connection to 208.28.163.39 closed.
    steve @ mintdebianvm ~ :) ᛤ>

    Now we can set up SSH keys on this system so that you can easily log in from your main Linux Desktop machine.

     

    So go to your home directory on your local machine (NOT THE REMOTE SYSTEM!). From there, cd into your “.ssh” directory and we’ll create your SSH Certificates. If your “.ssh” directory doesn’t exist, just make one! Same goes for your REMOTE system too! Make sure it exists or this won’t work!

    cd ~/.ssh/
    ssh-keygen -t rsa -b 2048
    {save as default file, press enter}        
    {enter your own password and hit enter}     <-- this can be blank
    {confirm your password}                     <-- this can be blank
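    Those prompts can also be collapsed into a single non-interactive command, which is handy if you’re scripting this. A sketch (the “./demo_rsa” path is just for illustration; for real logins keep the default ~/.ssh/id_rsa, and note that -N "" means an empty passphrase):

```shell
# Generate a 2048-bit RSA key pair with no passphrase and no prompts.
# "./demo_rsa" is a demo path -- use ~/.ssh/id_rsa for real logins.
ssh-keygen -t rsa -b 2048 -f ./demo_rsa -N "" -q

# Show the new key's fingerprint, like the one ssh printed earlier.
ssh-keygen -l -f ./demo_rsa.pub
```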

     

    Once this is done, we’ll set up your host with keys to stay authenticated:

    cat ~/.ssh/id_rsa.pub | ssh user03@208.28.163.39 "cat - >> ~/.ssh/authorized_keys"

     

    Now edit your LOCAL “.ssh/config” file and add in your new server. If you don’t have one, again, just create one!

    # "Host" is an alias -- make it whatever you want. Keep it simple and easy to remember!
    # "HostName" is the IP of the remote host
    # "User" is your username on the remote machine
    Host 208
        HostName 208.28.163.39
        User user03

     

    And now you can test your new ssh keys by doing this:

    ssh 208

     

    You may need to adjust your permissions properly. To do so, simply run this command on your local system:

    chmod 700 ~/.ssh && chmod 600 ~/.ssh/*

     

    And this command on your remote system that you’re trying to connect to:

    chmod 600 ~/.ssh/authorized_keys && chmod 700 ~/.ssh/

     

    Now you should be able to just log in without a password to any Remote system you set this up on!! 🙂

    ssh 208

     

    Enjoy!


    Linux How-To: Debian Server, Bind9 DNS and Postfix Mail Relay SPAM Filter


    So, MS Exchange has been attacked so many times over the years that it would be stupid to let it just sit out on the internet. Same goes for Microsoft DNS server. I would try as hard as I could to never put a Microsoft Server out on the Internet, or even allow a Microsoft server to directly service the Internet. It’s just too risky, and I don’t play dice in certain situations such as these. I would, however, make an exception for hosting an Internet Information Services (IIS) Web Server. There are easy ways to lock down IIS and the OS, perform secure code reviews on the website itself, put reverse proxies in front of the web server (Apache Mod_Security or DotDefender)… the list goes on.

    But this isn’t a blog about web services. This blog is about setting up a secure Debian Server to host out a Bind9 DNS server and a Postfix reverse email Proxy. And this really could be split up into two different blogs, but I really think that they belong together because of how intertwined Email services are with DNS. Without DNS, mail would be significantly more difficult. But, DNS is also the problem with a LOT of SPAM. DNS configured improperly causes much of the SPAM that gets through to be seen by end users. Also, with DNS and Postfix running on the same box, the services are speedier and more responsive. We’ll do our best, but I really hope I can just refer people to this setup, because I truly believe that if more people would secure their mail servers and set up DNS properly, we could easily stop MOST SPAM that is out on the Internet from making it to inboxes around the world.

    And this will be a nice, really long, blog… strap in, people, we’re in for a ride! 🙂

     

    First things first, we need to start with a fresh install of Debian server. The main reasons why I chose to go with Debian server are:

    • First, it’s exceedingly stable and secure right out of the box. Very little configuration is needed…
    • Second, the creators of Debian don’t make tons of changes and they aren’t on the bleeding edge of new technologies.
    • Third, Debian is super easy to use and the software we need is also super easy to install
    • Lastly, especially for virtualized environments, a full install, using my method, takes a minimum of 512MB RAM and 1.5GB HDD space.

     

     

    I want to let everyone know here, that whatever I post on my site is things that I truly believe in. The main reason why I believe this process to work so well is that I’ve seen it in action at past employers, I’ve seen the MASSIVE cost savings passed on to our customers, and because of all that I’ve implemented this exact same process at home. So basically, I eat my own dog food. I’m not going to tell you all to do something that is insecure or full of shit. My email server is already receiving emails through this Postfix Proxy, my domains are hosted off of this BIND9 server, and, if I may say, it’s ALL working beautifully.

    A good friend of mine, Nick (I’ll leave out his last name until he says it’s okay to mention him here), was the one who inspired me to get much of this stuff going. I worked with him at a past employer, and he showed me much of this stuff. Regardless, what I’m trying to get to here is, just the way that we have things set up now is pretty damn good. I have one domain passing all of my mail to a DMZ which has zero restrictions, and that domain forwards all the email it gets to my home server, which is the Proxy we’re about to set up. The reason I do this is to make sure that my SPAM filtering isn’t killing emails I WANT to see. SO, every so often I’ll check both accounts, side by side, and make sure that I’m filtering properly. And if I’m not, I’ll tweak the proxy accordingly. Eventually, maybe even in this blog, I’ll get a mail quarantine up and running so that I can just do away with the DMZ server and pass all my mail through this Proxy…

    Lastly, I’ve gone out of my way to make this as absolutely clear as I can. I’ve referenced all the sites and pages at the bottom of this blog, as I always do, and made this as close to perfect as I can. If you want an “installer” for this process, then you’re in the wrong spot. I will never build an automated installer for this without charging a butt load of money. If that’s what you’re looking for, go buy some Windows based software. Here, we’re working with Debian server on the Bash Shell.

     

     

    Anyways… So let’s get a base image up and running.

    Debian Minimal Install: The base for all operations

    When you’re done with that come back here and we’ll keep going… In the mean time let’s talk about the software we’re dealing with here…

     

    Postfix

    While Postfix can do a lot, just by itself, in filtering SPAM, it’s not the end all, be all, software. It’s literally just a Mail Transfer Agent (MTA), and its only purpose is to send and receive mail. So what we need to do here is arm Postfix with some weaponry, by the likes of Amavis-New, SpamAssassin, Anomy Sanitizer, and ClamAV. Now, I know you’re thinking, “ClamAV, huh?” But it’s better than nothing, it’s open source and it’s got over a million signatures. If you’re reading this thinking “WTF? My company won’t be able to run this!”, then you’re in luck, because Postfix can forward mail for AV inspection to many of the top names in Anti Virus (Kaspersky, Symantec and McAfee). But for this article we’re going to work with ClamAV and some other tools, so deal with it. It’s free, and so is this blog…

     

     

    Amavis-New

    Amavis-New is a really good SPAM filtering engine, as is SpamAssassin. What we’ll have to do is create two directories for Amavis and SpamAssassin to work in. They both receive mail from Postfix, unpack the email and attachments, inspect everything, then package everything back up the way it should be, and send it back to Postfix. This happens in two forks. Amavis gets the email first, then sends it back to Postfix, then it’s sent to SpamAssassin, then sent back to Postfix.

    When Amavis first starts at system boot, it just sits there and waits until it gets work to do, as any good little daemon should. But when an email comes in, Amavis instantly forks a child process to do the work that needs to be done. This child process will create a subdirectory in the Amavis working directory and do its unpacking, inspection and repacking there. In the Amavis conf file you can specify how many children can be spawned, but you’ll want to test this out. Our config will have 5 children, and on a box with 1GB of RAM, we should have PLENTY of room to work with. Now, if you’re running an Enterprise level SPAM filtering service, you may want to set up multiples of these servers that sit on a few or more MX records so that you can spread out the work load. Then beef up how much RAM and how many CPU cores you allocate to the VM, and allow Amavis to spawn more children. Depending on the amount of hardware you have to work with, you could filter a TON of email with this configuration.
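    On Debian, that child-process cap lives in the amavisd-new configuration, typically split across files under /etc/amavis/conf.d/. A hedged sketch of the relevant knobs (the file name and values here are illustrative, not exactly what the package ships):

```
# /etc/amavis/conf.d/50-user  -- illustrative fragment, adjust to taste
$max_servers = 5;            # how many child processes amavisd may spawn
$TEMPBASE = "$MYHOME/tmp";   # working directory where children unpack mail

1;  # amavis config files are Perl and must end by returning a true value
```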

    Really though, at the end of the day, I strongly recommend that you investigate the Amavis-New website. Their FAQs are great and super informative. It’s truly amazing what this product can do.

     

     

    SpamAssassin

    As for SpamAssassin, let’s talk about this for a minute. As of the writing of this blog, SpamAssassin is at release 3.3.1. I’ll tell you the same thing I said a minute ago about Amavis: you should really look at the SpamAssassin website for more details about running, installing, configuring, testing and the operations of SpamAssassin. But I’ll briefly go over this stuff now. SpamAssassin works like many other filtering engines, “grading” the email on a multitude of different areas, including content, encoding, MIME settings, HTML markup and blacklists provided from different carriers like Spamhaus (which we’ll talk about later in this blog). Configured and monitored properly, SpamAssassin, just by itself, can filter over 97% of all SPAM, its false positive ratio is easily 1% or less, and the best part is that it has the ability to “learn” about new SPAM. The scoring engine is like a game of golf. The lower the score, the better. Other factors are looked at as well, such as blacklisted IPs, reverse DNS lookups, lists of banned words, lists of banned file attachments (exe, vbs, etc…), sender and receiver addresses, valid date and time, etc…

    SpamAssassin isn’t all by itself though. While SpamAssassin is able to do a LOT on its own, it also “calls” other programs in to help it, such as razor, pyzor, and dcc-client. Each of these programs has specialized duties that perform additional SPAM checking. Razor is a distributed network devoted to spam detection. Razor uses statistical and randomized signatures that effectively identify many different types of SPAM. Pyzor, not surprisingly, is built on Python and is also based on a network dedicated to identifying SPAM. Like Razor, it too is signature based. Lastly, DCC (Distributed Checksum Clearinghouses) is also an anti-spam content filter. According to the DCC website, “The idea of DCC is that if mail recipients could compare the mail they receive, they could recognize unsolicited bulk mail. A DCC server totals reports of checksums of messages from clients and answers queries about the total counts for checksums of mail messages. A DCC client reports the checksums for a mail message to a server and is told the total number of recipients of mail with each checksum. If one of the totals is higher than a threshold set by the client and according to local whitelists the message is unsolicited, the DCC client can log, discard, or reject the message.”
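    The checksum idea is easy to see with a toy example (this is just an illustration using sha256sum, not the real DCC fuzzy-checksum protocol): identical bulk messages hash identically, so a clearinghouse counting checksum sightings can spot mail that was blasted to thousands of recipients.

```shell
# Toy version of the DCC idea: the same bulk message, sent to two
# recipients, produces the same checksum; a personal note does not.
printf 'Buy cheap pills now!\n' > msg1.txt
printf 'Buy cheap pills now!\n' > msg2.txt    # same blast, second recipient
printf 'Hi mom, see you Sunday.\n' > msg3.txt

sha256sum msg1.txt msg2.txt msg3.txt
```

    The first two checksums match and the third doesn’t; a server seeing the same checksum reported thousands of times flags it as bulk.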

    Back to SpamAssassin… The thing that really makes SpamAssassin great is the way that it handles SPAM. It’s completely configurable to the way YOU want SPAM handled. You can have it tag email as potential SPAM by just changing the email headers. There are also ways that SpamAssassin will modify the Subject line of an email to include text like “***Potential SPAM***” or whatever you want it to say to your end users. This option truly is great, because there will always be false positives (email marked as SPAM that really isn’t), and there will always be false negatives (SPAM that gets through to the end user that shouldn’t). With Subject line modification, we can alert the user to use their best judgement in looking at an email. If a message has a high enough score we can have the message quarantined until the user releases the message for review, or in extreme cases the email can just be dropped without notification.
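    That subject-line rewriting is driven from SpamAssassin’s local configuration. A hedged fragment showing the usual knobs (the threshold and tag text here are examples; tune them against your own mail):

```
# /etc/spamassassin/local.cf -- illustrative values
required_score  5.0                          # golf scoring: at or above this, it's tagged
rewrite_header  Subject ***Potential SPAM***
report_safe     0                            # tag headers only; don't wrap the original message
```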

    On the contrary, not all email should be blocked either, and SpamAssassin can look into messages to see if they have good karma. This sounds strange, but while there are services like Spamhaus, there are services that do the exact opposite. For instance, there are services like ISIPP Email Accreditation and Deliverability, and Return Path, who actually owns Bonded Sender, which used to be IronPort‘s product (IronPort now belongs to Cisco), and more.

     

     

    Anomy

    Just because I’m too lazy to keep going on with this, I’ll just forward you to the Anomy website and you can look at their information if you want to know more. The main reason why I’ve decided to incorporate Anomy is that, while the other SPAM and virus checkers need to perform inspection on the disk, which can get very intense (and in extremely large environments can cause performance issues), Anomy does everything in system memory. The other reason is that Anomy comes with its own custom-built MIME parser, which performs more checks than some of the other options. The thing that we’re looking at here is security in layers. You’ll hear that concept driven into your head over and over until the end of time. Security in layers. The day that you can buy one product to perform ALL of your security needs is the day I’m out of a job. Until then, you’re going to have to use multiple scan engines, multiple security technologies and continue to drive a culture of knowledge for your employees.

     

     

     

    Awesome, you got your VM up and running!!!

     

    SSH and Server Certificates

    In that tutorial I had you setup the IP address on your new Debian server to 192.168.0.100. We’ll reference that IP address for the rest of the time, but you can substitute it for whatever you made it on your network.

    If you haven’t done this yet, we’re going to make life easy and get the SSH Server installed so we can get some remote access to this server from our Linux Desktop.

    apt-get install ssh openssh-server openssh-client

     

    When that’s done test out connecting from your local machine to this virtual host using:

    ssh steve@192.168.0.100

    Now we can set up SSH keys on this system so that you can easily log in from your main Linux Desktop machine.

     

    So go to your home directory on your local machine (NOT THE SERVER!). From there, cd into your “.ssh” directory and we’ll create your SSH Certificates.

    cd ~/.ssh/
    ssh-keygen -t rsa
    {save as default file, press enter}        
    {enter your own password and hit enter}     <-- this can be blank
    {confirm your password}                     <-- this can be blank

     

    Once this is done, we’ll set up your host with keys to stay authenticated:

    cat ~/.ssh/id_rsa.pub | ssh steve@192.168.0.100 "cat - >> ~/.ssh/authorized_keys"

     

    Now edit your “.ssh/config” file and add in your new server. If you don’t have one, just create one!

    Host 100
        HostName 192.168.0.100
        User steve

     

    And now you can test your new ssh keys by doing this:

    ssh 100

     

    You may need to adjust your permissions properly. To do so, simply run this command on your local system:

    chmod 700 ~/.ssh && chmod 600 ~/.ssh/*

     

    And this command on your remote system that you’re trying to connect to:

    chmod 600 ~/.ssh/authorized_keys && chmod 700 ~/.ssh/

     

    Disable IPv6

    For our install, we need to disable IPv6. I’ve seen issues with Postfix and Bind when there is IPv6 running on the same box. I always bitch about lazy admins, and here I am being lazy and turning off IPv6 instead of fixing the underlying issue. 🙁

     

                               SO! Let’s get IPv6 disabled! haha 🙂

     

    I promise I’ll look into the issue over time, because I’ll need to make this solution work with IPv6 eventually. I can’t run from it forever. In the mean time, lets get going with editing your grub file:

    sudo vim /etc/default/grub

     

    While you’re in your Grub file, find the line that looks like this:

    GRUB_CMDLINE_LINUX=""

     

    What you need to do here is make it look like this:

    GRUB_CMDLINE_LINUX="ipv6.disable=1"

     

    Then you need to update the loader by doing this:

    steve @ debian ~ :) ?>   sudo update-grub2
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-2.6.32-5-amd64
    Found initrd image: /boot/initrd.img-2.6.32-5-amd64
    done
    steve @ debian ~ :) ?>   sudo update-grub
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-2.6.32-5-amd64
    Found initrd image: /boot/initrd.img-2.6.32-5-amd64
    done
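    For what it’s worth, the grub method above bakes the setting in at boot. If you’d rather not touch the boot loader, the same effect can usually be had with a sysctl drop-in instead (an alternative sketch, not what I used here; the file name is arbitrary):

```
# /etc/sysctl.d/70-disable-ipv6.conf -- then run "sudo sysctl --system"
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```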

     

     

    Bind9 Domain Name System (DNS)

    Perfect! Now, let’s get Bind9 installed and configured properly. What I’ve done in my network is allow my internal name servers to keep a copy of the external DNS zones. It makes life easier than setting up all your internal servers to also look at your external servers. We’ll run through that as well during the setup. You’ll also want to get a copy of the Bind 9 Administrator Reference Manual. It’s not critical, but there’s some pretty damn good information in that document. www.bind9.net has both the online website and the downloadable PDF document.

    sudo apt-get install bind9

     

    Now that Bind is installed, lets configure the service to do what we want. We’ll start by editing our “named.conf” file where all the good stuff is.

    cd /etc/bind/
    sudo vim named.conf

    ### Named.conf File ###
    // This is the primary configuration file for the BIND DNS server named.
    //
    // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
    // structure of BIND configuration files in Debian, *BEFORE* you customize
    // this configuration file.
    //
    // If you are just adding zones, please do that in /etc/bind/named.conf.local

    include "/etc/bind/named.conf.options";
    include "/etc/bind/named.conf.local";
    include "/etc/bind/named.conf.default-zones";

     

    This file is really tiny; it’s really just the spawn point for all the other configurations. And there are two ways you can do this.

    • 1. You can remove all the other files and just do all your configurations in here
    • 2. You can continue to use the file structure the way it is

    Either way will work. If you’re a small company with only a few domain names, you can easily get away with lumping everything into this file and still keep separate zone files. If you’re a large company you may want to stay with many separate, smaller, configuration files. Especially when you’re dealing with companies that own hundreds, if not thousands, of domain names… even more so if you’re dealing with companies dispersed over several continents… or globally!

     

    In this scenario, we’re going to tighten things up just to make the initial config easy to see, but by no means am I telling you that you have to do it this way. Do it however you feel makes the most sense to you!

     

    So here we have the named.conf file; go ahead and make a backup of all your config files into a backup folder here and then modify your named.conf to look like mine below.

    cd /etc/bind/
    sudo mkdir installer-backup
    sudo cp * installer-backup/
    sudo rm named.*

     

    And here is the code you can copy and paste into your “named.conf” file:

    sudo vim named.conf

    #####################################################################################
    #  This is not part of the default configuration that is included as part of the    #
    #  Bind 9 package. This section is commented out because it isn't needed.           #
    #  Also, for all of the files that were installed by default,                       #
    #  look in the "/etc/bind/installer-backup" directory                               #
    #####################################################################################
      #                                                                               #
      #                CONFIGURED BY STEVE ERDMAN, updated 12/27/12                   #
      #                                                                               #
      #################################################################################

    // The following section is called the options section.
    // Configures the working directory for this BIND9 installation
    // Sets up BIND to allow queries from the Internet
    // Recursion only from the internal network (change to your internal network!)
    // Forwarders set to Level 3, Google and OpenDNS public servers (if these guys don't work, the Internet is probably broken!)
    // Listening on loopback plus the server's real IP (make sure to update this address to your real IP on this server!)
    // IPv6 turned off
    // Running "named" version
    // auth-nxdomain states that this server will answer authoritatively for all domains configured on it

    options {
            directory "/etc/bind";
            notify-source * port 53;
            allow-query { any; };
            allow-recursion { 127.0.0.1; 192.168.0.0/24; };
            forwarders { 209.244.0.3; 209.244.0.4; 8.8.8.8; 8.8.4.4; 208.67.222.222; 208.67.220.220; };
            listen-on { 127.0.0.1; 192.168.0.100; };
            listen-on-v6 { none; };
            version "named";
            auth-nxdomain yes;    # conform to RFC1035
    };
    // end of options

    #---------------------------------------------------------------------------------------#
    #     Below are all of the zone files for all the forward and lookup zones that your    #
    #     company is responsible for.                                                       #
    #---------------------------------------------------------------------------------------#

    // zone name
    // 'type' only allows master, slave, stub, forward, hint... We own our zone, we're the master.
    // specify the file that our zone sits in
    // allow anyone to query our server
    // allow our internal name servers to cache this zone as a slave server
    // specify that if the zone data may have changed, that all servers with this zone data need to contact the SOA
    // THE ERDMANOR
    zone "example.com" IN {
                    type master;
                    file "/etc/bind/db.example.com";
                    allow-query { any; };
                    allow-transfer {192.168.0.7; 192.168.0.13; 192.168.0.18; 192.168.0.47; };
                    notify yes;
    };
    //same options apply as the above zone
    // 111.222.333.0/24 Reverse DNS
    zone "333.222.111.in-addr.arpa" {
                    type master;
                    file "/etc/bind/333.222.111.in-addr.arpa";
                    allow-query { any; };
                    allow-transfer {192.168.0.7; 192.168.0.13; 192.168.0.18; 192.168.0.47; };
                    notify yes;
    };

    #---------------------------------------------------------------------------------------#
    #---------------------------------------------------------------------------------------#
    #   Consider adding the 1918 zones here, if they are not used in your organization.     #
    #   To use these, just uncomment the following line:                                    #
    #   include "/etc/bind/zones.rfc1918";                                                  #
    #---------------------------------------------------------------------------------------#
    #   Below are some zones that your server should cache.                                 #
    #   For more info on this visit: http://www.zytrax.com/books/dns/ch7/                   #
    #---------------------------------------------------------------------------------------#

    // prime the server with knowledge of the root servers
    zone "." {
            type hint;
            file "/etc/bind/db.root";
    };
    // be authoritative for the localhost forward and reverse zones, and for
    // broadcast zones as per RFC 1912
    zone "localhost" {
            type master;
            file "/etc/bind/db.local";
    };
    zone "127.in-addr.arpa" {
            type master;
            file "/etc/bind/db.127";
    };
    zone "0.in-addr.arpa" {
            type master;
            file "/etc/bind/db.0";
    };
    zone "255.in-addr.arpa" {
            type master;
            file "/etc/bind/db.255";
    };

     

     

    Now we need to create some zone files. “What is a zone file?” you may be asking… Well, zone files are where all of your host information is stored, so that when an Internet user queries “www.yourdomain.com”, your DNS server looks up the “www” host A record in its zone file and returns the response. There are all kinds of records, and here is a site that can explain all of this for you: List of DNS record types at Wikipedia.

     

    Now that you understand records, let’s get your zone file going. Working off of the example “named.conf” file above, let’s create our “db.example.com” and “333.222.111.in-addr.arpa” zone files. If you want to cheat a little bit, go ahead and use a zone file generator such as this one, but you really should understand how they work as well. So let’s look at one…

    sudo vim db.example.com


    ; BIND data file for example.com
    ;
    $TTL 3600
    ; the second SOA field is the zone admin's mailbox (admin@example.com, written with dots)
    ; the serial must fit in 32 bits -- the usual convention is YYYYMMDDNN
    @       IN      SOA     ns1.example.com.      admin.example.com. (
                            2012122601      ; serial number, YYYYMMDDNN
                            28800           ; Refresh
                            7200            ; Retry
                            3600            ; Expire
                            3600            ; Min TTL
                            )

           
            IN  NS  ns1.example.com.
            IN  NS  ns2.example.com.

            IN  MX  10  mail.example.com.
            IN  MX  20  smtp.example.com.

    $ORIGIN example.com.
    @       IN      A       111.222.333.41
    ns1     IN      A       111.222.333.42
    ns2     IN      A       111.222.333.43
    mail    IN      A       111.222.333.44
    smtp    IN      A       111.222.333.45
    autodiscover    IN      A       111.222.333.46
    vpn     IN      A       111.222.333.47
    www     IN      A       111.222.333.48

     

    Now let’s look at our Reverse Lookup zone so you can get an idea of what yours should look like:

    sudo vim 333.222.111.in-addr.arpa


    ; BIND reverse zone data file for 111.222.333.0/24
    ;
    $TTL 3600
    @       IN      SOA     ns1.example.com.      dns.example.com. (
                            2012122601      ; serial number, YYYYMMDDNN
                            28800           ; Refresh
                            7200            ; Retry
                            3600            ; Expire
                            3600            ; Min TTL
                            )

    @      IN      NS       ns1.example.com.
    @      IN      NS       ns2.example.com.

    42     IN      PTR      ns1.example.com.
    43     IN      PTR      ns2.example.com.
    44     IN      PTR      mail.example.com.
    45     IN      PTR      smtp.example.com.
    47     IN      PTR      vpn.example.com.
    48     IN      PTR      www.example.com.

     

    Awesome. Now, one last thing that has helped me is making sure the “/etc/bind/” directory is owned by the “bind” user that was created upon install. Let’s do that real quick!

    sudo chown -R bind:root /etc/bind/

     

    Give your Bind server a quick restart. But before you restart the service, open another bash shell tab (or session), run “sudo tail -f /var/log/syslog”, and watch the output to make sure everything loads properly. It all should load up right, but if not, it’s better to find out now if there’s a problem than to wait until the end and troubleshoot tons of errors you *MAY* be having.

     

    sudo /etc/init.d/bind9 restart
    Stopping domain name service...: bind9 waiting for pid 2655 to die.
    Starting domain name service...: bind9.

     

    And don’t forget your “tail”!

    steve @ debian ~ :( ?>sudo tail -f /var/log/syslog
    [sudo] password for steve:
    Dec 26 22:17:01 debian /USR/SBIN/CRON[3353]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
    Dec 26 22:48:05 debian named[2655]: received control channel command 'stop -p'
    Dec 26 22:48:05 debian named[2655]: shutting down: flushing changes
    Dec 26 22:48:05 debian named[2655]: stopping command channel on 127.0.0.1#953
    Dec 26 22:48:05 debian named[2655]: stopping command channel on ::1#953
    Dec 26 22:48:05 debian named[2655]: no longer listening on ::#53
    Dec 26 22:48:05 debian named[2655]: no longer listening on 127.0.0.1#53
    Dec 26 22:48:05 debian named[2655]: no longer listening on 192.168.0.100#53
    Dec 26 22:48:05 debian named[2655]: exiting
    Dec 26 22:48:06 debian named[3491]: starting BIND 9.7.3 -u bind
    Dec 26 22:48:06 debian named[3491]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info'
    Dec 26 22:48:06 debian named[3491]: adjusted limit on open files from 1024 to 1048576
    Dec 26 22:48:06 debian named[3491]: found 2 CPUs, using 2 worker threads
    Dec 26 22:48:06 debian named[3491]: using up to 4096 sockets
    Dec 26 22:48:06 debian named[3491]: loading configuration from '/etc/bind/named.conf'
    Dec 26 22:48:06 debian named[3491]: reading built-in trusted keys from file '/etc/bind/bind.keys'
    Dec 26 22:48:06 debian named[3491]: using default UDP/IPv4 port range: [1024, 65535]
    Dec 26 22:48:06 debian named[3491]: using default UDP/IPv6 port range: [1024, 65535]
    Dec 26 22:48:06 debian named[3491]: listening on IPv4 interface lo, 127.0.0.1#53
    Dec 26 22:48:06 debian named[3491]: listening on IPv4 interface eth0, 192.168.0.100#53
    Dec 26 22:48:06 debian named[3491]: generating session key for dynamic DNS
    Dec 26 22:48:06 debian named[3491]: set up managed keys zone for view _default, file 'managed-keys.bind'
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 254.169.IN-ADDR.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 2.0.192.IN-ADDR.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 100.51.198.IN-ADDR.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 113.0.203.IN-ADDR.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 255.255.255.255.IN-ADDR.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: D.F.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 8.E.F.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 9.E.F.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: A.E.F.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: B.E.F.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
    Dec 26 22:48:06 debian named[3491]: command channel listening on 127.0.0.1#953
    Dec 26 22:48:06 debian named[3491]: command channel listening on ::1#953
    Dec 26 22:48:06 debian named[3491]: the working directory is not writable
    Dec 26 22:48:06 debian named[3491]: zone 0.in-addr.arpa/IN: loaded serial 1
    Dec 26 22:48:06 debian named[3491]: zone 333.222.111.in-addr.arpa/IN: sending notifies (serial 3289701)
    Dec 26 22:48:06 debian named[3491]: zone 127.in-addr.arpa/IN: loaded serial 1
    Dec 26 22:48:06 debian named[3491]: zone 255.in-addr.arpa/IN: loaded serial 1
    Dec 26 22:48:06 debian named[3491]: zone example.com/IN: loaded serial 16381
    Dec 26 22:48:06 debian named[3491]: zone localhost/IN: loaded serial 2
    Dec 26 22:48:06 debian named[3491]: managed-keys-zone ./IN: loading from master file managed-keys.bind failed: file not found
    Dec 26 22:48:06 debian named[3491]: managed-keys-zone ./IN: loaded serial 0
    Dec 26 22:48:06 debian named[3491]: running
    Dec 26 22:48:06 debian named[3491]: zone example.com/IN: sending notifies (serial 598703)

     

     

    Success! Your DNS server started and all your zones are loaded! Let’s test a couple queries and just make sure 🙂

    steve @ debian ~ :) ?>   dig @192.168.0.100 erdmanor.com mx

    ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.100 erdmanor.com mx
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55227
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 4

    ;; QUESTION SECTION:
    ;erdmanor.com.          IN  MX

    ;; ANSWER SECTION:
    erdmanor.com.       3600    IN  MX  20 smtp.erdmanor.com.
    erdmanor.com.       3600    IN  MX  10 mail.erdmanor.com.

    ;; AUTHORITY SECTION:
    erdmanor.com.       3600    IN  NS  ns1.erdmanor.com.
    erdmanor.com.       3600    IN  NS  ns2.erdmanor.com.

    ;; ADDITIONAL SECTION:
    mail.erdmanor.com.  3600    IN  A   65.55.37.62
    smtp.erdmanor.com.  3600    IN  A   64.4.59.173
    ns1.erdmanor.com.   3600    IN  A   74.125.228.105
    ns2.erdmanor.com.   3600    IN  A   74.125.228.96

    ;; Query time: 1 msec
    ;; SERVER: 192.168.0.100#53(192.168.0.100)
    ;; WHEN: Wed Dec 26 22:52:11 2012
    ;; MSG SIZE  rcvd: 172

     

    Fantastic, we’re looking good so far!

     

     

    Now that you’re mostly updated, you’ll need to visit the registrar for your domain name and update the information for where your domain is hosted. These records are called glue records, and they normally take a while to update (up to 12 or 24 hours), so don’t get worried if you have any DNS issues in the next few hours. Really, the best time to update that information for production domains (domains that can’t suffer downtime) is early on a Saturday night. Many people are watching TV, busy with the family, or out on the town after 8pm on a Saturday (unless you’re me, haha). By the time the propagation spreads across the Internet, it’s Sunday morning and no one really noticed. Also, you’ll want to get on the phone with your ISP to have them forward all reverse lookup queries to your name servers. This is critical if you want YOUR outgoing email not to be tagged as SPAM!

    According to WikiPedia, “Name servers in delegations are identified by name, rather than by IP address. This means that a resolving name server must issue another DNS request to find out the IP address of the server to which it has been referred. If the name given in the delegation is a subdomain of the domain for which the delegation is being provided, there is a circular dependency. In this case the nameserver providing the delegation must also provide one or more IP addresses for the authoritative nameserver mentioned in the delegation. This information is called glue. The delegating name server provides this glue in the form of records in the additional section of the DNS response, and provides the delegation in the answer section of the response.

    For example, if the authoritative name server for example.org is ns1.example.org, a computer trying to resolve www.example.org first resolves ns1.example.org. Since ns1 is contained in example.org, this requires resolving example.org first, which presents a circular dependency. To break the dependency, the nameserver for the org top level domain includes glue along with the delegation for example.org. The glue records are address records that provide IP addresses for ns1.example.org. The resolver uses one or more of these IP addresses to query one of the domain’s authoritative servers, which allows it to complete the DNS query.”

     

    While your registrar information is updating let’s move forward and get some email action going!

     


    If all you were looking for here was a DNS tutorial for a single DNS server, you’re done. If you’re looking to go any further into SPAM filtering, continue on!

    I will be posting a blog as soon as I can on how to setup a distributed DNS server cluster. Stay tuned for that!

     

     

    Postfix and SPAM Filtering

    Alright, we need some software here, so… let’s get Postfix installed!

    sudo apt-get update && sudo apt-get dist-upgrade
    sudo apt-get install -y postfix

     

    Now, when the software is installing, you’ll want to set up Postfix in a certain way. You NEED to make sure you pick “Internet Site” at the first prompt, and enter your EXTERNAL MX A-record. Often this is either “mail.example.com” or “smtp.example.com”, but you’ll want to verify it against the DNS zone we created back in the BIND9 section. See my screenshots below:

    Internet Site

    smtp.erdmanor.com
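
    If you script your builds, you can answer those two prompts non-interactively by preseeding debconf before the install. This is a sketch, assuming the stock Debian postfix package and its standard debconf keys (“postfix/main_mailer_type” and “postfix/mailname”); pipe these lines into “debconf-set-selections” as root before running apt-get:

```
postfix postfix/main_mailer_type select Internet Site
postfix postfix/mailname string smtp.erdmanor.com
```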

     

    Now that we have Postfix installed, we can set up a temporary mail relay to our Microsoft Exchange server. THIS SHOULD NOT BE IN PRODUCTION RIGHT NOW!

    Go ahead and edit your “main.cf” file. There is a line we need to change that I’ll show you below:

    sudo vim /etc/postfix/main.cf

    # Uncomment the next line to generate "delayed mail" warnings
    delay_warning_time = 4h


    # Add the IP address of your Exchange server's Receive Connector responsible for your Domain. (See below Screenshot)
    relayhost = 192.168.0.125

    # And lastly, find "myorigin", and right below that add in "relay_domains = mydomain.com, example.com, (other, domains, comma, separated)"
    myorigin = /etc/mailname
    relay_domains = erdman.cc, erdmanor.com, someone.net, assholes.org

    # If you're hosting multiple domains, you'll want to setup a transport config file.
    transport_maps = hash:/etc/postfix/transport

    We will talk about the /etc/postfix/transport file, and others, later, but this DOES need to be there!
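
    As a preview, a “/etc/postfix/transport” file for the relay setup above might look like the sketch below (the domain names and Exchange IP are the examples from this post; swap in your own). The square brackets tell Postfix to skip the MX lookup and deliver straight to that host:

```
erdman.cc       smtp:[192.168.0.125]
erdmanor.com    smtp:[192.168.0.125]
```

    After editing it, run “sudo postmap /etc/postfix/transport” so the hashed .db file that the “hash:” lookup expects gets built.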

    Exchange Server Receive Connector

     

    Now that we have that complete, we’ll restart the service:

    sudo /etc/init.d/postfix restart

     

    SPAM Filtering Engines

    Alright, cool… Let’s get some more software installed!

    sudo apt-get install -y amavisd-new spamassassin clamav-daemon

     

    As soon as that’s complete you’ll want to update the ClamAV virus definitions. They’re readily available, and even easier, you can run a simple command to do this:

    sudo freshclam
    [sudo] password for steve:
    ClamAV update process started at Thu Dec 27 00:16:40 2012
    main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
    daily.cvd is up to date (version: 16130, sigs: 427971, f-level: 63, builder: neo)
    bytecode.cvd is up to date (version: 209, sigs: 40, f-level: 63, builder: neo)

    If you’re really looking to have fun with this, just create a quick shell script and then make a cron job out of it to run daily 🙂
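
    For example, a one-line root crontab entry like this sketch would do it (the schedule is arbitrary; pick your own). Note that on Debian the clamav-freshclam daemon usually keeps definitions current on its own, so only bother with this if you’ve disabled that daemon:

```
# m  h   dom mon dow   command
15   2   *   *   *     /usr/bin/freshclam --quiet
```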

     

    Alright, more software to install. Mainly more dependencies and stuff you’ll need that may not have been installed yet.

    sudo apt-get install -y libnet-dns-perl pyzor razor libarchive-tar-perl libio-socket-ssl-perl libio-socket-inet6-perl libnet-ident-perl liburi-perl libwww-perl libmailtools-perl tnef arj bzip2 cabextract cpio file gzip nomarch pax unzip zip zoo ripole p7zip lzop rpm2cpio unrar-free arc

     

    Perl Script Installs

    Following those package installs, we’ll need a few Perl modules. To install them, follow these instructions:

    steve @ debian ~ :) ?>   sudo perl -MCPAN -e shell


    CPAN is the world-wide archive of perl resources. It consists of about
    300 sites that all replicate the same contents around the globe. Many
    countries have at least one CPAN site already. The resources found on
    CPAN are easily accessible with the CPAN.pm module. If you want to use
    CPAN.pm, lots of things have to be configured. Fortunately, most of
    them can be determined automatically. If you prefer the automatic
    configuration, answer 'yes' below.

    If you prefer to enter a dialog instead, you can answer 'no' to this
    question and I'll let you configure in small steps one thing after the
    other. (Note: you can revisit this dialog anytime later by typing 'o
    conf init' at the cpan prompt.)
    Would you like me to configure as much as possible automatically? [yes]

     

    You’ll see a ton of information fly by as many values are automatically generated for you.
    Feel free to look at that stuff if you want. When you’re ready, install the Perl modules we need:

    (as you’re installing these Perl modules, you’ll see a lot of scrollback)

    o conf prerequisites_policy ask
    o conf commit
    install IP::Country::Fast
    install MIME::Base64
    install MIME::QuotedPrint
    install Net::DNS
    install DB_File
    quit

     

    Now for the DCC install. Unfortunately, I haven’t found a package for DCC in the Debian repos, and while that is a drawback to this software, it’s not the end of the world. We’ll just need to do some quick building of the software. But first we need to acquire it from the DCC download page. The newest version out was released on January 12, 2013.

    From your Debian VM, run this command:

    wget http://www.dcc-servers.net/src/dcc/old/dcc-1.3.144.tar.Z

    Then you can extract and build the software like this:

    tar -xzvf dcc-1.3.144.tar.Z
    cd dcc-1.3.144/
    ./configure
    make
    sudo make install clean

    And you’re ready to move forward! (we’ll configure DCC later, for now we just need to have the software installed)

    NOTE

    Perfect, we’re moving right along here. One other thing to note is that with all this going on, you’re going to want a highly tuned box. What I mean by that is, think of it this way: every time a message comes in, we’re sending it through 4 scanning engines, each of which invokes its own shell or child process (some using a Perl interpreter), unpacking/repacking the message in a temporary folder, inspecting it, and then sending it back out to your internal Exchange server. There’s A LOT going on here. This may add a bit of latency to the delivery of your messages. Remember, I’m running a VM on an SSD, with a Core i7 960, and the VM has 2 cores and 1GB of RAM. The latency I’m seeing here, as opposed to my other email service, is less than 1 minute, which is more than reasonable. We’ll go over some tuning at the end of this and tweak the whole system to work as efficiently as possible.

     

    Okay, now we need some user accounts created so that we can tighten up security a bit.

    Start by cat’ing your /etc/passwd file. Depending on whether you’re following my tutorial on Red Hat, CentOS, Ubuntu, or another OS, I want to make sure that our “amavis”, “spamd”, “anomy” and “clamav” users are created.

    steve @ debian ~ :) ?>   cat /etc/passwd
    ...
    ...
    steve:x:1000:1000:Steve Erdman,,,:/home/steve:/bin/bash
    postfix:x:104:107::/var/spool/postfix:/bin/false
    bind:x:105:109::/var/cache/bind:/bin/false
    clamav:x:106:110::/var/lib/clamav:/bin/false
    amavis:x:107:111:AMaViS system user,,,:/var/lib/amavis:/bin/sh
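
    If you’d rather not eyeball /etc/passwd, a quick shell loop (standard tools only) reports which of the expected service accounts exist. The account list matches this post; “spamd” and “anomy” will show as missing until we create them below:

```shell
# Check each expected service account; getent queries the passwd database.
for u in amavis spamd anomy clamav; do
    if getent passwd "$u" > /dev/null; then
        echo "$u: present"
    else
        echo "$u: missing"
    fi
done
```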

     

    SpamAssassin Configure

    Based on this information, we’re good on most user accounts, but we need to create a “spamd” account and an “anomy” account. We also need to set up working directories for both of these services and lock down access to them.

    sudo mkdir /var/run/spamassassin
    sudo mkdir /usr/local/anomy
    sudo groupadd -g 112 spamd
    sudo useradd -u 112 -g 112 -s /sbin/nologin -d /var/run/spamassassin spamd
    sudo chown spamd:spamd /var/run/spamassassin
    sudo chmod 750 /var/run/spamassassin
    sudo groupadd -g 113 anomy
    sudo useradd -u 113 -g 113 -s /sbin/nologin -d /usr/local/anomy anomy
    sudo chown root:anomy /usr/local/anomy
    sudo chmod 750 /usr/local/anomy
    sudo usermod -a -G clamav amavis
    sudo usermod -a -G amavis clamav

     

    Now let’s modify the SpamAssassin conf file:

    sudo vim /etc/default/spamassassin

     

    And modify these parameters (by default, SpamAssassin is disabled; we need to give it options to start):

    ENABLED=1
    OPTIONS="--username=spamd --create-prefs --max-children 5 --helper-home-dir"
    PIDFILE="/var/run/spamassassin/spamd.pid"
    CRON=1

     

    Now let’s try to start SpamAssassin:

    sudo /etc/init.d/spamassassin restart

     

    And update the databases for SpamAssassin:

    sudo sa-update

     

     

    Amavis-New Configure

    Now, let’s get Amavis running. Technically, it is already running, but we need to enable Virus and SPAM filtering. Start by editing this file:

    sudo vim /etc/amavis/conf.d/15-content_filter_mode

     

    There are 4 lines in the file that you need to “uncomment”. See below:

    use strict;

    # You can modify this file to re-enable SPAM checking through spamassassin
    # and to re-enable antivirus checking.
    #
    # Default antivirus checking mode
    # Please note, that anti-virus checking is DISABLED by
    # default.
    # If You wish to enable it, please uncomment the following lines:

    @bypass_virus_checks_maps = (
       \%bypass_virus_checks, \@bypass_virus_checks_acl, \$bypass_virus_checks_re);

    #
    # Default SPAM checking mode
    # Please note, that anti-spam checking is DISABLED by
    # default.
    # If You wish to enable it, please uncomment the following lines:

    @bypass_spam_checks_maps = (
       \%bypass_spam_checks, \@bypass_spam_checks_acl, \$bypass_spam_checks_re);

    1;  # ensure a defined return

     

    Now restart Amavis to take effect:

    sudo /etc/init.d/amavis restart

     

     

     

    Anomy Configure

    ###I WANT TO STRESS THAT THIS PORTION (ANOMY) IS STILL UNDER INVESTIGATION AND YOU CAN SKIP THIS PART###

    Now let’s get Anomy installed and running. First we’ll have to download it from their website.

    steve @ debian ~ :) ?>   cd ~
    /home/steve
    steve @ debian ~ :) ?>   wget http://bre.klaki.net/cgi-bin/qc?mailtools.anomy.net/dist/anomy-sanitizer-1.76.tar.gz
    HTTP request sent, awaiting response... 200 OK
    Length: 172722 (169K) [application/x-gzip]
    Saving to: “qc?mailtools.anomy.net%2Fdist%2Fanomy-sanitizer-1.76.tar.gz”

    100%[================================================================================>] 172,722      168K/s   in 1.0s    

    2012-12-27 15:26:38 (168 KB/s) - “qc?mailtools.anomy.net%2Fdist%2Fanomy-sanitizer-1.76.tar.gz” saved [172722/172722]

     

    Now to move it to its new home and unpack it. (For some reason the file name wasn’t right, so we need to rename it.)

    sudo mv qc\?mailtools.anomy.net%2Fdist%2Fanomy-sanitizer-1.76.tar.gz /usr/local/anomy-sanitizer-1.76.tar.gz
    cd /usr/local/
    sudo su
    tar -zxvf anomy-sanitizer-1.76.tar.gz
    cd anomy
    ls -alh

     

    For starters on configuration, I found a site that provides a baseline config that we’ll work off of. Thanks to “advosys.ca” for this one! We’ll use this conf file to start with. If that link doesn’t work, here it is on my site: anomy.conf.

    Download that file and place it in your /usr/local/anomy/ folder.

    END OF ANOMY SECTION

     

     

    Allow Mail to be Scanned: Postfix Configuration

     

    Now what we need to do is set up Postfix to actually send the mail to the SPAM filtering engines. In order to make this happen, we’re going to have to modify some Postfix files. We’re also going to set up the “client_access”, “helo_access”, “sender_access” and “transport” files. We’ll talk more about those after we modify the “main” and “master” files for Postfix. Basically, these files further enhance how Postfix is able to start the filtering process before mail even gets to the SPAM filtering engines. It is here that we start invoking services such as dsbl.org, spamhaus.org, abuseat.org, and dnsbl.sorbs.net, which work by notifying servers like ours that a domain is either blacklisted or black-holed. PLEASE visit their sites for more information. Let’s start by looking at the “main.cf” file. To see the “main.cf” parameters in all their glory, check this out. All of the descriptions below are credited to that page.
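
    As a quick aside, here’s how those blacklist (RBL) lookups actually work under the hood: Postfix reverses the octets of the connecting client’s IP address, appends the blacklist zone, and does a plain DNS A lookup; if an answer comes back, the client is listed. A standard-tools-only sketch of building that query name, using the documentation-range address 192.0.2.99 as the hypothetical client:

```shell
# Build the DNSBL query name for a client IP:
# reverse the octets, then append the blacklist zone.
ip="192.0.2.99"
rev=$(echo "$ip" | awk -F. '{print $4 "." $3 "." $2 "." $1}')
echo "${rev}.sbl-xbl.spamhaus.org"
# prints 99.2.0.192.sbl-xbl.spamhaus.org
```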

    **NOTE**: I’m setting up my configuration with the ability to verify user accounts through Active Directory. The reason for this is to allow Postfix to verify that the email address is valid before processing the mail. This is yet another safeguard against SPAM. Why accept mail for an account that doesn’t exist in your domain? Just block it! I’ll also show you how to secure the communications between Postfix and the domain. We’ll talk about this later. I haven’t added this content yet, but I will in the future!**END NOTE**

    What I’m going to do is just post my “main.cf” file in here and then comment the hell out of it so you understand the reasons for what is in the file. Please take out ALL of my comments before pasting this config into your “main.cf” file! If you don’t, you will most definitely have errors at run time!

    #EDITED BY STEVE ERDMAN
    # This is the banner that will be seen by all systems connecting to our Postfix server.
    smtpd_banner = The Erd-Manor-dot-com ESMTP Relay

    #Biff is an old legacy thing that isn't needed anymore and can cause performance issues if left on.
    biff = no

    #We don't want to help anyone out. If you're hosting more than one domain, you'd better leave this off (no).
    append_dot_mydomain = no

    #This is how long Postfix will wait before sending the original sender a "delayed mail"
    #warning that the message hasn't been delivered yet.
    delay_warning_time = 4h

    #This tells Postfix where to send mail on the next hop. You need this if you have more than 1 domain.
    transport_maps = hash:/etc/postfix/transport

    #The Internet hostname of this mail system. The default is to use the fully-qualified domain name (FQDN)
    #of your MX record.
    myhostname = smtp.erdmanor.com

    #The alias databases that are used for local mail delivery. We'll be modifying this later.
    alias_maps = hash:/etc/aliases

    #This is just where the aliases exist at.
    alias_database = hash:/etc/aliases

    #For most cases, your /etc/mailname file should contain the "myhostname" value. In this case, smtp.erdmanor.com
    myorigin = /etc/mailname

    #What destination domains (and subdomains thereof) this system will relay mail to.
    #This can be a file or a list of domains, that, are, comma, separated
    relay_domains = erdman.cc, erdmanor.com

    #The list of domains that are delivered via the $local_transport mail delivery transport. By default
    #this is the Postfix local delivery agent which looks up all recipients in /etc/passwd and /etc/aliases.
    #The SMTP server validates recipient addresses with $local_recipient_maps and rejects non-existent
    #recipients.This can be a file or a list of domains
    mydestination = debian.example.com, localhost

    #This is usually the primary IP address of your Internal Exchange Server. This value is trumped by "transport_maps"
    # so if you have multiple relay servers, you can comment this out like I have.
    #relayhost = 192.168.0.125

    # This is just a list of your internal networks. The list of "trusted" remote SMTP clients that have more
    #privileges than "strangers". You can also specify "/file/name" or "type:table" patterns.
    mynetworks = 127.0.0.0/8, 192.168.0.0/24

    #The maximal size of any local individual mailbox or maildir file, or zero (no limit). In fact, this limits
    #the size of any file that is written to upon local delivery, including files written by external commands
    #that are executed by the local delivery agent. This limit must not be smaller than the message size limit.
    mailbox_size_limit = 0

    #The separator between user names and address extensions (user+foo). Basically, the software tries user+foo
    #and .forward+foo before trying user and .forward. Just leave it the way it is.
    recipient_delimiter = +

    #The network interface addresses that this mail system receives mail on. Specify "all" to receive mail on all
    #network interfaces (default) and "loopback-only" to receive mail on loopback network interfaces only.
    inet_interfaces = all

    #After the message is queued, send the entire message to the specified transport:destination. The transport
    #name specifies the first field of a mail delivery agent definition in master.cf; the syntax of the next-hop
    #destination is described in the manual page of the corresponding delivery agent. More information about
    #external content filters is in the Postfix FILTER_README file.
    content_filter = smtp-amavis:[127.0.0.1]:10024

    #Enable or disable recipient validation, built-in content filtering, or address mapping. Typically, these
    # are specified in master.cf as command-line arguments... Specify zero or more of the following options.
    #The options override main.cf settings and are either implemented by smtpd(8), qmqpd(8), or pickup(8)
    #themselves, or they are forwarded to the cleanup server.
    #no_address_mappings means that we will disable canonical address mapping, virtual alias map expansion,
    #address masquerading, and automatic BCC (blind carbon-copy) recipients. This is typically specified
    #BEFORE an external content filter.
    receive_override_options = no_address_mappings

    #Require that addresses received in SMTP MAIL FROM and RCPT TO commands are enclosed with <>, and that
    #those addresses do not contain RFC 822 style comments or phrases. This stops mail from poorly written
    #software. By default, the Postfix SMTP server accepts RFC 822 syntax in MAIL FROM and RCPT TO addresses.
    strict_rfc821_envelopes = yes

    #Reject the request when the HELO or EHLO hostname has no DNS A or MX record. The
    #unknown_hostname_reject_code parameter specifies the numerical response code for rejected requests
    #(default: 450). This is a strong way to stop many spammers.
    unknown_hostname_reject_code = 450

    #The numerical Postfix SMTP server response code when a client without valid address <=> name mapping
    # is rejected by the reject_unknown_client_hostname restriction. The SMTP server always replies with
    #450 when the mapping failed due to a temporary error condition. Do not change this unless you have a
    # complete understanding of RFC 5321. Turning this on can cause a lot of false positives, test this out.
    ### unknown_client_reject_code = 450

    #Disable the SMTP VRFY command. This stops some techniques used to harvest email addresses.
    disable_vrfy_command = yes

    #Wait until the RCPT TO command before evaluating $smtpd_client_restrictions, $smtpd_helo_restrictions
    #and $smtpd_sender_restrictions, or wait until the ETRN command before evaluating
    #$smtpd_client_restrictions and $smtpd_helo_restrictions. This feature is turned on by default because
    #some clients apparently mis-behave when the Postfix SMTP server rejects commands before RCPT TO.The
    #default setting has one major benefit: it allows Postfix to log recipient address information when
    #rejecting a client name/address or sender address, so that it is possible to find out whose mail is
    #being rejected.
    smtpd_delay_reject = yes

    #Require that a remote SMTP client introduces itself with the HELO or EHLO command before sending the
    #MAIL command or other commands that require EHLO negotiation.
    smtpd_helo_required = yes

    #You need to read this --> http://www.postfix.org/postconf.5.html#smtpd_client_restrictions
    smtpd_client_restrictions =
            permit_mynetworks,
            check_client_access hash:/etc/postfix/client_access,
            reject_unknown_client_hostname,
    #Below are all of the DNS Blacklists that Spam originates from.
            reject_rbl_client sbl-xbl.spamhaus.org,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client dul.dnsbl.sorbs.net,
            reject_rbl_client sbl.spamhaus.org,
            permit

    # You need to read this --> http://www.postfix.org/postconf.5.html#smtpd_helo_restrictions
    smtpd_helo_restrictions =
            permit_mynetworks,
            check_helo_access hash:/etc/postfix/helo_access,
            reject_non_fqdn_helo_hostname,
            reject_invalid_helo_hostname,
    #        reject_unknown_helo_hostname, #This can cause false positives, test before production!
            permit

    smtpd_sender_restrictions =
            permit_mynetworks,
            check_sender_access hash:/etc/postfix/sender_access,
            reject_non_fqdn_sender,
            reject_unknown_sender_domain, #This can cause false positives, test before production!
            permit

    smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination,
            reject_invalid_hostname,
            reject_non_fqdn_hostname, #This can cause false positives, test before production!
            reject_non_fqdn_recipient,
            reject_unknown_recipient_domain,
            permit

    smtpd_error_sleep_time = 1s
    smtpd_soft_error_limit = 10
    smtpd_hard_error_limit = 20



     

     

    Wow, that took forever…

     

     

    Now we need to jump into the “master.cf” file. This one is a bit trickier than “main.cf” in that it has a lot more little tweaks. For more info on “master.cf”, there is an excellent FAQ on Postfix’s website: HERE. I’ll do this the same as I did for the “main.cf” file, attempting to explain as much as I can so that you understand what everything is doing. 🙂 Remember to take out ALL of my comments before pasting this config into your “master.cf” file! If you don’t, you will most definitely have errors at run time!

     

    Here we go, here’s my “master.cf” file:

    # ==========================================================================
    # service type  private unpriv  chroot  wakeup  maxproc command + args
    #               (yes)   (yes)   (yes)   (never) (100)
    # ==========================================================================
    smtp      inet  n       -       -       -       -       smtpd
    #submission inet n       -       -       -       -       smtpd
    #  -o milter_macro_daemon_name=ORIGINATING
    #628       inet  n       -       -       -       -       qmqpd
    pickup    fifo  n       -       -       60      1       pickup
             -o content_filter=
             -o receive_override_options=no_header_body_checks
    cleanup   unix  n       -       -       -       0       cleanup
    qmgr      fifo  n       -       n       300     1       qmgr
    #qmgr     fifo  n       -       -       300     1       oqmgr
    tlsmgr    unix  -       -       -       1000?   1       tlsmgr
    rewrite   unix  -       -       -       -       -       trivial-rewrite
    bounce    unix  -       -       -       -       0       bounce
    defer     unix  -       -       -       -       0       bounce
    trace     unix  -       -       -       -       0       bounce
    verify    unix  -       -       -       -       1       verify
    flush     unix  n       -       -       1000?   0       flush
    proxymap  unix  -       -       n       -       -       proxymap
    proxywrite unix -       -       n       -       1       proxymap
    smtp      unix  -       -       -       -       -       smtp
    relay     unix  -       -       -       -       -       smtp
        -o smtp_fallback_relay=
    showq     unix  n       -       -       -       -       showq
    error     unix  -       -       -       -       -       error
    retry     unix  -       -       -       -       -       error
    discard   unix  -       -       -       -       -       discard
    local     unix  -       n       n       -       -       local
    virtual   unix  -       n       n       -       -       virtual
    lmtp      unix  -       -       -       -       -       lmtp
    anvil     unix  -       -       -       -       1       anvil
    scache    unix  -       -       -       -       1       scache
    maildrop  unix  -       n       n       -       -       pipe
      flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
    uucp      unix  -       n       n       -       -       pipe
      flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)

    smtp-amavis     unix    -       -       -       -       2       smtp
            -o smtp_data_done_timeout=1200
            -o smtp_send_xforward_command=yes
            -o disable_dns_lookups=yes
            -o max_use=20

    127.0.0.1:10025 inet    n       -       -       -       -       smtpd
            -o content_filter=
            -o local_recipient_maps=
            -o relay_recipient_maps=
            -o smtpd_restriction_classes=
            -o smtpd_delay_reject=no
            -o smtpd_client_restrictions=permit_mynetworks,reject
            -o smtpd_helo_restrictions=
            -o smtpd_sender_restrictions=
            -o smtpd_recipient_restrictions=permit_mynetworks,reject
            -o smtpd_data_restrictions=reject_unauth_pipelining
            -o smtpd_end_of_data_restrictions=
            -o mynetworks=127.0.0.0/8
            -o smtpd_error_sleep_time=0
            -o smtpd_soft_error_limit=1001
            -o smtpd_hard_error_limit=1000
            -o smtpd_client_connection_count_limit=0
            -o smtpd_client_connection_rate_limit=0
            -o receive_override_options=no_header_body_checks,no_unknown_recipient_checks
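
Whitespace is significant in “master.cf” (the “-o” continuation lines must start with whitespace), so it’s easy to break. Before restarting, you can have Postfix parse both config files and report problems. A minimal sketch, guarded so it degrades gracefully on a machine without Postfix; run it as root on the mail host:

```shell
# "postfix check" parses main.cf and master.cf and warns about problems;
# a clean parse prints nothing, so we add our own confirmation message.
if command -v postfix >/dev/null 2>&1; then
    postfix check && echo "main.cf and master.cf parse cleanly" \
        || echo "postfix check reported a problem (or needs root)"
else
    echo "postfix not installed here"
fi
```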

     

Now we need to take care of our “client_access”, “helo_access”, “sender_access” and “transport” files, which we referred to earlier. There are many files like these that can be referenced by the “main.cf” file, but these are really the only ones we need. Theoretically, we could have created a bunch more of them, and in a large enterprise that owns hundreds or thousands of domains, it’s almost a necessity to do so. For all the info you need about these files, look at the Postfix database documentation and the access(5) man page.

Back in the “main.cf” file, we added a line item that looks like this: “check_client_access hash:/etc/postfix/client_access“. The purpose of the client access file is to “search the specified access database for the client hostname, parent domains, client IP address, or networks obtained by stripping least significant octets.” So what does that mean? Basically, this file acts like an access control list for remote SMTP servers. It checks client information: host names, network addresses, and envelope sender or recipient addresses.

As a safeguard, we should NEVER accept mail claiming to come from our own domain when it arrives from a remote source. Our Exchange server is inside our organization already and will process our internal mail for us, so this proxy will deny anyone out on the internet trying to spoof mail into our domain. You want to make sure every domain you own is in this list, and you can also do some “whitelisting” in here as well. Let’s get our “client_access” file going:

    erdmanor.com        REJECT
    erdman.cc       REJECT
    74.114.46.150       OK
    directv.com     OK
    linuxmint.com       OK
    forums.linuxmint.com    OK
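
To confirm the compiled map returns what you expect, you can query it directly with “postmap -q” (a matching lookup prints the action column; a miss prints nothing). A sketch, assuming the file lives in /etc/postfix and has already been compiled with postmap, guarded in case Postfix isn’t installed where you run it:

```shell
# Query the hash map directly; each hit prints the action from the file.
if command -v postmap >/dev/null 2>&1; then
    postmap -q "erdmanor.com"  hash:/etc/postfix/client_access   # expect: REJECT
    postmap -q "74.114.46.150" hash:/etc/postfix/client_access   # expect: OK
    postmap -q "no-such.example" hash:/etc/postfix/client_access || echo "no match"
else
    echo "postmap not available here"
fi
```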

     

Now for our “helo_access” file. This file is much the same; it’s another ACL that we are setting up. Postfix states that this will “search the specified access database for the MX hosts for the HELO or EHLO hostname, and execute the corresponding action. Note 1: a result of ‘OK’ is not allowed for safety reasons. Instead, use DUNNO in order to exclude specific hosts from blacklists. Note 2: specify ‘smtpd_helo_required = yes’ to fully enforce this restriction (without ‘smtpd_helo_required = yes’, a client can simply skip check_helo_mx_access by not sending HELO or EHLO).”

    erdmanor.com            REJECT
    erdman.cc           REJECT
    /^smtp\.erdman\.cc$/        550 Don't use my own hostname
    /^smtp\.erdmanor\.com$/     550 Don't use my own hostname
    /^mail\.erdman\.cc$/        550 Don't use my own hostname
    /^mail\.erdmanor\.com$/     550 Don't use my own hostname
    /^ns1\.erdman\.cc$/     550 Don't use my own hostname
    /^ns1\.erdmanor\.com$/      550 Don't use my own hostname
    /^ns2\.erdman\.cc$/     550 Don't use my own hostname
    /^ns2\.erdmanor\.com$/      550 Don't use my own hostname
    /^\[108\.227\.33\.121\]$/   550 Don't use my own IP address
    /^\[108\.227\.33\.122\]$/   550 Don't use my own IP address
    /^\[108\.227\.33\.123\]$/   550 Don't use my own IP address
    /^\[108\.227\.33\.124\]$/   550 Don't use my own IP address
    /^\[108\.227\.33\.125\]$/   550 Don't use my own IP address
    /^[0-9.]+$/         550 Your software is not RFC 2821 compliant
    /^[0-9]+(\.[0-9]+){3}$/     550 Your software is not RFC 2821 compliant
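
One caveat worth testing: the /…/ patterns above only match when the table is declared as a regexp: (or pcre:) type in “main.cf”. A hash: table compiled with postmap treats those lines as literal keys and they will never match. You can check a pattern against a regexp table directly (no postmap compilation needed); a sketch, guarded in case Postfix isn’t present:

```shell
# Regexp tables are read as-is at lookup time; a hit prints the action text.
if command -v postmap >/dev/null 2>&1; then
    postmap -q "smtp.erdman.cc" regexp:/etc/postfix/helo_access \
        || echo "no match (file missing, or the pattern did not hit)"
else
    echo "postmap not available here"
fi
```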

     

Moving right along, let’s look at the “sender_access” file. Again, this is another ACL; it searches the specified access database for the MAIL FROM address, domain, parent domains, or localpart@, and executes the corresponding action. We want all of our domains in here as well, for the same reason as the “client_access” file.

    erdmanor.com            REJECT
    erdman.cc               REJECT
    forums.linuxmint.com    OK
    linuxmint.com       OK

     

And lastly, our transport file. This file is really important: without it working properly we won’t get any mail through this proxy at all. It maps each domain to the internal server that mail for that domain should be relayed to.

    erdmanor.com        smtp:[192.168.0.126]
    erdman.cc       smtp:[192.168.0.127]
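
You can verify the routing the same way as the other maps, by asking the compiled table where mail for a domain will be relayed. A sketch, assuming the file is at /etc/postfix/transport and has been run through postmap, guarded for machines without Postfix:

```shell
# A hit prints the transport column, e.g. smtp:[192.168.0.126] once the
# map has been built from the file above.
if command -v postmap >/dev/null 2>&1; then
    postmap -q "erdmanor.com" hash:/etc/postfix/transport \
        || echo "lookup failed (map not built yet?)"
else
    echo "postmap not available here"
fi
```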

     
     

Now that we have our access and transport files completed, we need to make them usable to Postfix by compiling them into database files with the “postmap” command.

    sudo postmap client_access
    sudo postmap helo_access
    sudo postmap sender_access
    sudo postmap transport

     

    ANYTIME YOU MODIFY THESE 4 FILES YOU MUST RUN THE POSTMAP COMMAND AGAINST THEM AND THEN RESTART POSTFIX! NO EXCEPTIONS!
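
To make that routine harder to forget, you could wrap it in a small helper. This is a hypothetical sketch (the function name “rebuild_maps” is mine, and it assumes the four files live in /etc/postfix); run it as root after editing any of the maps:

```shell
# rebuild_maps: recompile each lookup table, then syntax-check Postfix and
# reload it so the running daemons pick up the new .db files.
rebuild_maps() {
    dir=${1:-/etc/postfix}
    for f in client_access helo_access sender_access transport; do
        postmap "$dir/$f" || { echo "postmap failed on $f" >&2; return 1; }
    done
    postfix check && postfix reload
}
```

“postfix reload” is enough to pick up map changes; the full init.d restart shown later in this post works just as well.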

     

Now that Postfix is set up and ready to go, let’s get it restarted and watch the log files at the same time. You should still have a second terminal open, so start your “tail”, then restart Postfix.

    steve@debian:~$ sudo tail -f /var/log/syslog
    steve@debian:~$ sudo /etc/init.d/postfix restart
    Stopping Postfix Mail Transport Agent: postfix.
    Starting Postfix Mail Transport Agent: postfix.

     

    Here is the output from the tail:

    Dec 28 11:34:12 debian postfix/master[1481]: terminating on signal 15
    Dec 28 11:34:12 debian postfix/master[3266]: daemon started -- version 2.7.1, configuration /etc/postfix

     

     

At a bare minimum here, assuming your DNS records are set up properly, your MX records have propagated throughout the Internet, your firewall and your Exchange box are configured correctly, and the other million variables are good, you should be able to drop this in between your firewall and your Exchange server. I would suggest putting it in a DMZ facing the internet, as I explained in a previous blog, “Serious network architecture that works for everyone“.

     

     

    SPAM Filter: SpamAssassin Configuration

Now that our Postfix proxy is moving mail properly, let’s get some SPAM engines configured. 🙂 We’ll start with SpamAssassin. A brief background: SpamAssassin is an open source engine that is actually used behind the scenes in a TON of other SPAM filtering products.

Let’s get a quick idea of where SpamAssassin stores its files:

    /etc/spamassassin/
    /etc/cron.daily/spamassassin
    /etc/default/spamassassin
    /etc/init.d/spamassassin
    /etc/mail/spamassassin
    /usr/bin/spamassassin
    /usr/share/spamassassin/
    /usr/share/doc/spamassassin/
    /usr/share/man/man1/spamassassin*
    /usr/share/perl5/spamassassin-run.pod
    /var/lib/spamassassin/
    /var/lib/amavis/.spamassassin
    /var/lib/dpkg/info/spamassassin.*

     

     

Here is what my “/etc/spamassassin/local.cf” file looks like. I’ll comment on the file as I did earlier in this blog. Don’t forget to remove ALL “#comments” before using this in your configuration. If you don’t, you will most definitely have errors at run time! Also, according to SpamAssassin, “There are now multiple files read to enable plugins in the /etc/mail/spamassassin directory; previously only one, ‘init.pre’, was read. Now both ‘init.pre’ and ‘v310.pre’, and any other files ending in ‘.pre’, will be read. As future releases are made, new plugins will be added to new files, named according to the release they’re added in.” So we’re going to have to go through that stuff as well. Again, if you would like any further information, I urge you to visit the SpamAssassin page for the local.cf configuration settings.

    # I recommend not using this for this implementation. Our Postfix Server is acting as a Proxy to our
    # Exchange server. If you have internal servers that need to get mail to your users, then the best
    # place to handle that workload is at the Exchange Server Receive connectors. Send your internal mail there.
    # trusted_networks 192.168.0.0/24

    #Here is where we do our subject line rewrite for mail that is marked as SPAM.
    rewrite_header Subject  [***** SPAM _SCORE_ *****]

    #Score that a message needs to get to in order to be classified as SPAM.
    # this number is actually pretty high, but after tweaking it, you can lower it to 4.5 or 5.0.
    required_score      7.0

    #If a message exceeds the required score above, it is packed up into an attachment and
    # forwarded to the recipient inside a plain-text wrapper. It is up to the user to inspect it and go from there.
    report_safe     2

    # Turn on DCC
    # dcc
    use_dcc 1
    dcc_path            /usr/bin/dccproc
    dcc_add_header          1
    dcc_dccifd_path         /usr/sbin/dccifd

    # Turning on the skip_rbl_checks setting will disable the DNSEval plugin, which implements Real-time Block
    # List (or: Blackhole List) (RBL) lookups. We WANT Those checks to happen so leave this at ZERO (0).
    skip_rbl_checks     0

    #razor
    use_razor2          1
    razor_config            /etc/razor/razor-agent.conf

    #pyzor
    pyzor_options           --homedir /etc/mail/spamassassin discover
    use_pyzor           1
    pyzor_path          /usr/bin/pyzor
    pyzor_add_header        1


    # Language and Location options. I have mine set to only allow English. If you work at a large international
    # business you'll want to setup all the languages your company communicates in or just say allow all:
    #  ok_locales all         (allow all locales)
    #  ok_locales en          (only allow English)
    #  ok_locales en ja zh    (allow English, Japanese, and Chinese)
    ok_locales              en


    # The next three deal with the Bayes system and how SpamAssassin actually can "learn" spam.
    use_bayes       1
    use_bayes_rules     1
    bayes_auto_learn    1
    use_learner 1

    # If you receive mail filtered by upstream mail systems, like a spam-filtering ISP or mailing list, and that
    # service adds new headers (as most of them do), these headers may provide inappropriate cues to the Bayesian
    # classifier, allowing it to take a "short cut". To avoid this, list the headers using this setting. Example:
    # bayes_ignore_header X-Upstream-Spamfilter
    # bayes_ignore_header X-Upstream-SomethingElse
    bayes_ignore_header X-Bogosity
    bayes_ignore_header X-Spam-Flag
    bayes_ignore_header X-Spam-Status

    # To be accurate, the Bayes system does not activate until a certain number of ham (non-spam) and
    # spam have been learned. The default is 200 of each ham and spam, but you can tune these up or
    # down with these two settings.
    bayes_min_ham_num        20 #default is 200
    bayes_min_spam_num       20 #default is 200

    # The Bayes system will, by default, learn any reported messages (spamassassin -r) as spam.
    # If you do not want this to happen, set this option to 0.
    bayes_learn_during_report      1


    # SpamAssassin will opportunistically sync the journal and the database. It will do so once a day,
    # but will sync more often if the journal file size goes above this setting, in bytes. If set to
    # 0, opportunistic syncing will not occur.
    bayes_journal_max_size        102400

    # What should be the maximum size of the Bayes tokens database? When expiry occurs, the Bayes
    # system will keep either 75% of the maximum value, or 100,000 tokens, whichever has a larger
    # value. 150,000 tokens is roughly equivalent to a 8Mb database file.
    bayes_expiry_max_db_size      200000

    # If enabled, the Bayes system will try to automatically expire old tokens from the database.
    # Auto-expiry occurs when the number of tokens in the database surpasses the
    # bayes_expiry_max_db_size value.
    bayes_auto_expire      1


    # If this option is set, whenever SpamAssassin does Bayes learning, it will put the information
    # into the journal instead of directly into the database. This lowers contention for locking the
    # database to execute an update, but will also cause more access to the journal and cause a delay
    # before the updates are actually committed to the Bayes database.
    bayes_learn_to_journal 0



    #   Some shortcircuiting, if the plugin is enabled
    #
    ifplugin Mail::SpamAssassin::Plugin::Shortcircuit

    #   default: strongly-whitelisted mails are *really* whitelisted now, if the
    #   shortcircuiting plugin is active, causing early exit to save CPU load.
    #   We turn these on here:
    shortcircuit USER_IN_WHITELIST       on
    shortcircuit USER_IN_DEF_WHITELIST   on
    shortcircuit USER_IN_ALL_SPAM_TO     on
    shortcircuit SUBJECT_IN_WHITELIST    on

    #   the opposite; blacklisted mails can also save CPU
    shortcircuit USER_IN_BLACKLIST       on
    shortcircuit USER_IN_BLACKLIST_TO    on
    shortcircuit SUBJECT_IN_BLACKLIST    on

    #   and a well-trained bayes DB can save running rules, too
    #
    shortcircuit BAYES_99                spam
    shortcircuit BAYES_00                ham

    endif # Mail::SpamAssassin::Plugin::Shortcircuit

     

     

    Here’s that exact same file without all the comments:

    rewrite_header Subject  [***** SPAM _SCORE_ *****]
    required_score          7.0
    report_safe         2
    use_dcc 1
    dcc_path                /usr/bin/dccproc
    dcc_add_header          1
    dcc_dccifd_path         /usr/sbin/dccifd
    skip_rbl_checks     0
    use_razor2          1
    razor_config            /etc/razor/razor-agent.conf
    pyzor_options           --homedir /etc/mail/spamassassin discover
    use_pyzor               1
    pyzor_path          /usr/bin/pyzor
    pyzor_add_header        1
    ok_locales              en
    use_bayes       1
    use_bayes_rules     1
    bayes_auto_learn    1
    use_learner 1
    bayes_ignore_header X-Bogosity
    bayes_ignore_header X-Spam-Flag
    bayes_ignore_header X-Spam-Status
    bayes_min_ham_num        20 #default is 200
    bayes_min_spam_num       20 #default is 200
    bayes_learn_during_report      1
    bayes_journal_max_size        102400
    bayes_expiry_max_db_size      200000
    bayes_auto_expire      1
    bayes_learn_to_journal 0


    ifplugin Mail::SpamAssassin::Plugin::Shortcircuit

    shortcircuit USER_IN_WHITELIST       on
    shortcircuit USER_IN_DEF_WHITELIST   on
    shortcircuit USER_IN_ALL_SPAM_TO     on
    shortcircuit SUBJECT_IN_WHITELIST    on

    shortcircuit USER_IN_BLACKLIST       on
    shortcircuit USER_IN_BLACKLIST_TO    on
    shortcircuit SUBJECT_IN_BLACKLIST    on

    shortcircuit BAYES_99                spam
    shortcircuit BAYES_00                ham

    endif # Mail::SpamAssassin::Plugin::Shortcircuit

     

    Now we need to restart SpamAssassin and test out our changes.

    sudo sa-update -D --updatedir /tmp/updates
    sudo /etc/init.d/spamassassin restart
    echo "test" | sudo spamassassin -D pyzor 2>&1 | less
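
A quicker end-to-end check is the GTUBE string, which SpamAssassin ships precisely for this purpose: any message containing it should score far above your required_score and be tagged as spam. A sketch (the -L flag keeps all tests local so it runs fast, and the guard skips it on machines without SpamAssassin):

```shell
# GTUBE: the standard anti-spam test string; a working SpamAssassin install
# will flag any message containing it.
if command -v spamassassin >/dev/null 2>&1; then
    printf 'Subject: gtube test\n\nXJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X\n' \
        | spamassassin -L -t | tail -n 20
else
    echo "spamassassin not installed here"
fi
```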

     

    Alright, enough SpamAssassin stuff. Let’s get Amavis up and running.

     

     

    SPAM Filter: Amavis-New Configuration

Now that our Postfix proxy is moving mail properly and SpamAssassin is filtering it, let’s get amavisd-new configured. Remember what we said before: Amavis sends mail to SpamAssassin by default, which is why we set up SpamAssassin first. To have Amavis properly scanning mail we’ll be configuring files in your /etc/amavis/ directory. Before we jump into that, let’s get an idea of where Amavis lives on your server. Below is where a default install puts its files:

    /etc/amavis/
    /etc/amavis/conf.d/
    /etc/cron.d/amavisd-new
    /etc/cron.daily/amavisd-new
    /etc/cron.hourly/amavisd-new
    /etc/init.d/amavis
    /etc/init.d/amavisd-new-milter
    /etc/ldap/schema/amavis.schema
    /etc/logcheck/ignore.d.server/amavisd-new
    /etc/logcheck/violations.ignore.d/amavisd-new
    /usr/sbin/amavis
    /usr/sbin/amavis-milter
    /usr/sbin/amavisd-agent
    /usr/sbin/amavisd-nanny
    /usr/sbin/amavisd-new
    /usr/sbin/amavisd-new-cronjob
    /usr/sbin/amavisd-release
    /usr/share/amavis/
    /usr/share/amavis/conf.d/
    /usr/share/doc/amavisd-new/
    /usr/share/lintian/overrides/amavisd-new
    /var/lib/amavis/
    /var/lib/dpkg/info/amavisd-ne*
    /var/lib/update-rc.d/amavis
    /var/lib/update-rc.d/amavisd-new-milter
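
Once Amavis is running, a quick sanity check is to confirm its listeners are up: on Debian, amavisd-new listens on 127.0.0.1:10024 by default, and in “master.cf” above we defined the 127.0.0.1:10025 smtpd that takes the filtered mail back. A sketch, guarded so it runs harmlessly anywhere:

```shell
# Look for the amavis content-filter port (10024) and the Postfix
# reinjection listener (10025) we defined in master.cf.
if command -v netstat >/dev/null 2>&1; then
    netstat -ntl | grep -E ':(10024|10025) ' \
        || echo "amavis (10024) / reinjection (10025) listeners not found"
else
    echo "netstat not available here"
fi
```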

     

I know that seems like a lot, but we’ll try to cover it all. Amavis is really a different beast than SpamAssassin, but since SpamAssassin is already doing the brunt of the work, we can take our time with this one a bit.

     


    SPF Records

The last thing I wanted to cover in this blog: since we’re hosting our own DNS and mail servers, it would only be right for us to cover DNS SPF records. This is just another layer of security that we *should be* using to help strengthen not only our email, but our whole external domain.
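
As a sketch, an SPF policy is published as a TXT record in your zone. The record below is illustrative only: it uses the erdmanor.com zone from this article and one of the public addresses that appeared in the helo_access file, authorizes the domain’s MX hosts plus that one address, and tells receivers to hard-fail everything else:

```
; Illustrative SPF policy: allow this domain's MX hosts plus one explicit
; address, and hard-fail mail from anywhere else.
erdmanor.com.   IN  TXT  "v=spf1 mx ip4:108.227.33.121 -all"
```

While testing, “~all” (softfail) is a gentler policy than “-all”, and you can check what’s actually published with “dig +short TXT erdmanor.com”.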

     

     

    STILL A WORK IN PROGRESS!

    I updated this again on 2/3/13. But I’m lazy, so… there’s no change log. 🙂

     

     

    References for this blog go out to:
    http://pcsupport.about.com/od/tipstricks/a/free-public-dns-servers.htm
    http://www.bind9.net/manuals
    http://www.zytrax.com/books/dns/ch7/queries.html
    http://www.itechlounge.net/2011/12/bind-unexpected-rcode-refused-resolving-xx-xx-xx-xx-in-addr-arpaptrin/
    http://www.webupd8.org/2009/11/how-to-disable-ipv6-in-ubuntu-910.html
    http://pgl.yoyo.org/as/bind-zone-file-creator.php
    http://postfix.1071664.n5.nabble.com/Unknown-Recipient-Domain-td44755.html
    http://www.cyberciti.biz/tips/howto-postfix-flush-mail-queue.html
    http://www.zytrax.com/books/dns/
    http://www.fencepost.net/2010/03/fix-postfix-recipient-address-rejected-domain-not-found/
    http://www.zytrax.com/books/dns/ch7/xfer.html#notify
    http://www.zytrax.com/books/dns/ch8/soa.html
    http://www.postfix.org/addon.html
    http://wiki.apache.org/spamassassin/UsingRazor
    http://wiki.apache.org/spamassassin/UsingPyzor
    http://wiki.apache.org/spamassassin/UsingDcc
    http://www.dcc-servers.net/dcc/
    http://www.kaspersky.com/linux-mail-security
    http://www.postfix.org/FILTER_README.html
    http://www.giac.org/paper/gsec/2824/smtp-gateway-virus-filtering-amavis-postfix/104787
    http://advosys.ca/papers/email/53-postfix-filtering.html
    http://mailtools.anomy.net/
    http://www.dcc-servers.net/dcc/INSTALL.html
    http://www.amavis.org/
    http://spamassassin.apache.org/
    http://www.postfix.org/
    http://onetforum.com/fourm/viewtopic.php?p=27
    http://wiki.apache.org/spamassassin/WritingRules
    http://codesorcery.net/old/docs/spamtricks.html
    http://svn.apache.org/repos/asf/spamassassin/branches/3.3/spamd/README
    http://spamassassin.apache.org/full/3.3.x/doc/Mail_SpamAssassin_Conf.html
    http://wiki.apache.org/spamassassin/FrequentlyAskedQuestions


    Open Source: Postfix Mail Relay, SPAM filter, DNS Server, Web Server, AWStats, ISPConfig3 and More!


Everyone out there hates SPAM, right? I know I do. And my domain isn’t out there that much, so I can’t say that I get anywhere near as much SPAM as some large enterprise businesses do. What if I told you that your Barracuda SPAM filter, or your McAfee SPAM filter, or whatever paid product, is junk? What if I told you that we can get you up and running with a FREE SPAM filter for your mail server? What if I told you that it was just as easy to set up and use as your current SPAM filter? How about this question: how much are you paying for your current SPAM filter?

Well, this blog post is getting put together for all you people out there that love spending money on useless junk. Welcome to the world of free software projects that have been around for well more than a decade. Instead of paying $100K+ a year for an appliance, how about you employ a real person to manage a few Linux boxes? That’s exactly what we’re planning right here. So come along; we’re going to show you how to set up your already existing Microsoft Exchange server to sit in a more secure, higher-tier DMZ, and set up a Debian server, from scratch, to host a Postfix server that is going to work with Amavis, SpamAssassin, and ClamAV to securely inspect all your mail.

    Warning… This blog is long. Be prepared, and make sure you have TIME!

    I very seriously recommend following my previous blog on how to build a Debian Server: Debian Minimal Install

     
     

But if you want to just push forward, follow these instructions:

1. Start by getting your Debian server built and running: get a virtual machine up, boot to your small Debian ISO, and kick off the install.
2. You can really just hit “next” on many of the screens during the install: English language, USA, American English keyboard layout, etc…
3. Make sure you pick a server name that is going to last a while, like CompanySPAM01, or something unique like that.
4. Set up your domain name, root password, user accounts, etc…
5. Set up your partitions however you deem fit, install packages, pick a local Debian mirror repository, etc…
6. NOW, when you get to Software Selection, DO NOT INSTALL “Graphical Desktop Environment”. The only thing you need is an SSH server and the “Standard System Utilities”.
7. Install the GRUB boot loader as normal, and boom, you’re done!

     
     

Alright, so boot up your new Debian server and let’s get going. Log in as root (or whatever user you created) and let’s get some housekeeping completed.

     

So let’s get a static address on this thing by editing this file: /etc/network/interfaces

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    #allow-hotplug eth0
    #iface eth0 inet dhcp
    auto eth0
    iface eth0 inet static
    address 192.168.0.100
    netmask 255.255.255.0
    network 192.168.0.0
    broadcast 192.168.0.255
    gateway 192.168.0.1

     

    And you can restart networking with this:

    /etc/init.d/networking restart

     

    Next we’ll get the SSH Server so we can get some remote access to this server.

    apt-get install ssh openssh-server openssh-client

     

    When that’s done you should be able to SSH from your local machine to this virtual host using:

    ssh steve@192.168.0.100

     
     

    You’ll probably want to sudo from this user, so if that’s the case:

    su root
    Password:
    # apt-get install sudo
    # nano /etc/sudoers

     
     

When editing the sudoers file, be careful: if you break it, have fun! (It’s safer to use “visudo”, which syntax-checks the file before saving.) Just copy the line where root is and paste it right below, changing the name root to your username. Like this:

    # User privilege specification
    root ALL=(ALL) ALL
    steve ALL=(ALL) ALL

     
     

Now, we need to update this thing to install “Dotdeb” software, so edit your “/etc/apt/sources.list” and add:

    # Dotdeb repository
    deb http://packages.dotdeb.org squeeze all
    deb-src http://packages.dotdeb.org squeeze all

     
     

    Now we can add the GPG key:

    wget http://www.dotdeb.org/dotdeb.gpg
    cat dotdeb.gpg | sudo apt-key add -
    sudo apt-get update
    sudo apt-get upgrade

     
     

Now we need to make sure that NTP is installed and running properly on our new server. We’ll also need Postfix, Amavis, SpamAssassin, ClamAV, and a slew of other software, and at the same time go ahead and install Bind9 if you plan on hosting your externally facing DNS zones from here. It’s not a bad idea, and even if you’re a small company, you can easily do this on your own.

    apt-get install ntp ntpdate

     

    Then you can “sudo nano /etc/ntp.conf”

    # /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

    driftfile /var/lib/ntp/ntp.drift
    statistics loopstats peerstats clockstats
    filegen loopstats file loopstats type day enable
    filegen peerstats file peerstats type day enable
    filegen clockstats file clockstats type day enable

    # Specify one or more NTP servers.

    server kerberos.mydomain.com #insert your PDC here
    server kerberos2.mydomain.com #secondary DC
    server kerberos3.mydomain.com #third DC
    server 1.ubuntu.pool.ntp.org #fall back to Ubuntu's NTP
    server 2.ubuntu.pool.ntp.org #
    server 3.ubuntu.pool.ntp.org #
    #
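
After restarting ntp, it’s worth confirming the daemon actually reaches its peers. A sketch, guarded for machines without the ntp tools installed:

```shell
# ntpq -p lists the configured peers; a '*' in the first column marks the
# server currently selected for synchronization.
if command -v ntpq >/dev/null 2>&1; then
    ntpq -pn
else
    echo "ntpq not installed here"
fi
```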

     
     

    Now install more software:

    apt-get install postfix postfix-mysql postfix-doc mysql-client mysql-server openssl getmail4 rkhunter binutils

     

During the install, Postfix will ask you what type of site this is; make sure to choose “INTERNET SITE”. The system mail name is going to be the primary domain name that you own and operate; in my case this is “erdmanor.com”. Then you’ll be prompted to set up passwords for MySQL.

     

If you do a “netstat -ntap” you’ll see that MySQL is running bound to the local loopback (127.0.0.1). We don’t want this. We need to make sure that MySQL is listening on all interfaces, so comment out the bind address in “/etc/mysql/my.cnf”. Make sure to look at all the other options you can set in there too. It’s a pretty big conf file.

     

    And when you’re done, restart the MySQL Server like this: “sudo /etc/init.d/mysql restart”

    #bind-address = 127.0.0.1

     
     

    Now rerun your “netstat -ntap” and verify that it’s running on 0.0.0.0:3306.

    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN -

     
     

Alright, so let’s get some SPAM-killing software installed. Running this command will prompt you to install this software and a ton of dependencies. Save your scrollback so you can go through that stuff later.

    apt-get install amavisd-new spamassassin clamav clamav-daemon zoo unzip bzip2 arj nomarch lzop cabextract apt-listchanges libnet-ldap-perl libauthen-sasl-perl clamav-docs daemon libio-string-perl libio-socket-ssl-perl libnet-ident-perl zip libnet-dns-perl nginx

     
     

    Awesome, now we have most of the software we need. Let’s get the website up and running for our PHPMyAdmin site and ISPConfig3 software. Now, I’m no PHP wizard or expert, but all of these packages are necessary. If you need more information, I’ve left some links in the sources portion of this blog, all the way at the bottom. Again, you’ll see a bunch of dependencies installed here.

    apt-get install php5-fpm php5-mysql php5-curl php5-gd php5-intl php-pear php5-imagick php5-imap php5-mcrypt php5-memcache php5-ming php5-ps php5-pspell php5-recode php5-snmp php5-sqlite php5-tidy php5-xmlrpc php5-xsl fcgiwrap

     
     

    Now we’re ready to install PhpMyAdmin:

    apt-get install phpmyadmin

     

    You’ll see that Apache is installed at this time, again with many other dependencies. When installing this software, make sure that you answer these questions:
    1. Webserver to reconfigure: (this is a checkbox; don’t check either of them).
    2. Configure database for phpmyadmin with dbconfig-common?: NO

    PhpMyAdmin is installed into this directory: “/usr/share/phpmyadmin/” You can check it out like this:

    ls -alh /usr/share/phpmyadmin/

     
     

Like I stated before, Apache is installed now. We need to stop the Apache service while we’re configuring the server, and make sure that Apache doesn’t start with the system either; we’ll turn it back on later. Then we can get nginx (pronounced “engine-x”) started up.

    sudo /etc/init.d/apache2 stop
    sudo insserv -r apache2
    sudo /etc/init.d/nginx start

     
     

    Now we can get DNS working, but first we need to install it. We’ll configure it later.

    apt-get install bind9 dnsutils

     
     

If you’re looking to get some statistics from your server and analyze logs, etc., you’ll want to get some stat software installed.

    “Vlogger is a little piece of code borned to handle dealing with large amounts of virtualhost logs. it’s bad news that apache can’t do this on its own. vlogger takes piped input from apache, splits it off to separate files based on the first field. it uses a file handle cache so it can’t run out of file descriptors. it will also start a new logfile every night at midnight, and maintain a symlink to the most recent file. for security, it can drop privileges and do a chroot to the logs directory.”

     

    “The Webalizer is a fast, free web server log file analysis program. It produces highly detailed, easily configurable usage reports in HTML format, for viewing with a standard web browser.”

     

    “AWStats is a free powerful and featureful tool that generates advanced web, streaming, ftp or mail server statistics, graphically. This log analyzer works as a CGI or from command line and shows you all possible information your log contains, in few graphical web pages. It uses a partial information file to be able to process large log files, often and quickly. It can analyze log files from all major server tools like Apache log files (NCSA combined/XLF/ELF log format or common/CLF log format), WebStar, IIS (W3C log format) and a lot of other web, proxy, wap, streaming servers, mail servers and some ftp servers.”

     

    apt-get install vlogger webalizer awstats geoip-database

     
     

First thing we’ll do here is stop the AWStats cron job by commenting out all the lines in “/etc/cron.d/awstats”:

    #*/10 * * * * www-data [ -x /usr/share/awstats/tools/update.sh ] && /usr/share/awstats/tools/update.sh
    #
    # Generate static reports:
    #10 03 * * * www-data [ -x /usr/share/awstats/tools/buildstatic.sh ] && /usr/share/awstats/tools/buildstatic.sh

     
     

Next we’re going to make sure that Apache is stopped and that nginx is running so that we can install ISPConfig3. This is super important; otherwise you’ll have all kinds of issues during the install!

    sudo /etc/init.d/apache2 stop
    sudo /etc/init.d/nginx restart

     
     

    Now you need to download ISPConfig3 from their website. http://www.ispconfig.org/ispconfig-3/download/

    cd ~/tarballs #create this directory if it doesn't exist.
    wget http://prdownloads.sourceforge.net/ispconfig/ISPConfig-3.0.4.6.tar.gz
    tar -zxvf ISPConfig-3.0.4.6.tar.gz
    cd ~/tarballs/ispconfig3_install/install/
    sudo php -q install.php

     
     

    The ISPConfig3 installer is now running, and it will walk you through configuring all the necessary services. Here’s what that looks like:

    steve@:~/tarballs/ispconfig3_install/install$ sudo php -q install.php
    PHP Deprecated: Comments starting with '#' are deprecated in /etc/php5/cli/conf.d/ming.ini on line 1 in Unknown on line 0
    PHP Deprecated: Comments starting with '#' are deprecated in /etc/php5/cli/conf.d/ps.ini on line 1 in Unknown on line 0

    --------------------------------------------------------------------------------
     _____ ___________   _____              __ _         ____
    |_   _/  ___| ___ \ /  __ \            / _(_)       /__  \
      | | \ `--.| |_/ / | /  \/ ___  _ __ | |_ _  __ _    _/ /
      | |  `--. \  __/  | |    / _ \| '_ \|  _| |/ _` |  |_ |
     _| |_/\__/ / |     | \__/\ (_) | | | | | | | (_| | ___\ \
     \___/\____/\_|      \____/\___/|_| |_|_| |_|\__, | \____/
                                                  __/ |
                                                 |___/
    --------------------------------------------------------------------------------


    >> Initial configuration

    Operating System: Debian 6.0 (Squeeze/Sid) or compatible

    Following will be a few questions for primary configuration so be careful.
    Default values are in [brackets] and can be accepted with <ENTER>.
    Tap in "quit" (without the quotes) to stop the installer.

    Select language (en,de) [en]: en

    Installation mode (standard,expert) [standard]: standard

    Full qualified hostname (FQDN) of the server, eg server1.domain.tld [server.erdmanor.com]:

    MySQL server hostname [localhost]:

    MySQL root username [root]:

    MySQL root password []: {generate a long password here}

    MySQL database to create [dbispconfig]: {something clever}

    MySQL charset [utf8]:

    Apache and nginx detected. Select server to use for ISPConfig: (apache,nginx) [apache]: nginx

    Generating a 2048 bit RSA private key
    .......+++
    ..................................................................+++
    writing new private key to 'smtpd.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [AU]:US
    State or Province Name (full name) [Some-State]:Ohio
    Locality Name (eg, city) []:
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:Erdmanor.com
    Organizational Unit Name (eg, section) []:IT-IS
    Common Name (eg, YOUR name) []:Steve Erdman
    Email Address []:webmaster
    Configuring Jailkit
    Configuring SASL
    Configuring PAM
    Configuring Courier
    PHP Warning: chmod(): No such file or directory in /home/steve/tarballs/ispconfig3_install/install/lib/installer_base.lib.php on line 838
    Configuring Spamassassin
    Configuring Amavisd
    Configuring Getmail
    Configuring Pureftpd
    sh: cannot create /etc/pure-ftpd/conf/ChrootEveryone: Directory nonexistent
    sh: cannot create /etc/pure-ftpd/conf/BrokenClientsCompatibility: Directory nonexistent
    sh: cannot create /etc/pure-ftpd/conf/DisplayDotFiles: Directory nonexistent
    sh: cannot create /etc/pure-ftpd/conf/DontResolve: Directory nonexistent
    Configuring MyDNS
    Configuring nginx
    Configuring Vlogger
    Configuring Apps vhost
    Configuring Bastille Firewall
    PHP Notice: Undefined index: fail2ban in /home/steve/tarballs/ispconfig3_install/install/install.php on line 263
    Installing ISPConfig
    ISPConfig Port [8080]:

    Do you want a secure (SSL) connection to the ISPConfig web interface (y,n) [y]: y

    Generating RSA private key, 4096 bit long modulus
    .................................................................................................................................................................................................................................................++
    .............................................................................++
    e is 65537 (0x10001)
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [AU]:US
    State or Province Name (full name) [Some-State]:Ohio
    Locality Name (eg, city) []:Concord-Twp
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:Erdmanor.com
    Organizational Unit Name (eg, section) []:IT-IS
    Common Name (eg, YOUR name) []:Steve Erdman
    Email Address []:webmaster@erdmanor.com

    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:
    An optional company name []:Erdman.cc
    writing RSA key
    Configuring DBServer
    Installing ISPConfig crontab
    no crontab for root
    no crontab for getmail
    Restarting services ...
    Stopping MySQL database server: mysqld.
    Starting MySQL database server: mysqld ..
    Checking for tables which need an upgrade, are corrupt or were
    not closed cleanly..
    Stopping Postfix Mail Transport Agent: postfix.
    Starting Postfix Mail Transport Agent: postfix.
    Stopping amavisd: amavisd-new.
    Starting amavisd: amavisd-new.
    Stopping ClamAV daemon: clamd.
    Starting ClamAV daemon: clamd .
    Reloading PHP5 FastCGI Process Manager: php5-fpm.
    Reloading nginx configuration: nginx.
    Restarting nginx: nginx.
    nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
    nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
    nginx: [emerg] still could not bind()
    Installation completed.

     
     
     

    Now that you have ISPConfig3 installed, pop open a web browser and head over to your new ISPConfig3 control panel on port 8080. (If you saw the bind() errors at the end of the installer output, something was still holding port 80, usually a lingering apache2 process; stop it and restart nginx.) The default credentials are super secure: admin:admin. Obviously you’re going to be changing those… RIGHT?!
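Since you’re changing that default password, here’s a quick way to mint a strong replacement. openssl is already on the box; the installer just used it to build the SSL certs:

```shell
# 18 random bytes, base64-encoded: a 24-character password.
openssl rand -base64 18
```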

     

    You need to start by adding a new website to your ISPConfig3 admin console. So Click on “Sites” then “Create new website…”

     

    Here you need to fill out the proper information: the server the site is hosted on, the domain name you’re hosting, whether you need CGI, SSI, or SSL, and which flavor of PHP you want. Obviously it’ll be active.

     

    From what I’ve seen on some other websites, we need to create some “mod_rewrite” aliases, because the PhpMyAdmin console needs to be reachable from a few different URLs. So if you’re hosting multiple hostnames or domains from this server, you’ll basically need to create a vhost alias for each one. It’s a lot of manual work, but at the end of the day it’ll be worth it. I got this code snippet from the www.howtoforge.com website, so make sure to visit them and say thanks!

    This code MUST go into the “nginx Directives” field on the Options tab of each website managed inside ISPConfig3, as you can see in the graphic:

    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;
        location ~ ^/phpmyadmin/(.+\.php)$ {
            try_files $uri =404;
            root /usr/share/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 4k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_intercept_errors on;
        }
        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }

     
     

    Now, from a security perspective, I would highly recommend disabling HTTP (port 80) and only using HTTPS (SSL over port 443). I’m not stupid though, and realize that not everyone can afford to pay for a site certificate. If you’re a small organization, make sure to only allow access to this server from the internal network of your organization. Obviously this server should be sitting in your multi-tiered DMZ, as I outlined in a previous blog post, “Serious network architecture that works for everyone”.

    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;
        location ~ ^/phpmyadmin/(.+\.php)$ {
            try_files $uri =404;
            root /usr/share/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param HTTPS on; # fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 4k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_intercept_errors on;
        }
        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }
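For the “internal network only” restriction I mentioned, nginx’s allow/deny directives will do it. A sketch to drop inside the “location /phpmyadmin { … }” block above; 192.168.0.0/24 is a placeholder, so substitute your own internal subnet:

```nginx
# Only the internal subnet may reach phpmyadmin; everyone else gets a 403.
# 192.168.0.0/24 is a placeholder for your internal network range.
allow 192.168.0.0/24;
deny all;
```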

     
     

    If you are using HTTPS across your site and you want to force users to use it, then you need to edit your “/etc/nginx/nginx.conf” file with the code below. Make sure the code gets placed inside the braces of the http block, otherwise you’ll have all sorts of issues getting this to work:

    http {
        ## Detect when HTTPS is used
        map $scheme $fastcgi_https {
            default off;
            https on;
        }
    }
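If you truly want to disable plain HTTP (rather than just detect it), a catch-all server block can bounce everything to HTTPS. Another sketch for the http { } context of nginx.conf, assuming a blanket redirect is acceptable for all your sites:

```nginx
# Answer any plain-HTTP request with a permanent redirect to HTTPS.
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
```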

     

     

    Then validate the config and restart nginx:

    sudo nginx -t && sudo /etc/init.d/nginx restart

     
     

    For nginx to work over both HTTP and HTTPS, you’ll need to go into your “nginx Directives” again, and instead of “fastcgi_param HTTPS on”, add the line “fastcgi_param HTTPS $fastcgi_https” so that requests work over both protocols.

    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;
        location ~ ^/phpmyadmin/(.+\.php)$ {
            try_files $uri =404;
            root /usr/share/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param HTTPS $fastcgi_https; # fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 4k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_intercept_errors on;
        }
        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }

     
     

    Now, let’s get back to the mail setup. Start by running the “newaliases” command, then restart Postfix.

    sudo newaliases
    sudo /etc/init.d/postfix restart

     
     

    From here on out, everything should be manageable from the ISPConfig3 Control Panel. If you have any further questions, feel free to contact me!

     
     
     
     

    Sources:
    http://www.dotdeb.org/
    http://wiki.nginx.org/Main
    http://php-fpm.org/about/
    http://php.net/manual/en/book.apc.php
    http://www.if-not-true-then-false.com/2012/php-apc-configuration-and-usage-tips-and-tricks/
    http://nginx.localdomain.pl/wiki/FcgiWrap
    http://wiki.nginx.org/Fcgiwrap
    http://community.linuxmint.com/software/view/vlogger
    http://www.webalizer.org/
    http://awstats.sourceforge.net/
    http://www.howtoforge.com/perfect-server-debian-squeeze-debian-6.0-with-bind-dovecot-and-nginx-ispconfig-3


    Linuxy Stuff: DavMail

    So I actually have a few things I’m working on here, but I’ll focus this on just one topic. In talking with a coworker a couple of weeks ago, he introduced me to some great software that acts as a proxy to Microsoft Exchange. I’ve tested it with Exchange 2010, but I’m sure it works with previous releases as well. The name of the software is DavMail, and it works pretty damn well.

    I do hate POP mail, since you can only sync the Inbox folder. So if your existing Exchange account has multiple folders set up, with rules moving mail around, have fun with that. For sanity, I separate my mail quite a bit: for projects and certain people I get a lot of mail from, I create folders, and for people or departments I don’t get much mail from, I make folders too. It makes searching and archiving much easier.

    So after I tested out the POP connector, I promptly switched to IMAP (not that I like IMAP any better, but it can sync multiple folders). The sync still isn’t blazing fast, but it’s not a “push” service either: your client will check every 10 minutes (that setting is configurable) and download the mail from DavMail. The initial sync of every folder is quite lengthy, but once everything is set up, it’s pretty nice.

    The main shining point here is for people who use mail clients and phones that don’t support Exchange integration: Evolution, Thunderbird, some older Android phones, etc. That’s what I’m using it for (Thunderbird). I have multiple VMs that I’m in all day long, and I can’t keep switching back to my Windows VM running Office 2010. So when I’m in my Linux Mint VM, I can still get mail updates while I’m working.

    At the end of the day, I’ll be honest: I’m not sure how happy I’d be exposing this to the Internet to publish for mobile phones. By all means, try to connect to the Exchange server directly with your phone first. There’s no need to throw more middleware out there and open up more inbound ports to your organization (even if you’re just a home user).

    http://davmail.sourceforge.net/


    Setting Up a SVN Server Using SSH Certificates


    A while back there was a need for one of my clients to manage some files between a team of their employees. They asked if I could set them up a secure location for the files to be stored in, as well as using an encrypted channel for moving the documents, and code they were writing, to and from the server. So I embarked on setting up an SVN server for them that would use SSH to encrypt the communications.

     

    This should work on Debian 6 (Squeeze), though I actually built this on a Ubuntu 12.04 server. Theoretically, this should work on most versions of Ubuntu as well.
    So if you need one, here’s how I built mine:

    sudo apt-get update && sudo apt-get dist-upgrade
    sudo apt-get install subversion subversion-tools

    Make sure to allow all dependencies to be installed, like Apache, etc…

    Now we need to store our files somewhere

    sudo mkdir /var/svn/
    sudo mkdir /var/svn/{team-name}
    #
    # Replace {team-name} with whatever you'd like

    Now that the software is installed, we need an SVN user account

    sudo useradd svn -s /bin/false

     

    Let’s create a group for SVN (it makes it easier to manage permissions for the repo)

    sudo groupadd svn

     

    Now give the group ownership of the repos directory.

    sudo chown -R svn:svn /var/svn/{team-name}/
    sudo chmod -R 770 /var/svn/*
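One optional refinement on top of the 770 mode: setting the setgid bit on the repo directory makes new files inherit the svn group automatically, which saves re-running chown later. A sketch demonstrated on a scratch directory; on the server the target would be /var/svn/{team-name}:

```shell
# Mode 2770: rwx for owner and group, nothing for others, setgid on the dir.
# Scratch directory for demonstration; use /var/svn/{team-name} for real.
D=$(mktemp -d)
chmod 2770 "$D"
stat -c '%A' "$D"   # drwxrws---
```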

     

    If you need other people to use the SVN, now is the time to add them (though you can add them later too), and we’ll add those people to the SVN group at the same time

    sudo useradd -G svn -d /home/steve -m steve
    sudo useradd -G svn -d /home/mike -m mike
    sudo useradd -G svn -d /home/john -m john

     

    If you have existing users, make sure to add them to the SVN group (if they need to be)

    sudo usermod -a -G svn {username}
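You can confirm the membership took with id (note that existing login sessions won’t pick up a new group until the user logs out and back in):

```shell
# List the groups for a user; substitute the account you just modified.
# Shown for the current user so it runs anywhere.
id -nG "$(whoami)"
```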

     

    We’ll need to set temporary passwords for our new users (do this for each newly added user, and have them change the password later!)

    sudo passwd john
    sudo passwd -e john   # expire it immediately so john must change it at first login

     

    Now we can create the svn repository

    sudo svnadmin create /var/svn/{team-name}/

     

    Now we can setup SSH keys on this system so that you can easily log in from your main Linux Desktop machine.

     

    So go to your home directory on your local machine (NOT THE SERVER!). From there, cd into your .ssh directory and we’ll create your SSH keys.

    cd ~/.ssh/
    ssh-keygen -t rsa
    {save as default file, press enter}
    {enter your own password and hit enter}
    {confirm your password}

     

    Once this is done, we’ll set up your host with keys so you stay authenticated (substitute the IP address 192.168.0.100 with the actual IP address of your server!). Note that ssh-agent has to be started via eval so it can export its environment variables into your current shell:

    cat ~/.ssh/id_rsa.pub | ssh steve@192.168.0.100 "cat - >> ~/.ssh/authorized_keys"
    eval "$(ssh-agent -s)"
    ssh-add
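If key authentication still prompts for a password after this, the usual culprit is permissions: sshd silently ignores authorized_keys when ~/.ssh or the file itself is group- or world-writable. The chmods below show the expected modes, demonstrated on a scratch tree; run the same two chmods against the real ~/.ssh on the server:

```shell
# sshd wants ~/.ssh at 700 and authorized_keys at 600.
# Scratch tree for demonstration; the real target is ~/.ssh on the server.
H=$(mktemp -d)
mkdir "$H/.ssh" && touch "$H/.ssh/authorized_keys"
chmod 700 "$H/.ssh"
chmod 600 "$H/.ssh/authorized_keys"
stat -c '%a %n' "$H/.ssh" "$H/.ssh/authorized_keys"
```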

     

    And now you can test your new ssh keys by doing this:

    ssh steve@{server-IP-Address}

     

    That should’ve connected you without an issue. Type “exit” to quit.

     

    Now let’s get the SVN server actually serving data. (Note: with svn+ssh:// URLs, SSH spawns a temporary svnserve for each connection, so the daemon below is only needed if you also want unencrypted svn:// access.)

    svnserve -d -r /var/svn/{team-name}

     

    Now let’s set up the directory for the local SVN working copy on your local computer

    cd ~
    mkdir team-scripts   # or wherever you want this to be

     

    Let’s test to see if the Server will allow a checkout.

    svn co svn+ssh://{server-IP-Address}/var/svn/{team-name}/

     

     

    IF YOU ARE USING A MAC COMPUTER, IN ORDER FOR YOUR MAC TO IMPORT OR ADD FILES TO THE REPO, YOU NEED TO RUN THIS COMMAND!!!

    export SVN_EDITOR=nano

    (This just tells svn which editor to open for commit messages; it applies on any system where SVN_EDITOR isn’t already set.)

     

    (Optional) To test that the server is working, make a file and import it.

    echo "testing svn repo" > team-scripts/stevetestsvn.txt
    svn import -m "test svn+ssh" team-scripts/ svn+ssh://{server-IP-Address}/var/svn/{team-name}/

     

    Now your local and server side repos are setup.
    To update, issue this command:

    svn update team-scripts/

     

    To save any changes to files in the repo do this:

    svn commit team-scripts/

    #                     This will also work from any subfolder.
    #                     So let's say you were in ~/team-scripts/building/stuff/
    #                     you could just issue:
    svn update

     

    To add new files and folders, you can copy anything you want into "~/team-scripts/" and issue

    svn add team-scripts/{new-folder}
    svn commit team-scripts/

     

    YOU MUST RUN THE COMMIT COMMAND TO UPLOAD YOUR MODIFIED FILES TO THE REPOSITORY!

     

     

     

    If you have any questions, comments or concern, please contact me via LinkedIn.

     

    Thanks! 🙂
