It’s a bitch when you can’t clear ARP tables…

So we got an RMA’d Watchguard unit in the other day, and I finally got time to install it last night. The old Watchguard started having issues late in the day, and I figured that was the perfect time to swap in the new unit. That is right when our troubles started. Earlier in the week I had rebuilt the firewall ruleset, proxy configurations, and static NAT translations to mimic exactly what the current Watchguard was doing. Like any company, this client has an email server, a few websites, and other inbound and outbound ports that need to be open.

The original Watchguard was being replaced with a beefier unit able to handle more connections and users. When the old one started having issues around 4:15, I made the call on the fly to install the new Watchguard, seeing as it was close to the end of the day already. After the swap, it took a little while for the Internet connection to come back up.

The MPLS connection was down because of a conflict with the MAC address on the new Watchguard: the Watchguard was presenting a different MAC address than the ISP’s AdTrans were used to seeing. For those of you who don’t know, an ARP table is a list of hardware addresses. Every network-enabled device ever made, wired or wireless, has a unique hardware address called a MAC (Media Access Control) address. Manufacturers are allocated blocks of these addresses and are required to code a different, unique address into every network card that ships out the door. So basically, all of the computers, network devices, printers, and so on are listed in the ARP tables of the Watchguard and the AdTran, as well as in many other computers and pieces of equipment.

Because the old Watchguard has a different hardware address than the new one (per the paragraph above), the AdTran got confused and didn’t know what to do with the MPLS (inter-office) network traffic, because the ARP tables no longer matched (think of it as two devices holding different information and arguing about who was right). I suspected last night that this might be the issue and rebooted all the network equipment to clear the ARP tables, but the AdTrans all have a security mechanism built in so that an attacker (someone with malicious intentions) can’t “poison” the ARP tables.
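To make the idea concrete, here’s a rough sketch in Python of what an ARP cache holds and why a stale entry breaks traffic after a hardware swap. The addresses are made up for illustration; they aren’t the client’s network.

```python
# Minimal illustration of an ARP cache: a mapping from IP address to MAC address.
# All addresses below are invented for the example.
arp_cache = {
    "10.0.0.1": "00:90:7f:aa:bb:cc",   # the old Watchguard's MAC
    "10.0.0.50": "00:1a:2b:11:22:33",  # some workstation
}

def resolve(ip):
    """Return the cached MAC for an IP, or None if there is no entry."""
    return arp_cache.get(ip)

# The replacement firewall answers on the same IP, but with a new MAC.
new_firewall_mac = "00:90:7f:dd:ee:ff"

# Until the stale entry is cleared, frames are still addressed to
# hardware that no longer exists, so the traffic goes nowhere.
stale = resolve("10.0.0.1") != new_firewall_mac
print(stale)  # True: the cache still points at the old MAC

# Clearing the entry forces a fresh ARP request on the next packet,
# which is what the ISP did manually on the AdTrans.
del arp_cache["10.0.0.1"]
print(resolve("10.0.0.1"))  # None: the next lookup triggers a new ARP query
```

The anti-poisoning protection on the AdTrans is exactly what prevented the equivalent of that `del` from happening on a reboot.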

It was this same security mechanism that kept the ARP table from being cleared by a reboot. That is why I needed to call our ISP and have them manually clear the ARP tables on the AdTrans. The mechanism is part of the Managed Services we contract from our ISP, and this “test” proved that the AdTran is doing exactly what it is supposed to do. The issue is that we were unaware the AdTrans were set up this way.

After the ARP tables were cleared, the MPLS came right back up, the phones started working again, and everything went back to normal. Moral of the story: make sure you can clear your ARP tables.

Vulnerability Assessments are not Penetration Tests!!

Too often I, as well as many of my co-workers, go into a client, and somewhere during whatever assessment I’m working on, general questions come up like, “When’s the last time you had a pen test?” The client responds, “Ohhh, we do those annually with ‘Some Corporation.’ ” And after looking at ‘Some Corporation’s’ website and seeing what they consider a penetration test, I am once again disgusted to see that they show up with a vulnerability scanner, run it, validate some findings, and are off to their next client.

Now I know these are some brash comments aimed at some random security companies, but let’s be honest here: if you’re going to do something, do it right the first time and give the client the value of the assessment they paid for. If I go to a salesman to buy a sports car and he tries to sell me a Honda Civic, I’m going somewhere else to get what I asked for. On the other side of the coin, a lot of companies that want a penetration test don’t really understand what one is to begin with. It seems to me that gone are the days of true pen testing, when the dreaded “Red Team” showed up to strike real fear into the hearts and minds of security practitioners at Fortune 1000 companies.

Edit: I think those days are back now, though. A great friend of mine, Dave Kennedy, started a company named TrustedSec. I urge you to check them out. He’s a ridiculously good pen tester and is ultra-knowledgeable in the security field. A pen test from him is money well spent! 🙂

Any kid in their parents’ basement with savvy computer skills can fire up Nessus, a web application scanner, or QualysGuard against a network, and some of those people can actually interpret the results and make sense of them. Trust me, everyone on SecureState’s Profiling Team can do that with their eyes closed, but how many security companies out there can actually run a legitimate pen test? I’m not calling anyone out or challenging them; in all reality, I just want to know how many companies are willing to admit that what they call a “penetration test” is actually just a vulnerability assessment. Even worse is the number of companies who perform so-called “penetration tests” and truly believe that a vulnerability assessment is the same thing as a pen test.

So let’s all be clear here: a true penetration test is over 85 percent manual, and the remaining 15 percent can be a vulnerability scanner, used to pick up additional findings for the report and provide extra value to the client. And let’s define “manual” so as not to rule out all tools. Using a port scanner is very different from using nCircle, Qualys, or Nessus; automated scanners like those are the tools that don’t really help a pen test. And just because you use a tool like the Metasploit Framework or many of the tools in Back|Track 4 doesn’t mean you are running a vulnerability scanner. Nmap has the ability to run scripts as well, but again, it doesn’t belong in the vulnerability scanner category.

Many times, companies perform attack and penetration assessments due to compliance, or potentially other reasons, which is a bad idea. It gives those companies the opportunity to choose malicious compliance over truly assessing the security of the entire company. Malicious compliance is the term for doing the bare minimum needed to earn a stamp of approval for whatever standard you are trying to satisfy. When companies pen test only the systems affected by compliance, such as PCI or HIPAA systems, entire networks of systems go untested. When this happens, companies aren’t getting the true value a pen test can provide.

… is a trend-setting company, and this is where we are going to step in and say, “We pen test!” The PCI DSS Council has at least defined what they consider a penetration test. In section 11.3, the Council defines it this way: “a vulnerability assessment simply identifies and reports noted vulnerabilities, whereas a penetration test attempts to exploit the vulnerabilities to determine whether unauthorized access or other malicious activity is possible.” Even the EC-Council states that “Penetration testing simulates methods that intruders use to gain unauthorized access to an organization’s networked systems and then compromise them. Penetration testers may use proprietary and/or open source tools to test known technical vulnerabilities in networked systems. Apart from automated techniques, penetration testing involves manual techniques for conducting targeted testing on specific systems to ensure that there are no security flaws that may have gone undetected earlier.”

The SecureState Profiling Team takes lower-risk vulnerabilities in some systems, combines them with additional vulnerabilities in other systems, and links them together into larger attacks. By pulling off an attack in this fashion, the Profiling Team applies what is called Vulnerability Linkage Theory, which shows how the final compromise is achieved by coupling vulnerabilities across many systems, and in turn why it’s important to maintain system baselines and other security measures. For instance, username enumeration from a website, coupled with a brute-force attack on the mail system, could allow SecureState to read a company’s mail. From there we can email the tech support team and social-engineer them into divulging how to access the corporate VPN, and voilà: access to the internal corporate network. There is no way a vulnerability scanner can do that.

Penetration tests zero in on specific systems to break in and see what information can be divulged. Pilfering computers and file shares demonstrates the benefit of a pen test by turning up important documents and unencrypted data. Even password-protected Microsoft Office files can be cracked to reveal potentially serious data about the company we’re hacking into. Pen tests can also be used by security departments to show why things need to be fixed and to win the budget to move forward.

There are conflicting views on pen tests and vulnerability scans. Pen tests aren’t performed to find vulnerabilities; they are done to compromise systems and networks. The main difference between the two is that in a pen test the attackers are actually exploiting vulnerabilities, adding user accounts, and compromising machines across the network. A full compromise, meaning total control over the entire network, is the end goal of a pen test. Throughout a pen test, the attackers will inevitably generate a list of findings. Many of these findings may be the same ones a vulnerability assessment would turn up, but there are many vulnerabilities scanners just can’t find, which comes down to the fact that tools can’t think; consultants can. Consultants can interpret results and decide how to use them to leverage particular attack vectors against machines and networks.

Don’t get me wrong: I am not discounting the need, want, or value of a vulnerability assessment. These assessments, as well as pen tests, have their place. What I am saying is that both need to be better understood so that everyone knows how and when each should be performed. Additionally, there are companies that run regular vulnerability assessments where the same vulnerabilities keep coming up in every single scan. These companies are either overwhelmed by the number of vulnerabilities in their networks and don’t know how to fix them, or they don’t see the value or need in fixing them. Penetration tests can reinforce the reasoning for fixing them. In turn, by better understanding the difference, clients will know what to expect as a final product and won’t be dissatisfied with the results of each test.

SSDs and Hard Disk Performance

Well, if you saw my blog, “The Spin Stops Here,” you know that we’ve already covered topics such as battery life and the difference between traditional disks and the newer SSDs, among other things. In this blog, I want to cover a couple of other issues and facts surrounding SSDs. You may want to read that last SSD blog first, as I will be using acronyms and information mentioned previously.

Since my last SSD blog, prices have come down and larger drives have become available. My own laptop is in transition to running off an SSD, with a secondary SATA disk mainly for storage and running VMs (not counting the 1.5TB external USB drive I travel with). In that same time frame I have transitioned to Windows 7, which, may I add, is a fantastic OS, probably the best Microsoft has ever released.

So, beyond the fact that SSDs are more durable, faster, longer lasting, lower power, and cooler running, there are a few things I didn’t cover last time. We briefly touched on forensic analysis and how data is actually stored on SSDs, but we didn’t cover what operating systems (specifically Windows 7) are doing to take advantage of SSD technology. Seeing as SSDs are falling in price (64GB SSDs are nearing $150, which is the size drive I am using), I expect more people will be moving to them, especially high-end gamers and laptop power users.

Why does the hard drive make such a difference? There are a variety of reasons, relating to both the OS and the hardware. We’ll start with sequential read and write speeds. My game machine at home has three 1TB SATA drives in it. Those drives can sustain about 100-120MB/s reads and 60-80MB/s writes. My SSD is rated at 270MB/s read and 150MB/s write, which is a big difference!
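As a back-of-the-envelope sketch using the throughput figures above (my drives’ numbers, so treat this as illustrative rather than a benchmark), here’s what that difference means for a large file copy:

```python
# Rough transfer-time comparison using the sustained-throughput figures above.
def transfer_seconds(size_mb, rate_mb_per_s):
    """Idealized time to move size_mb of data at a sustained rate."""
    return size_mb / rate_mb_per_s

file_mb = 4096  # a 4GB file, e.g. a DVD ISO

hdd_read = transfer_seconds(file_mb, 110)  # ~100-120MB/s SATA read
ssd_read = transfer_seconds(file_mb, 270)  # the SSD's rated read speed

print(f"HDD: {hdd_read:.0f}s, SSD: {ssd_read:.0f}s")
# HDD: 37s, SSD: 15s
```

Real copies add filesystem and caching overhead, but the ratio is about right for sequential work.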

The other metric to keep in mind is random speed. Sure, when you’re moving movies or ISO images or other large files, that’s sequential speed. But what about when your OS is writing a temp file, hitting the pagefile, or pulling data from your user folder? Those random reads depend greatly on access times, and SSDs are now reaching blazing-fast access times: well under 1ms in many cases (Intel’s X25-M is reaching 0.01ms access times!). Sequential reads are hitting well over 250MB/s and writes are consistently over 100MB/s. Even on my old SATA disk, Windows booted fairly quickly, getting to a login prompt in about 20 seconds. After the switch to the SSD, my boot time is under 20 seconds including the BIOS checks and logging in. Programs launch ridiculously faster, and everything is ultra snappy.

How are SSDs obtaining these speeds? As I talked about in my last blog, it’s because there are no moving parts. The data is stationary, and the logic board just says, “Hey, data; come here, the CPU needs you,” and off it goes. With traditional platter drives, the logic board needs to move the read/write heads over a platter, find where the bits are stored, and then transfer the data at the speed the platter spins. There’s a lot of latency in that type of setup. In some cases, random reads are over 100 times faster at 4KB data chunks, which is the size of most files on a Windows-based computer.

The next big thing is TRIM. To understand TRIM operations, you have to understand at least a little of how SSDs really work. Last time we went over the NAND and NOR memory types. Again, SSDs use NAND flash cells and are made up of millions of these memory cells. The cells are (in most cases) grouped into 4KB chunks called pages, and a page is written only as a whole, at the size it was created: 4KB. Pages, however, can only be erased 128 at a time (which, when you do the math, is 512KB). The biggest issue SSDs run into is that the drive never knows when the file system deletes a file; it only finds out when something new is eventually written over those locations. So the SSD has to keep tabs on every memory cell it has. This is where the ATA TRIM command comes into play.
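The page and erase-block arithmetic above works out like this. It’s a sketch: 4KB pages and 128-page erase blocks are the figures quoted here, though actual drives vary.

```python
# Page and erase-block sizes from the description above.
PAGE_SIZE_KB = 4        # writes happen one whole page at a time
PAGES_PER_BLOCK = 128   # erases happen one whole block at a time

erase_block_kb = PAGE_SIZE_KB * PAGES_PER_BLOCK
print(erase_block_kb)  # 512 (KB) -- matching the 512KB figure

# Consequence: to change one stale 4KB page, the controller must
# preserve the other 127 pages, erase the whole 512KB block, and
# write everything back. That rewrite overhead is what TRIM helps
# the drive avoid, by telling it which pages hold deleted data.
pages_touched_for_one_change = PAGES_PER_BLOCK
print(pages_touched_for_one_change)  # 128
```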

The great thing about Windows 7 is that it supports TRIM. TRIM commands are simple: they tell the SSD that certain memory locations are empty, so the SSD doesn’t have to keep tabs on those locations until data is written there again. The SSD tracks those locations by adding and dropping them from the free block pool. Before TRIM support, performance degradation was a serious issue, and many people noticed their SSDs degrading worse than traditional HDDs; those were the days when a wipe and reload would fix your problem. Now those problems are history… mostly. Degradation is still an issue, but a MUCH smaller one than in the pre-TRIM days.

Now, with that understood, we can talk about random write times. Random writes happen on your system all the time, and you’ve probably never noticed. If you install an SSD, you’ll notice, as many people have, a big difference in speed, and a big portion of that increase is attributable to much faster random writes. Old mechanical drives carried a section of cache memory ranging from 2-32MB, and the new 2TB drives are shipping with 64MB of cache. This cache is a temporary holding spot, mainly for incoming data, so the drive controller can spin up the platters and move the read/write heads into position (an operation that can take between 5 and 20ms). The drive caches as much data as possible and sends a success signal back to the OS so there is minimal interruption to the system.

With SSDs, this caching isn’t necessary. The memory pages are instantly available, and writing data takes microseconds instead of milliseconds (ms). Remember what a pagefile is? It’s that huge 1-4GB file on your C: drive called “pagefile.sys.” It looks like one file, but it actually contains small chunks of data the OS looks up instead of going to the system memory bus. Over 80% of reads and writes to the pagefile involve less than 20KB of data at a time. When Input/Output Operations Per Second (IOPS) are so much higher on SSDs than on traditional HDDs, it’s no wonder the OS feels so much snappier. Although I’ve heard people say not to use a pagefile when you have a ton of RAM in your machine, you should still keep about a 1GB pagefile for programs that were written to use it.
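A rough way to see why IOPS differ so much: a small random operation can’t start until the previous access finishes, so IOPS is roughly the inverse of access time. A sketch with illustrative latencies (not measurements from any specific drive):

```python
# Back-of-the-envelope IOPS from access latency: for small random I/O,
# each operation waits for the previous seek/access, so IOPS ~ 1/latency.
def iops_from_latency(latency_ms):
    return 1000.0 / latency_ms  # 1000ms per second / ms per operation

hdd_iops = iops_from_latency(10.0)  # ~10ms mechanical seek + rotation
ssd_iops = iops_from_latency(0.1)   # ~0.1ms flash access

print(round(hdd_iops), round(ssd_iops))  # 100 10000
```

A hundredfold gap in small-operation throughput is exactly the kind of thing pagefile and temp-file traffic feels.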

One of the things Microsoft did to increase performance was decrease the amount of random writes to disk. But what else can be done to increase the performance of your SSD and lengthen its life span? A lot, actually. For starters, make sure Disk Defragmenter is disabled. On old HDDs, everyone knew the system seemed smoother after a defrag; on an SSD, defragmenting is one of the worst things you can do, so turn it off. Also turn off Hibernation and System Restore. You should already have System Restore off; it’s a waste of space and a breeding ground for malware. And if you’ve ever used Hibernation, you already know it stinks. Also disable Superfetch and ReadyBoost: these technologies were built to boost performance on mechanical HDDs and can pose performance issues for your new SSD. It is also recommended to disable Search Indexing in Windows.

Lastly, if you move to an SSD, I recommend a few other things. The first should go without saying: install your OS from scratch. Don’t try using Partition Magic or some other HDD-cloning software. Install a fresh OS and, if you can, use two hard drives: install the OS to your SSD and install ALL your aftermarket programs to the second drive. The second drive, if it isn’t an SSD, should be a high-RPM disk like a Western Digital VelociRaptor. This way you keep your documents and your User folder on the SSD; those are the smaller files SSDs fly through. You’ll notice big increases in both speed and overall performance.

Fast GPUs aren’t just for Gaming Graphics!

I have always been a fan of the latest and greatest hardware and am always amazed at how fast new hardware is getting. Well, now the security field is going to have to start worrying about how this hardware is being leveraged to crack passwords. The Nvidia Corporation has harnessed the C programming language and integrated it with their newest GPUs to form CUDA technology.

In fact, even the Lenovo T60s and T61s are loaded with Nvidia Quadro graphics cards that can run CUDA software. There are even Python bindings for CUDA, and many other languages may enter this arena. Applications for fluid dynamics, digital media, electronic design, finance, game physics, audio and video, and much more have already been developed, with more on the way.

What I mentioned before is about information security. There is also software released to take advantage of “password recovery,” and it is stunningly fast. Modern dual-core CPUs such as the Intel Core 2 Duo and the AMD Athlon X2 can test approximately 2 trillion passwords in about 3 days, whereas CUDA-based “password recovery” software can do 55 trillion in the same time frame. That is over 25 times faster!
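Running the numbers from that comparison (same three-day window for both):

```python
# Speedup implied by the password-testing figures above.
cpu_passwords = 2e12    # ~2 trillion tried on a dual-core CPU in 3 days
gpu_passwords = 55e12   # ~55 trillion tried via CUDA in the same 3 days

speedup = gpu_passwords / cpu_passwords
print(speedup)  # 27.5 -- i.e. over 25x faster
```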

The reason these new cards can run software like this is that the new generation of chips, the G80 series, can compute fixed-point operations. The new Nvidia GTX 280 graphics card boasts 1GB of 1100MHz GDDR3 memory on a 512-bit path, with 240 processing cores (properly called ALUs) running at 600-650MHz, at a cost of $450 per card.

Nvidia says the card can reach close to 1 teraflop (a trillion floating-point operations per second) of compute capability. The first supercomputer to break the 1-teraflop barrier did so in December of 1997 and was the size of a mid-sized house: 76 computer cabinets holding 9,072 Pentium Pro processors (http://www.sandia.gov/media/online.htm). You can check http://www.top500.org/ for the fastest supercomputers in the world.

So with these desktop supercomputers doing tasks that used to require multi-million-dollar teraflop machines, what if someone found a way to harness this technology to crack passwords in your organization? Or captured enough data to decrypt classified information? How about AES-256-encrypted hard drives? Let’s look at it this way: most likely your company’s password complexity is too weak. The only way to make cracking harder (still not impossible) is to force your users to use long passphrases, strengthen your domain policies, and provide user awareness training.

What makes this CUDA technology so fast is that threads are able to communicate. The 240 cores work in tandem using the Parallel Data Cache, a.k.a. shared memory (http://www.beyond3d.com/content/articles/12/3), which saves clock cycles since the chip isn’t going all the way out to the card’s GDDR memory for additional data or temporary storage. Additionally, with the current Nvidia software and the proper hardware configuration, you can strap one to four of those cards to a quad-core CPU and have an absolutely amazing system that could reach the 2-teraflop range. And if that isn’t enough, Nvidia lets consumers overclock within the Nvidia Control Panel software.

One company has already gone the distance to “recover” passwords. Elcomsoft makes a software package that allows up to 10,000 distributed client workstations to “recover strong encryption keys,” with each client having up to 4 GPUs. What government agency or research lab wouldn’t want something that powerful? The software is capable of “recovering” MS Office 97-2007 passwords, Zip and RAR passwords, MS Money, Open Document, and all PGP passwords, Personal Information Exchange certificates (PKCS #12: .PFX, .P12), Adobe Acrobat PDF, Domain Cached Credentials, Unix passwords, Intuit Quicken passwords, MD5 hashes, Oracle passwords, and WEP, WPA, and WPA2 passwords (http://www.elcomsoft.com/edpr.html). Many of those operations are considered “GPU accelerated” options.

According to Elcomsoft’s own press release, “Elcomsoft Distributed Password Recovery allows using laptop, desktop or server computers equipped with supported Nvidia video cards to break Wi-Fi encryption up to 100 times faster than by using CPU only.” The software is said to support ATI graphics cards early next year. I figure it’s only a matter of time until the underground community uses this technology to crack DRM and other cryptographically protected media. (http://www.elcomsoft.com/pr.html)

Remember, this technology doesn’t have to be used just for password “recovery” (http://www.nvidia.com/object/cuda_home.html#). A huge amount of science and technology will benefit from it: mathematics, digital media, programming, and best of all, games.

Firewall Ruleset Reviews and Firewall Management

I’ve done a lot of firewall ruleset reviews for companies large and small. There is a pattern forming in almost every firewall I’ve seen.

Bad management.

It’s not about blaming people, though. The economy is in the sewer and layoffs plague every company across the planet. Almost every security team is dealing with tons of ongoing work to stay secure, and low budgets and thin resources to get the job done.

The firewall rule sets I’ve seen range from 50 lines to 10,000+ lines. Some are so complex that we schedule a week of work to audit and determine what can be taken out, what needs to stay and what shouldn’t have been there in the first place.

Let’s face it: many firewalls have dead rules, references to non-existent networks, and “permit any” rules. Those are the low-hanging fruit we look for first, and fixing them automatically increases the security of the attached networks.

Any access list that ends in “permit ip any any” is wasted CPU power and increased RAM usage. Why make your firewall walk through all of those rules if you permit everything at the end anyway? Not to mention, if you’re going to do that, you could have saved yourself hundreds or thousands of dollars and just bought a router with static routes to forward traffic. But in the security world that isn’t an option.
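To make the point concrete, here’s a toy sketch of the kind of check a ruleset review automates. The rule format is invented for illustration; it’s not any vendor’s syntax.

```python
# Toy ruleset audit: flag a "permit any any" catch-all and every rule
# shadowed below it. Each rule is (action, protocol, src, dst, port).
rules = [
    ("permit", "tcp", "10.0.0.0/24", "any", "443"),  # a justified rule
    ("permit", "ip", "any", "any", "any"),            # the wasteful catch-all
    ("deny", "tcp", "any", "any", "23"),              # dead: never reached
]

findings = []
catch_all_seen = False
for i, (action, proto, src, dst, port) in enumerate(rules):
    if catch_all_seen:
        findings.append((i, "unreachable: shadowed by earlier permit any any"))
    elif action == "permit" and src == "any" and dst == "any":
        catch_all_seen = True
        findings.append((i, "permit any any: rules below are dead weight"))

for line_no, msg in findings:
    print(line_no, msg)
# flags rule 1 as the catch-all and rule 2 as unreachable
```

A real review also checks hit counters, object groups, and network reachability, but even this much catches the backwards-built firewalls mentioned below.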

Too often we see timeout settings that are too large, insecure protocols in use, and a lack of ingress or egress rules. The worst cases are the firewalls built backwards: a whole slew of deny statements followed by a permit-any statement.

Overall, the largest issue is lack of egress filtering. Time and time again we run into this, and in many of our assessments we capitalize on it. In both social engineering attacks and penetration tests, we accomplish many tasks by using these lax rules. Even if you aren’t worried about the next major virus or worm, you should care about not helping spread the infection. Close your doors and be a good neighbor!

All of these issues add up to bad network security, which is caused by bad management. There needs to be a process for, and documentation of, every rule and setting within a configuration. Asking “Hey Chuck, there’s a strange rule in here, did you do that?” doesn’t count as documentation either.

At the end of the day, if you skimp on a little security here and a little security there, what’s the point of implementing high-dollar equipment? If you’re going to implement your firewall properly, you should have a dedicated process behind the ruleset. Justify every rule and every business segment; set it and forget it. There should be no need to constantly modify your firewall. I can see the need when you’re installing a new server or software package, but adding and dropping lines daily or even weekly is not an efficient use of anyone’s time.

The moral of the story is: if you are making constant changes, have bad rules, or run an insecure configuration, you should start over and build your configuration properly. A regular audit of the firewall ruleset is always a good idea and should be budgeted for. Put in proper change control, documentation, and justification, and you will be amazed how much more secure your network becomes.

SSDs: The Spin Stops Here

So who out there has experienced a hard disk failure? Sure, a lot of you have. Current magnetic spinning disks are (very) slowly being replaced by faster, longer-lasting solid state disks. Solid state disks are nothing new, but they do offer many benefits over their predecessor, and those benefits are what really set the new technology apart.

With no moving parts, higher resistance to dropping and lower power consumption, laptop users are loving this new technology. Accident prone people who drop their laptops are enjoying their data staying put, road warriors are enjoying longer battery life, and power users are enjoying longer disk life.

The market share for solid state disks is relatively small at this point. Some people project the whole solid state market to reach $7.5 billion in 2012, whereas Seagate (a hard disk manufacturer) took in $3 billion… just in the third quarter of 2008. So traditional hard disks aren’t going anywhere soon.

So when a drive fails, how easy is it to retrieve the data? Depending on the drive size and whether there is any drive encryption, it is fairly easy given the proper tools. That is about to change with SSDs, which don’t use the same technology that traditional SATA (Serial Advanced Technology Attachment) and PATA (Parallel Advanced Technology Attachment) drives do.

But what kinds of issues remain for SSDs? Well, the cost is still outrageous for the common end user (especially in today’s economy). The disks are small, averaging 64GB of space for $150, which puts them on par with current SAS (Serial Attached SCSI, pronounced “scuzzy”) drives. SAS drives are built for servers and offer stunning specs, including lots of cache, 10,000 and 15,000 RPM spin speeds, and lightning-fast response times. Older desktop and laptop drives spin anywhere from 4,600 to 7,200 RPM, which makes them much slower, but at the same time more affordable.

SSDs can also endure more abuse. Shock tolerance is rated in G-force: one G is equal to a mass’s normal weight. SSDs can take 1,500G and more; that is 1,500 times their own weight, the kind of shock a drive sees when it is dropped. Traditional drives can’t take that much abuse, while SSDs are said to survive being dropped off a two-story building and still work. Try that with a normal SATA drive.

Think of the USB thumb drive you use to transfer files from computer to computer; that is the type of technology used in solid state drives. The difference from a traditional HDD is that an HDD seeks data at specific locations physically assigned to spots on a platter. SSDs don’t work that way, because there is no spinning platter; this is what NAND technology provides, and it is what makes these drives so much faster than traditional HDDs. You can’t physically point out a memory location in NAND technology.

There are two types of flash memory: NAND and NOR. While NOR is best suited for small memory sticks, NAND is used in USB drives and now SSDs. NOR operates by sending electrical signals to the cells that make up the storage; each cell contains either a 1 or a 0, and all of those 1s and 0s make up your files, pictures, and Word documents. NAND, on the other hand, operates using gates: writing data is called “tunnel inject” and reading data is called “tunnel release.” Because of how data is stored in flash memory, access times are significantly faster; flash memory accesses the data location and responds almost instantly. In some cases random access times drop to 1-3ms! Traditional hard disks have typical access times of 7-10ms.

Also, these data locations are tracked in onboard RAM with almost instant access to any memory location, allowing up to 250MB/sec data access. To put that into perspective, SATA drives work at about 40-70MB/sec, and PATA drives are even slower. So moving large files, such as MP3s and AVIs, goes much quicker (on the order of 4-7 times faster!). This is most significant when moving gigabytes of data.

Comparing the three types of drives, SATA, SAS, and SSD, how do you know what to buy these days? Many factors can affect your decision: cost, speed, reliability, MTBF (Mean Time Between Failures), form factor, and transfer speed. For now, in mobile platforms, you will most likely be looking at SATA drives, although if you have the extra money you could get an SSD. The server market will eventually use SSDs, but for now it still runs on older SCSI drives or newer SAS drives. Normal desktop machines will probably be the last to move to SSDs due to capacity: people with 500GB of music and terabytes of movies won’t be able to afford SSD replacements for their SATA drives for at least another 5-10 years.

The biggest issue with hard drives is that hardly anyone backs up their data. Even today, with disk space so cheap, backups are still almost non-existent in the home. Businesses are doing much better than they were even a few years ago, but those backups often still live on slow tapes that take a long time to back up to and restore from.
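To show how little it takes to get started at home, here is a minimal sketch of a dated folder snapshot. The paths and function name are hypothetical examples, and a real backup plan should also keep copies on other disks and off-site:

```python
import shutil
from datetime import date
from pathlib import Path

def snapshot(source: str, backup_root: str) -> Path:
    """Copy the source folder into a dated folder under backup_root.

    Returns the path of the new copy, e.g. backup_root/Photos-2009-10-01.
    """
    dest = Path(backup_root) / f"{Path(source).name}-{date.today().isoformat()}"
    shutil.copytree(source, dest)  # raises if a snapshot with this name exists
    return dest
```

Something this simple, run on a schedule against a second disk, already beats the nothing most homes have; it just isn't a substitute for multiple copies in multiple locations.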

So what's the moral of the story? SSDs are moving in at a slow pace, but eventually all hard drives will be some form of non-mechanical disk. They are energy efficient, fast and very long lasting. Even so, whether or not the MTBF is 1,000,000 hours or longer as many SSDs advertise, backups are still crucial. Multiple copies of data in multiple locations is the only safe way to store data. What happens when a disk fails? That's when you call us to get your data back.


The Human Exploit

So you’re sitting at your desk and the phone rings. “Hey this is Mark from information security. We are noticing that your computer is creating a lot of traffic out to the internet. Are you noticing that anything on your computer is out of the ordinary lately?”

What would you say? Well, in the average Social Engineering test we perform, the answer is quite honestly, "Yeah, my computer is slow… can you guys finally come and fix it?"

That's when we say, "Sure! We'd be glad to *cough* help! Go here, download this patch, and run it…" and a couple of minutes later we have fully compromised a system sitting behind a firewall in a corporate environment, sailing right past the antivirus software as well.


On average, we are able to get over 70% of end users to comply with anything we ask of them in "fixing" their computer, just by dialing their number and talking to them. How would you feel knowing that your end users are freely giving their computers and data away to attackers over the phone?

So what can you do to stop it? Well, a lot actually. Depending on your budget (which these days is low for everyone) you have the option to proxy all of your outbound connections, close down your firewall, install HIPS/NIPS protection, and the list goes on.

Sure you can do a lot to MASK the problem, but when are you going to stop the problem at its source? No, I am not advocating firing everyone you work with, but I am saying that there should be policies, procedures and MOST of all, end user training to teach people about these attacks.

People are almost always willing to help, lend a hand, and be polite and courteous to others on the phone. In reality, this type of attack could happen to virtually any company. In fact, the larger the company, the easier it is to exploit.

The moral of the story is that unless you have some type of training for employees, they are very susceptible to Social Engineering, even these days. Next time it might not be a pen tester on the other end of the phone; it could be someone with serious malicious intent.


Writing Security Policies and Procedures

Anyone who has ever written a set of policies and procedures knows how time-consuming, headache-inducing and tedious they are to create. For those of you who need to update or create new policies and procedures, this blog will go over some things to keep in mind. We'll also cover tips to make the writing easier for you and the result easier for your users to understand.

Whether you're just starting to write brand new policies and procedures or you're doing your yearly updates on existing ones, the thing everyone needs to keep in mind is simplicity. So that means, for any lawyers who may be reading this: you aren't allowed to write these. It also means that techies who don't know how to talk to normal people aren't allowed to write these either. Some companies have policies with the procedures baked right into the same document, and that single document spans hundreds of pages. Who will honestly read and follow that?

Now, I know there are a ton of policies and procedures needed for different aspects of a business, but we are only going to look at the one most people see: the End User Security Policy. The ideas we cover for this policy can easily be applied to any other policy you're putting together. This end user security policy is notoriously filled with jargon that lawyers and techies love to throw in. You have to realize that your end users need to be able to read and understand this policy so they can easily remember it and apply it when necessary. It shouldn't be long; there's no need for that. The policy should be short enough that people can read and interpret it within 10 minutes. Any longer and your end users will read half of it, maybe skim the rest, and throw it in a big pile of papers on their desk. That creates two problems: 1. they didn't read the whole thing, and 2. they won't sign off that they read and understood it.

Many end user policies are not enforceable based solely on the language used in the policy, or because the policy doesn't set out what will happen if you break it. The policy has to be enforceable, and when you are writing it you have to think about what you would seriously do in the same situation. If the policy says you must set a strong password for your Windows login, a domain policy can enforce that, so users have no choice but to follow it. But let's say you specify that end users who receive corporate mail on their phone must password-protect the phone. Are you going to do that on your own phone? Do you have a way to enforce it? Are you planning to run an internal security assessment to test it? If you find a user not doing it, what is the penalty? Is the penalty the same for every user? (The answer to that question had better be YES! Otherwise you're going to be talking to those pesky lawyers about discrimination!)

What would you do if a user brought their home laptop in and plugged it into the corporate network? Or worse, a wireless access point? What is the policy on that, and what are the repercussions? What about non-employees such as contractors and visitors? Make sure to set policy on all of this, and remember to make it enforceable. If you write something like "anyone who does this is fired," what are you going to do when the CEO brings his son to work and the kid gets on the network just to use the internet? Are you going to fire the CEO?

What security methods are you going to make your end users aware of? Do they need to understand what drive encryption is or how to use it? Your end users may be complete tech novices, but your security policy (or end user information technology policy) should lay out what a user needs to know. They also need to understand the expectation of privacy. This shouldn't be documented at length here, because you should have logon banners on every single network device that can be logged into; but you should explain what the banner says and what it means. And technically, if you don't remove the expectation of privacy on a company computer or network device, then your users still maintain it, so make sure you stay consistent.

Obviously, if your end user security policy exists, then you need an end user security procedure document as well. Every policy document should have a corresponding procedure document, and that procedure document needs to be referenced in your policy document. So, for instance, if your policy states that Windows Updates must be installed every month, you should follow that up with something like, "Page 7 of the End User Procedures document explains how to do this." You can't expect your end users to know what Windows Updates are, let alone how to install them or verify they are installed every month. Do this as often as possible. It may take a lot of time to set up initially, but you can cut down on a lot of IT helpdesk calls if you reference your procedures and keep all of this documentation easily accessible on the corporate intranet site. C'mon, throw a dog a bone. We aren't the police here; we're here to help, not criminalize.

Security doesn't only mean network equipment either, now does it? Physically speaking, how do company employees get into the building? Do you require swiping an ID badge at every outside door? How about PINs or cipher locks? Which doors are off limits except in an emergency? Who gets keys to the building? What's the policy for getting an ID badge or a set of keys? There is a lot to think about from an end user perspective. Be sure to go through every scenario your company has, address it in your policy, and explain how to handle it in your procedures. Again, what about non-employees in the building? Do you require people without badges to show proof of who they are, or ask how they got in?

How about protecting intellectual property? What is the company policy for that one? This is a touchy subject that needs to be discussed with some folks at the top of the chain. If you can get the Director of HR, the CEO and your CIO/CSO to give you their opinions first, you'll be in a better place to write this policy. Either way, when writing it, you have to stay consistent. One contradiction in your policies and procedures and you can throw them out the window.

Lastly, you now need to follow what you wrote, and so does everyone else in the company. That means all employees should review the policies once a year and sign off that they read them. Encourage people to ask questions. Get a training group together if needed. And if you think you're going to get away with not following your own document, you'd better believe other people are going to try their best not to follow it either. If you made the Kool-Aid, you'd better drink it; unless you're a malicious person by nature, in which case you shouldn't be making Kool-Aid, let alone writing security policies. Follow these simple tips and I promise that writing your company's information technology and security policies and procedures will be much easier, not to mention that your end users will follow them much better.


The Network Neutrality Debate: Good or Evil?

For a long time now there has been a bill floating around Congress about Network Neutrality. Some people like it, some people don't, and others just don't care. But who's really looked into it? I mean, it sounds good. It sounds like it could help everyone out, right? It's keeping the Internet neutral, right?

Well, for those of you who haven't looked into Net Neutrality, it's time you heard about it. Let's look at the upside of this debate. The original idea was great: ensure that all traffic on the Internet is treated equally by all Internet Service Providers. Net Neutrality is supposed to mean no discrimination; it tries to prevent Internet Service Providers from blocking, speeding up or slowing down Web content based on its source, ownership or destination. That sounds good, right? I like this original idea, but as with many ideas that get turned into legislation, the point gets missed, and in this case the point is being completely smothered.

Now that the government has its hands on it, Net Neutrality will go the way of all the other bills through Congress, with pork added by every congressman along the way. Net Neutrality line items state that every citizen in the US should be given free broadband Internet access. The bill's proponents argue that where there is only one provider of Internet access, that provider must not block content or stop end users from reaching the sites they want, so the government should intervene. They believe that if the government can control this, ISPs won't be able to implement a tiered Internet access model.

Others say that if Net Neutrality isn't passed, companies will start to charge more to reach certain content on the Internet, or that Internet Service Providers (ISPs) can sign agreements with certain companies to give special access to those companies' websites. For instance, if my ISP signed a contract with Yahoo or Microsoft's Bing, I wouldn't be able to get to Google, or it would be so slow I would have to use something else to search the Internet. People think that without Net Neutrality, ISPs could tax content providers for using the backbone of the Internet to move data, discriminate in favor of certain traffic, or block access to certain sites altogether. Again, let me stress how much the original idea makes sense and how much I agree with it up to this point.

With the number of service providers out there, the scenarios mentioned earlier (blocking content and discriminating against data) would never happen, because if they did, people would just switch to another provider. Look at it this way: the Internet, in its current setup, has operated for over 20 years without regulation or government interference, and Net Neutrality protections have existed for its entire history. Additionally, since the start of its mainstream use, the government has wanted to tax Internet usage. Back at the beginning a group of congressmen banded together and said "No" to taxing Internet usage. But now the government is trying to grab power from all over, and Congress feels that it should control, monitor, and secure the Internet as well.

And it doesn't stop there; if Net Neutrality goes through, the government will not only make a power grab over the Internet but include wireless phone companies too, since they are also part of digital communications. The FCC would basically be able to moderate and know everything that is transferred over the Internet or wireless phones. Security and privacy would be thrown out the window in this scenario. The Internet has been the source of the highest levels of freedom the world has ever known. There have never been restrictions on speech, religion, or information on the Internet (some sites have their own policies, but you can always find information out there somewhere).

Aside from the Internet being a place for freedom, think about what will happen when the government steps in and tries to regulate and monitor it. Think about anything the government tries to run: it gets clouded in paperwork and the service degrades to a level no one wants. The phone companies are a prime example; the government stepped in at the state and federal level and prices skyrocketed. But the market innovated, coming up with VoIP and free phone services that utilize the Internet. The free market is responsible for the vast, open set of connected networks that make up the Internet, and it does nothing but hurt companies that try to impede this open communication of all types of content.

So now for some added truth: Net Neutrality is essentially going to cause these very things to happen, just from a different angle. Now that H.R. 3458 has been introduced and federal stimulus money is part of the deal, the government is going to pork the bill up so much you won't even recognize it by the time it is voted on.

Let's put it in perspective: over the last 3 or 4 years the telecommunications industry has pumped over 100 billion dollars into the data backbone, resulting in blazing fast speeds, a lower price per kilobyte of bandwidth, and a higher level of competition. Now think about this: the government stimulus package invested 7.2 billion dollars into this Net Neutrality effort, and they call that "just a down payment," according to diversity czar Mark Lloyd. His opinion is that managing the media, control of it by the state, can help level the playing field for those not fortunate enough to get all the news. Why would you want to pay for Net Neutrality when you already pay for the Internet? Just at the thought of the government stepping in, the price has already gone up in the form of taxes.

Almost everyone pays for the Internet in some way: in your cell phone bill, your cable bill, your landline phone bill, or your VoIP service (in some cases). All this money pays to keep the Internet up and running. When you purchase Internet access you expect a certain level of quality and service from the provider you are paying, be it AT&T, Sprint, Verizon, Time Warner or Comcast, to name a few. Basically, your monthly bill for these services goes toward keeping the Internet up and running (I say this because basically everything is transmitted digitally).

Mark Lloyd, Chief Diversity Czar of the Federal Communications Commission said, “It should be clear by now that my focus here is not freedom of speech or the press. This freedom is all too often an exaggeration. At the very least blind references to freedom of speech or press serves as a distraction from the critical examination of other communication policies. The purpose of free speech is warped to protect global corporations and block rules [by the government], fines, and regulations that would promote democratic governance.”

This statement comes from a guy who is a devoted liberal progressive (AKA Marxist) looking to stifle your freedom of speech. Mark Lloyd, a disciple of Saul Alinsky and fan of Hugo Chavez, wants to destroy talk radio and says free speech is a distraction. He also says Venezuela is an example we should follow and feels that the government should control all media outlets. In his statements he also proposes taxing media outlets an amount equal to their total operating costs to help subsidize public media. If he is willing to do that with media outlets, what is he willing to do to censor the Internet?

Government's first duty is to protect the people, not run their lives. It is not to tax you for your freedoms, it is not to regulate the things you do in life, and it is not government's goal to interfere with every aspect of the country. If the government takes control of the Internet the way it plans to in this Network Neutrality bill, I promise you that the quality and value of the Internet will degrade, and it will be the start of the end of the Internet as we know it.

Throughout the bill there are statements like "unfettered access," "lawful usage, devices and services," "severely harmed," "economic interest," and "prevention of unwanted content." The problem is that it never states who will monitor any of this or set the standards on content, bandwidth, and what is considered lawful. The full text is here: http://thomas.loc.gov/cgi-bin/query/D?c111:1:./temp/~c111u6UoXZ::
Ronald Reagan once famously said, “Government’s view of the economy could be summed up in a few short phrases: If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it.”
Let's keep the Internet free and open as it was designed. And let's also keep Net Neutrality exactly as it was designed: to protect the freedoms of the Internet.
