Enterprise IT


2012-03-11; A simple "How-To" for using a software iSCSI initiator on Linux

Here is the current draft of a little How-To that I wrote, providing a basic procedure for installing and configuring a software iSCSI initiator on Red Hat Enterprise Linux (RHEL) version 6:

Devon's iSCSI Initiator Configuration How-To

It is a "work in progress" to be sure. I still need to sort out some important issues, the main one being that if you log off an iSCSI target, the iSCSI initiator software will continue to poll the target to see if it's still there. If the target is no longer on the network, the initiator software will throw errors into /var/log/messages incessantly. For a test lab environment, you can just uninstall the iSCSI initiator package and reinstall as needed, but for a production box this kind of behavior is unacceptable. I have no doubt there must be a command to force the initiator to completely let go of the target, but I have not found it yet. If you know how to do this, please contact me.
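If I'm reading the open-iscsi tools correctly, deleting the node record after logging out should stop the polling; this is an untested sketch on my part, and the target IQN and portal address below are placeholders:

```shell
# Log out of the target session (ends active I/O to the target)
iscsiadm -m node -T iqn.2012-03.com.example:disk1 -p 192.168.1.50:3260 --logout

# Delete the node record so the initiator stops polling the departed target
iscsiadm -m node -T iqn.2012-03.com.example:disk1 -p 192.168.1.50:3260 -o delete
```

With the node record gone, the initiator has nothing to reconnect to, so the /var/log/messages noise should stop without uninstalling the package.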


2012-03-10; A followup on DNS

In the previous post on Cloud Computing, I brought up the issue of DNS for hosted and cloud computing. Let me clarify a couple of security issues regarding DNS.

One issue is simply that of a Denial of Service attack, where one or more attackers corrupt your ISP's DNS database, sending your users to some bogus site and interrupting their work. A white paper discussing DNS cache poisoning can be found here.

Another issue is that of "pharming", where a DNS database is corrupted for the purpose of sending users to a malicious web server that looks exactly like the proper web server, causing the user to enter username/account number and password. As a worst-case scenario, imagine a corrupted DNS database sending your finance department users to a criminal-controlled web server in Eastern Europe* rather than your actual bank. They enter their account number and password at the login prompt, just as they always do. The malicious web server might then throw up a "system down, check back later" message to allay suspicions, or it might act as a proxy and forward your account/password info to the real bank's web site. Either way, the criminals know your bank, your account number, and your password. Banks in particular now use some techniques to prevent most pharming attacks, but most online stores don't. Thus, credit card information is vulnerable.

* Eastern Europe is a hotbed of criminal online activity; it is so bad, some organizations have their firewalls block all traffic to and from IP addresses assigned to known-bad countries in that region.

2012-03-10; A Note on Cloud Computing

The Hot New Thing right now is cloud computing, obviously. It's hard to tell how far cloud computing is going to penetrate the enterprise and small/medium business markets, but it seems likely that CC is going to capture some substantial share of the market. As with all Hot New Things, the vendors and their allies in the press are extolling all the virtues and avoiding mention of the drawbacks. Two drawbacks immediately come to mind when considering the scenario of an organization moving all or most of its critical applications to "the cloud" (whether that be a public cloud owned and operated by a service provider, or a private cloud owned and operated by the customer, or something in between).

The first drawback is simply the impact of the organization's WAN (Wide Area Network) connection(s). In a traditional client/server topology, application servers that are physically and logically local to the organization's LAN are relatively immune to connection outages on the ISP's network or service drop. In the cloud model, any hiccup in the ISP's service results in work stoppage, and possibly lost prior work as well. While ISPs offer Service Level Agreements (SLAs) promising some impressive level of uptime, the penalties for downtime are rarely big enough to compensate for the wasted labor hours. And depending on where your facility is, you may experience a lot of outages due to the classic "Layer 1 Backhoe". One facility I worked in, years ago, experienced weekly network outages of 1-2 hours during one summer due to construction workers cutting the buried cables. Likewise for power, since the electrical service on that street was buried curbside; I ended up keeping a flashlight in my desk drawer.

Depending on location, a "dual-homed" WAN topology using two independent ISPs and physical connections may be possible. Having two WAN links creates an interesting problem in the organization's routing and switching: which WAN link will the LAN's border router use as the default? There are a handful of First Hop Redundancy Protocols (FHRPs) which can be used to handle this routing challenge; an advanced topic, but fortunately a solvable one. Observe that, as usual, outsourcing the power/cooling/administration burden from the organization's IT department solves one problem but creates another.
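As a concrete illustration, here is a minimal sketch of one FHRP, Cisco's HSRP, on a pair of border routers sharing a virtual default gateway. The interface names and addresses are made up for the example; treat it as a sketch, not a drop-in config.

```
! Router A - preferred gateway for the LAN
interface GigabitEthernet0/1
 ip address 192.168.1.2 255.255.255.0
 standby 1 ip 192.168.1.1      ! virtual gateway address the LAN hosts point at
 standby 1 priority 110        ! higher priority = preferred active router
 standby 1 preempt             ! reclaim the active role after recovering

! Router B - standby gateway, takes over if Router A fails
interface GigabitEthernet0/1
 ip address 192.168.1.3 255.255.255.0
 standby 1 ip 192.168.1.1
 standby 1 priority 100
```

The LAN hosts use 192.168.1.1 as their default gateway and never need to know which physical router, and thus which WAN link, is actually carrying their traffic at any moment.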

What to do when the organization's location does not offer two reliable and independent ISPs? I don't know. Cloud computing and various forms of hosted computing are very appealing to small & medium businesses, but those are the organizations most likely to be located in remote industrial parks and smaller cities with limited Internet services. I've asked around a bit, but have not talked to any IT workers who have dealt with this problem.

The second drawback to a cloud solution (and hosted solutions, for that matter) is that of DNS (Domain Name System). It is likely that most cloud service providers will issue hostnames rather than IP addresses to their customers, since there are advantages to doing so (particularly the use of server load balancing). So access to mission-critical application servers on the cloud will depend not only on the local LAN and WAN links, but also on fast and reliable hostname resolution. In my experience, DNS can be pretty slow and sometimes flaky; perhaps the #1 problem with WAN services. And DNS was not designed for security. Cloud clients are vulnerable to a variety of attacks against the ISP's DNS servers as well as attacks against the clients' LAN. Amusingly, one way to ensure reliable and safe access to application servers on the cloud may be to hard-wire static DNS records into a local DNS server that is manually updated by network administrators, rather than relying on the traditional dynamic updates.
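For example, a hand-maintained zone on an internal DNS server could pin down the cloud provider's application hosts with plain static A records. The names and addresses here are hypothetical:

```
; Static records for cloud application servers - updated manually by netadmins
app1.provider.example.    IN  A   203.0.113.10
app2.provider.example.    IN  A   203.0.113.11
```

The obvious cost is that this defeats the provider's DNS-based load balancing and must be updated by hand whenever the provider renumbers, so it is a tradeoff rather than a free win.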



2012-03-03; A Peek at Social Engineering

Here's an interesting article in Wired Magazine; The Little White Box That Can Hack Your Network

For me, here is the money quote;

In this test, bank employees were only too willing to help out. They let Street go anywhere he wanted — near the teller windows, in the vault — and plug in his little white device, called a Pwn Plug. Pwn is hacker-speak for “beat” or “take control of.”

“At one branch, the bank manager got out of the way so I could put it behind her desk,” Street says. The bank, which Street isn’t allowed to name, called the test off after he’d broken into the first four branches. “After the fourth one they said, ‘Stop now please. We give up.’”

In the context of IT security, Wikipedia defines social engineering as the art of manipulating people into performing actions or divulging confidential information. Social engineering leverages three characteristics of human behavior:
  1. We "manage by exception" and ignore the routine. An apparent utility company worker in a uniform of sorts, a deliveryman carrying a clipboard or bundle of flowers, a janitor; all are people we see around the workplace. When we see what we expect to see, our guard remains down.
  2. We like to be helpful. In the above story, the bank employees' first reaction was not suspicion, it was helpfulness.
  3. We are afraid of getting in trouble. People on the bottom of the org chart are particularly vulnerable to adverse consequences for situations not in their control, and social engineers can take advantage of that.

The best way to prevent social engineering attacks is periodic security training. Two things need to come across in security training. One, people need to recognize their own vulnerabilities. Two, they need to be empowered to say "no".



2012-03-03; Storage Management

Storage management is one of the most boring, yet most important, aspects of enterprise IT. One way to put it in perspective is this. Consider your organization being hit by a natural disaster, terrorist attack, or large scale hack. With a credit card, you can replace workstations, cabling infrastructure, network routers and switches, server farms, and so forth. Likewise for software, both operating systems and applications. But you simply can't replace lost data with a credit card, nor repair corrupted data with a bank check.

Consider this article in Government Computer News; Virginia fights computer failures. The Commonwealth of Virginia had centralized data and database servers for many of its customer-oriented departments and functions. A pair of hardware failures in the EMC storage arrays used in the main datacenter caused both database corruption and persistent outages. There was a warm site that failed to come up and absorb the load from point-of-sale workstations.

While the basic hardware cost per unit of raw storage continues to drop, the rate at which data are being created is accelerating. The importance of data is greater than ever, now that we communicate almost exclusively in digital form, perform all banking and financial accounting online, perform and manage manufacturing and services electronically, etc.

Storage management presents the same management challenges as other aspects of enterprise IT; performance, security, and cost. But there are some differences between storage management and other enterprise IT functions. Storage management as a distinct IT discipline is pretty new, so there aren't as many courses, textbooks, and workers in the field as there are for something like Windows Server administration. An early start on formal planning and implementation of storage management will involve some growing pains, but getting on top of the storage problem can really improve performance, availability, and disaster recovery in a big way.


2012-03-02; Update 

Talk about timely...

NASA Laptop Stolen With Command Codes That Control Space Station

"WASHINGTON (CBSDC) — NASA’s inspector general revealed in congressional testimony that a space agency computer was stolen last year with the command codes to control the International Space Station."

Another disturbing quote from the CBS story;

"The Office of Management and Budget reported that only 1 percent of NASA’s portable devices and laptops have been encrypted this year."

2012-03-01; Hard Disk Encryption

IT security types often divide data protection into two categories: "data at rest" and "data in motion". "Data in motion" indicates data moving from one location to another. The vast majority of the time, this means data being transmitted over a network, but it can also mean backup tapes or other media being transported from one site to another. A third case of data in motion is that of a laptop being carried along somewhere away from the organization's facilities. Laptop theft is very common, and with the large hard disks now available on them, the amount and value of data that a thief can acquire along with the hardware is frightening. And keep in mind that usernames and passwords give no protection at all to the data on the hard disk, when the thief can simply remove the hard disk from the laptop chassis and read it on another computer. Therefore, protection of the data on the laptop means the data must be encrypted in one form or another.

As an aside, note that laptop vulnerability is one example of an IT security axiom that is sometimes phrased as "If I can touch the box, I own the box." Meaning that if I, the attacker, can literally put my hands on the computer, network device, or whatever, I can break into it. Perhaps ironically, people worry a lot about firewalls and intrusion detection, but leave their network routers and switches in an unlocked or easily breached server room. With Cisco routers and switches, any attacker with a laptop, a rollover cable, and knowledge of the password recovery procedures for the specific router/switch model numbers can reset the system passwords and "own the box". Keep in mind that all of your data on the network travel through your routers and switches. If your facility is unattended nights and/or weekends, a criminal with a laptop, a few cables, and basic lock picking tools and skills can break in, put in eavesdropping software and/or hardware, and capture everything from e-mail traffic and legal documents to banking account numbers and passwords. Thus, before you get all worked up about the latest Cool Trend, make damned sure your physical security is up to the task.

There are many hard disk encryption products available. Most are licenseware, but there is a free product that I have had success with. I have been using TrueCrypt for several years now. I find it to be well-documented, easy to use, and the authors seem to demonstrate responsibility in their programming practices. TrueCrypt is an "on-the-fly" disk encryption system, meaning data read from the hard disk are decrypted as needed. There are rumors that TrueCrypt can be broken, but at this point I believe that only the largest governments have the capability to reliably crack TrueCrypt. Security is all about creating unacceptable cost/benefit profiles for attackers; few criminals are willing to spend $100 of labor and computer time to gain $10 worth of data. If you are known to work for the US State Department, then somebody somewhere around the world will probably want to spend a million dollars of effort to crack your laptop data; if you are a manager for a shoe store, not so much.
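For the curious, TrueCrypt also has a text-mode interface on Linux for working with encrypted file containers. This is from memory, so verify the flags against truecrypt --help before relying on it; the paths are made up:

```shell
# Mount an encrypted container in text mode (prompts for the password)
truecrypt --text /home/devon/vault.tc /mnt/vault

# ... work with the decrypted files under /mnt/vault ...

# Dismount when finished; the data on disk remain encrypted
truecrypt --text --dismount /mnt/vault
```

While the container is mounted, reads and writes are decrypted and encrypted on the fly; nothing under /mnt/vault ever touches the disk in cleartext.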

I use TrueCrypt on one of my laptops in full disk encryption mode. This means the operating system files (Windows 7 Pro, in my case), the swapfile, and all user data are all encrypted. I have my laptop configured to Hibernate when the lid is closed; when it is powered back up, TrueCrypt prompts for its password again (needed to decrypt the disk before the OS can boot), indicating that the data have remained secure. There is a window of vulnerability while the system is writing the hibernation file to disk after the lid is closed; for my rather old laptop, that window lasts about 3 minutes. Note that Standby mode, where the contents of RAM are kept alive by battery power but the CPU and hard disk are shut down, is very convenient but not secure.

Also, for completeness, I will mention that there is a rather esoteric attack available, usually referred to as a "cold boot attack" that can sometimes extract data from RAM chips after a machine is shut down. After a machine is shut down, attackers trained and practiced in the method open up the chassis of the desktop or laptop machine, blast the RAM chips with refrigerant (i.e. the "canned air" pressurized blasters used to blow lint out of computer cases) to rapidly cool them, then reboot the computer using a hacking program that preserves much or all of the data on the RAM long enough for it to be copied to a thumb drive or whatever. Once the RAM contents have been preserved, that data can be scanned for passwords, encryption keys, and clues that enable cracking programs to break any disk encryption. Therefore, with a configuration such as mine, closing your laptop lid is a necessary but not sufficient condition. If I were to put my laptop in a position to be stolen, it would be safer to wait 5 minutes or so before leaving it unattended.

A last comment on TrueCrypt and security. The default way to provide TrueCrypt with the encryption key is to type the password on the keyboard when prompted to do so. The key generated from that password is then stored in RAM. The password itself is vulnerable to hardware keyloggers hidden in your keyboard, as well as the cold boot attack mentioned previously. An alternative method is to store the encryption key on a USB thumb drive. The advantage of that method is that, once the thumb drive is removed, the key cannot be recovered from RAM, and is not vulnerable to keyloggers. It is vulnerable to theft of the thumb drive, and also to a "sniffer" that intercepts data flow from the USB port to the motherboard. For the average user, I'm not convinced the USB key approach is substantially better than a conventional password entry. A two-factor authentication system, requiring both a password and a USB encryption key or SmartCard, probably offers a higher level of security. As far as I know, two-factor methods do nothing about vulnerability to hardware keyloggers. This vulnerability illustrates the difficulty in securing laptops and other portable computing devices that are unattended for any length of time.
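As a small illustration of the thumb-drive approach: a keyfile is just a blob of random bytes, and something like the following generates one that TrueCrypt and similar tools can use (the path is hypothetical; in practice you would write it directly to the thumb drive):

```shell
# Generate a 64-byte random keyfile from the kernel's entropy pool
dd if=/dev/urandom of=/tmp/example.key bs=1 count=64 2>/dev/null

# Lock the permissions down to the owner only
chmod 600 /tmp/example.key
```

Because the key material never passes through the keyboard, there is nothing for a keylogger to capture; the tradeoffs are the ones described above.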


2012-02-27; Virtualization & Cloud Computing

Virtualization, along with "cloud computing", is the hot topic in computing these days. What do these terms mean?

A simple definition of virtualization is the isolation of software from the real, physical hardware. An example that you're probably using right this minute is virtual memory. The operating system on the computer you are using to read this web page most likely uses virtual memory management. Virtual memory management is a generic term for the process of extending the physical RAM on your motherboard by "swapping" or "paging" unused data in RAM out to a swapfile/pagefile. What makes this work seamlessly is that the OS provides an abstraction layer between the hardware (RAM chips and hard disk swapfile) and application software (for example, your browser client). What the abstraction layer accomplishes is presenting all application developers with a single, consistent Application Programming Interface (API) that insulates the application from needing to manage RAM and swapfiles.
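On a Linux box you can see both halves of that abstraction side by side; the kernel reports physical RAM and the swap backing store together, even though applications see only one uniform address space:

```shell
# Physical RAM and swap backing store, as reported by the kernel
grep -E 'MemTotal|SwapTotal|SwapFree' /proc/meminfo
```

Applications never consult these numbers to allocate memory; the abstraction layer handles the shuffling between RAM and swap behind their backs.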

In the enterprise environment, there are many facets of computing that can be virtualized. Currently, the most commonly virtualized elements are application servers. Rather than the application server consisting of a server OS (such as Windows Server 2008 or some version of Linux) running directly on the server hardware, and the applications (such as a DBMS) running under that server OS, there is an additional software layer between the server hardware and the server OS. That layer is called a hypervisor. Virtualization of application servers provides many benefits, but also involves some additional costs. Likewise, organizations can use storage virtualization, desktop virtualization, and network virtualization.

For most organizations, it currently appears that desktop virtualization is going to be the "home run" of virtualization. The majority of IT/IS labor hours are spent on pretty basic support of end users and their workstations. Virtualization isn't going to eliminate the need for password resets and recovery of accidental file deletions, but it offers some fast and convenient ways to recover from crashed disks, resolve conflicting applications, manage licensing and usage, and handle disaster recovery.







return to Devon's home page