One of the pillars of fiscal responsibility -- individually or as an organization -- is knowing what you have. After all, how can you justify getting more if you don't know what you've already got or, perhaps more importantly, what you haven't? Right? Unfortunately, many organizations today have exactly that problem. Oftentimes, that budgetary black hole known as IT claims all of the money that it needs and more. I have personally worked with enterprises that were completely ignorant of hundreds, even thousands, of servers that were no longer supported and had consequently slipped from sight. Sometimes those servers are just running quietly in the corner. In fact, I've heard urban (IT) legends about NetWare 3.x servers that were so reliable that the company forgot about them and actually framed a wall in front of them. Amazing.

Yeah. About that...

But all facetiousness aside, a lot of companies don't know what they have. Even if they have some idea of what their nodes are, they may not know the software profile for each server. What software is on the device? What version of that software? What services does the device offer? For every answer, there are at least two questions. It's a bad situation.

Where to Begin?

What mechanism can you use to collect the data? Is your network consistent in platform? Almost entirely Windows? Unix/Linux? NetWare, even? If your network is particularly heterogeneous, I would suggest Perl as a mechanism. Not only is it powerful, high performance, and modular, it is also highly portable. You can run your Perl code in a Windows environment or a Linux environment without changing your code.
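To illustrate that portability, here is a minimal sketch of one script serving multiple platforms by branching on Perl's built-in $^O variable. The specific inventory commands chosen (wmic, rpm, pkg_info) are illustrative examples, not a complete mapping.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return a platform-appropriate command for listing installed software.
# $^O holds the name of the operating system Perl was built for.
# The commands below are illustrative; substitute whatever fits your shop.
sub inventory_command {
    my $os = shift // $^O;
    return 'wmic product get name,version' if $os eq 'MSWin32';
    return 'rpm -qa'                       if $os eq 'linux';
    return 'pkg_info'                      if $os =~ /bsd/i;
    die "No inventory command known for '$os'\n";
}

# Demonstrate the dispatch with an explicit platform name.
print inventory_command('linux'), "\n";
```

The same file runs unmodified on any of those platforms; only the command it hands to the shell changes.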

Be on the Lookout

You know what tools you want to use, but where do you start? There are a few ways to discover your environment. If your environment is fairly simple and centralized, an ICMP (ping) sweep may be acceptable, but that can be cumbersome and network intensive. If you're using an enterprise directory service, like eDirectory or Active Directory, you can often discover network information from within the directory itself. My suggestion, however, would be to leverage an event-based system -- for instance, capturing new ARP address/MAC announcements. That way, you catch nodes even if they come online and drop back offline between discovery intervals.
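For the simple case, a ping sweep is only a few lines of Perl using the core Net::Ping module. This is a sketch under assumptions: a /24 network, TCP probes (ICMP mode requires root), and a placeholder prefix of 192.168.1.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::Ping;    # core module; ICMP mode needs root, so we use TCP probes

# Build the host addresses for a /24 network: '192.168.1' -> .1 through .254.
sub subnet_hosts {
    my $prefix = shift;
    return map { "$prefix.$_" } 1 .. 254;
}

# Sweep the subnet and return the addresses that answered a probe.
sub sweep {
    my $prefix = shift;
    my $p = Net::Ping->new('tcp', 1);    # 1-second timeout per host
    my @alive = grep { $p->ping($_) } subnet_hosts($prefix);
    $p->close;
    return @alive;
}

# In a real run you would call, e.g.: print "$_ is up\n" for sweep('192.168.1');
print scalar(subnet_hosts('192.168.1')), " candidate addresses generated\n";
```

Note the drawback the text mentions: sweeping serially with a one-second timeout per silent host is slow and noisy, which is exactly why an event-based approach scales better.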


Once you've got a network address, the rest is fairly simple. There are several mechanisms you can use to collect configuration information from the nodes themselves. The most universal mechanism that I'm familiar with is SNMP. By using SNMP in your environment, you can collect configuration management information from your entire infrastructure, regardless of which operating system each node is running. Information such as running software, software versions, interface details, and disk space utilization can be collected without even knowing the operating system. Then you can gather more operating-system-specific information by using vendor-specific MIBs. Finally, if there is particular information that you want from individual nodes or platforms, you can use platform-specific mechanisms. For instance, it is possible to use a Perl script to remotely collect useful information, such as the antivirus definition version, out of the Windows registry.
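As a sketch of the SNMP approach, the following polls a node for a few standard MIB-II values using the Net::SNMP module from CPAN. The OIDs are the real, vendor-neutral MIB-II identifiers; the host address and community string are placeholders you would replace with your own.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Standard MIB-II OIDs -- vendor-neutral, answered by nearly any SNMP agent.
my %oids = (
    sysDescr  => '1.3.6.1.2.1.1.1.0',    # OS / platform description
    sysUpTime => '1.3.6.1.2.1.1.3.0',    # time since the agent restarted
    sysName   => '1.3.6.1.2.1.1.5.0',    # configured host name
);

# Poll one node and return a hashref of name => value.
sub poll_node {
    my ($host, $community) = @_;
    require Net::SNMP;    # CPAN module, loaded at runtime
    my ($session, $error) = Net::SNMP->session(
        -hostname  => $host,
        -community => $community,    # SNMPv2c read-only community string
        -version   => 'snmpv2c',
    );
    die "SNMP session to $host failed: $error\n" unless $session;

    my $result = $session->get_request(-varbindlist => [ values %oids ]);
    die $session->error . "\n" unless defined $result;

    my %info = map { $_ => $result->{ $oids{$_} } } keys %oids;
    $session->close;
    return \%info;
}

# In production: my $info = poll_node('10.0.0.5', 'public');
```

The same loop over your discovered address list works whether the node is a Windows server, a Linux box, or a managed switch; only the vendor-specific MIBs you add later differ.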

Why Bother to Automate Audits?

It's true, you can collect all of this information without writing a single line of code. You can collect the data manually. But at what cost? How long would it take for your staff to visit every node and collect every useful tidbit of information off of that device? If you calculate the person-hours, the manual approach would likely be substantially more expensive than automating. And as if the cost weren't justification enough, manually collected data is stale nearly as soon as you're done collecting it. An appropriately constructed script can run and collect your data multiple times a day, ensuring that your configuration management information is perpetually fresh. Fresh, passively collected data is the holy grail. Really, now, what more could an organization striving for improved ITIL compliance hope for? Next up in our Sensible IT column: We've Got the Data...Now What?
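Running the script multiple times a day is just a scheduler entry. On a Unix/Linux collector host, a crontab line like the following would do it; the script path, user, and log location are illustrative placeholders.

```
# /etc/crontab -- run the collection script every six hours
# (paths and the 'audit' user are illustrative; adjust for your environment)
0 */6 * * *  audit  /usr/local/bin/collect_inventory.pl >> /var/log/inventory.log 2>&1
```

On Windows, the Task Scheduler serves the same role.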