
Monday, 13 December 2010

Weekend Project: Intrusion Detection on Linux with AIDE

Front-line measures like firewalling, strong authentication, and staying on top of security updates are mandatory steps to keeping your system secure. But you also need to check your system's health frequently and make sure a compromise didn't slip past you unnoticed. A good place to start is with an intrusion detection system (IDS) that monitors your machine's resources and flags any changes that might indicate an intruder or a rootkit. The Advanced Intrusion Detection Environment (AIDE) is an open source IDS that you can set up in a weekend.

Before we get started, though, it's vital to understand how an IDS like AIDE functions. AIDE is a host-based IDS, which means that it scans the filesystem and logs the attributes of important files, directories, and devices. Each time it runs, it compares its findings against the previous, "known good" data, and alerts you if something has changed. The downside is that if your system is already compromised before you install and run AIDE for the first time, you won't be able to detect it.
Of course, the odds that your system is already compromised aren't high, but the fact remains that the only way to guarantee that it is clean is to install and run AIDE right after you install the OS, but before you connect to the network. Put that on your to-do list when deploying new machines from now on, and as for your existing Linux boxes, make do as best as you can.
AIDE runs on Linux and most other Unix-like operating systems. It is provided by most of the distributions, but you may want to consider grabbing the code from the project's Web site and compiling from source — there are a few advanced options that are only available at compile-time via the ./configure script. If you've already installed an AIDE binary, run aide -v, which will dump out the version number and the compile-time options used. For now, though, we'll talk about the basic installation and usage.

Setup and First-Run

AIDE works its magic by reading in a configuration file that contains both a list of directories and files to scan, and the attributes of each entry to log. It then works its way through the tree of nodes to be scanned, and writes out a database of the attributes found. There are currently thirteen attributes that AIDE can log — including permissions, owner, group, size, all three timestamps (atime, ctime, and mtime), plus lower-level stuff like inode, block count, number of links, and so on.

On top of those, AIDE supports multiple hash algorithms with which it can generate checksums for each file. By default, the list includes MD5, SHA-1, SHA-256, SHA-512, RMD-160, Tiger, HAVAL, and CRC-32. If you compile AIDE with the mhash option to the configuration script, you can also use GOST and Whirlpool hashes.
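To see the core idea in miniature, you can mimic AIDE's checksum baseline by hand with sha256sum. This is only an illustration of the principle, not AIDE itself, and the file names here are made up:

```shell
# Record a "known good" baseline of checksums, then detect a change.
mkdir -p demo
echo "original contents" > demo/motd

# Write the baseline, much as AIDE's --init step writes its database.
sha256sum demo/motd > baseline.txt

# A later check against the unchanged file passes quietly.
sha256sum -c --quiet baseline.txt

# Tamper with the file; the same check now fails.
echo "tampered contents" > demo/motd
sha256sum -c --quiet baseline.txt || echo "demo/motd has changed"
```

AIDE does the same thing at scale, tracking a dozen other attributes alongside the checksums.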
Binary packages probably include a decent example configuration file in /etc/aide/aide.conf — in a bit, we'll explain why you might want to use a different location, but for the moment, open the file in an editor, and take a look at the configuration directives.
Near the top are rule definitions, which are just user-supplied names followed by an equal sign and a list of attributes and hashes. For example:

SizeOnly = s+b
SizeAndChecksum = s+b+md5+sha1
ReallyParanoid = p+i+n+u+g+s+b+m+a+c+md5+sha1+rmd160+tiger+whirlpool
The first line activates just the size (s) and block count (b) attributes. The second adds MD5 and SHA-1 hashes, and the third logs just about every supported feature, including inode (i), timestamps (m, a, and c) and a fistful of additional hashes.
Below these rule definitions you'll find lines listing the directories and files to check, using regular-expression based formulas. For example:

/etc SizeAndChecksum
/sbin ReallyParanoid
/var SizeOnly
!/var/log
!/var/spool

The first three lines are "positive" expressions, which tell AIDE to include everything that matches the regular expression. The leading exclamation point on the last two indicate a "negative" expression, which in this case says to exclude the rapidly-changing /var/log/ and /var/spool/ directories. As you can see, each positive expression is followed by the name of one of the rule definitions. You could also use a literal string instead of a rule name here, such as /var/www/intranet p+s+b+a+sha256 — the rule names are just for easier reading.

Correctly defining your regular expressions and rules is the trickiest part of using AIDE. Too many files and directories, and you can end up with extremely long logs to read through on every integrity check. Too narrow of a set, and you risk missing an intruder. It's also not trivially easy to get the right balance of regular expression syntax when you want to match some files in a directory hierarchy, but not others. It's a good idea to consult the AIDE user manual and read man aide.conf for help combining wildcards with positive and negative expressions.
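As a hedged sketch of how selection lines can be combined — the paths here are examples, and you should verify the exact syntax against man aide.conf for your version:

```
# Check /etc recursively, but skip the frequently rewritten mtab:
/etc SizeAndChecksum
!/etc/mtab

# A '=' prefix limits the match to the directory entry itself,
# without descending into everything below it:
=/home SizeOnly
```

Negative expressions are evaluated against the same tree, so a single stray `!` pattern can silently exclude more than you intended; test your ruleset before trusting it.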

Of course, there's no substitute for actually trying out your configuration with a real run. Run sudo aide --init, and AIDE scans the designated files and directories, writing its findings to a database. The location of the database is determined by a line of the form database=URL in the configuration file. The data is also copied to stdout, so you can watch the process from a terminal window. If you're missing some files and need to tweak your expressions, you can do so and re-run the process before proceeding.

Comparing Subsequent Checks

Now comes the important (and potentially confusing) part. Time goes by, and whenever you feel it's prudent, you decide to run another scan with sudo aide --check. Except that unless you specify otherwise, AIDE looks for a configuration file in the current working directory — which raises the question of where you should keep that all-important configuration file.
At first glance, it might seem like you should store it in a standard location like /etc/aide/, but that's actually a bad idea, because if your system ever was cracked, intruders could not only read your AIDE configuration and look for directories that you've elected to ignore, but they could alter the URL of the output database in order to trick subsequent AIDE scans.

So the best plan is, in fact, to store the aide.conf file on removable media (preferably read-only) that you mount just before running a scan. For similar reasons, the safest place to store AIDE's output database is also on this removable media, so inside the configuration file the database line might be database=/media/AIDE_CD_012345/aide.db. You tell AIDE where the configuration file is with the --config command-line parameter.

Thus, the correct scan command to run is (in this example) sudo aide --check --config=/media/AIDE_CD_012345/aide.conf.

If someone replaces your copy of /sbin/fsck, the second scan should notice it and report it to you on stdout. On the other hand, you may have good reason to alter a system file like /etc/pam.conf yourself, in which case you don't need AIDE throwing a red flag every time you run a scan. You can invoke AIDE as sudo aide --update --config=/path/to/your/aide.conf to both run a scan and output an updated copy of the database. AIDE will save the new database at the URL you specify in the configuration file after database_out=. If you're following proper protocol, this will be someplace mounted read-write, and you will subsequently copy the new database to your read-only media.
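Put together, an update cycle might look like the following sketch. The media mount point and the scratch path are examples, and this assumes database_out in aide.conf points at the writable scratch location:

```shell
# Run a scan and write a refreshed database (per database_out in aide.conf).
sudo aide --update --config=/media/AIDE_CD_012345/aide.conf

# After reviewing the report, copy the new baseline onto fresh
# read-only media so the next --check compares against it.
cp /var/tmp/aide.db.new /media/new_aide_cd/aide.db
```

The key discipline is that the database the next check reads must never sit on media an intruder could rewrite.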
So how often should you run scans? It depends on the machine. On the plug-computer that runs your house's lighting control and sprinkler system, rarely. On the company's Internet gateway, daily at least.
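For machines that warrant regular scans, a root crontab entry can automate the check. The schedule, mount point, and mail address here are only an example:

```
# /etc/crontab fragment: nightly AIDE check at 02:30, assuming the
# read-only media stays mounted at /media/AIDE_CD_012345.
30 2 * * * root aide --check --config=/media/AIDE_CD_012345/aide.conf 2>&1 | mail -s "AIDE report" admin@example.com
```

Mailing the report means a compromised machine can't quietly suppress it, at least not without also tampering with outbound mail.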

Extra Credit: ACLs, SELinux, and Database Signing

As you've probably realized, all AIDE can do is alert you to changed files. The ball is in your court to recognize whether the change is a sign of a security breach. Some system files should never change; some (such as tty devices) change owner and permissions during regular operation. You have to get to know your system.

Depending on your system, you may not find the generic AIDE binaries supplied by your distribution up to the task. That is because there are a few attributes that AIDE can monitor only if they are configured as compile-time options, such as support for SELinux's security contexts, access control lists, and extended file attributes. If you need any of those features, you'll want to compile AIDE yourself from source. Luckily it's not hard; the standard GNU compiler chain is all that is required.
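A build along those lines might look like the sketch below. The option names shown are the ones recent AIDE releases advertise, but run ./configure --help to confirm what your version actually supports:

```shell
tar xzf aide-*.tar.gz && cd aide-*/
./configure --with-selinux --with-posix-acl --with-xattr --with-mhash
make
sudo make install
aide -v   # confirm the compile-time options took effect
```

The final aide -v is worth the habit: it's the quickest way to verify that the binary you're running was built with the features you need.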

As mentioned above, the compile-time options also include support for additional hash algorithms. Which you use is largely a matter of personal preference. But there is also one other compile-time option that you should consider if you want a really secure setup. Configuring AIDE with HMAC (hash-based message authentication code) support allows you to cryptographically sign both the configuration file and the output database.

This is a compile-time option, not a run-time option, because adding it in prevents the aide binary from running a scan in the event that the signature of the config file or database doesn't match. That's what you want: a binary that cannot be tricked into using a compromised database. Because let's face it — the read-only media you're using is only as secure as the locker you store it in. To configure AIDE with HMAC support, read the final section of the AIDE online manual. You'll need to pick a pair of encryption keys, but otherwise it's a fairly simple process. After that, it's sit back and scan-as-usual.

Foil Firesheep and other Nuisances on Linux

You've probably heard a lot about Firesheep, the Firefox extension that exposes user credentials and allows almost anyone to take over an account on Facebook, Twitter, and many other sites with a few clicks. But what do you do to defeat it? Read on, and you'll be able to foil Firesheep in no time.

A lot of Web sites use cookies to store authentication information. You'll log in via an HTTPS connection, but then revert to HTTP once you've authenticated. Then the cookie — with your authentication information — is sent over plain HTTP. This is no big deal when you're on your home network (assuming you trust all the people in your home, of course, and you use WPA or WPA2 for your Wi-Fi). But if you're in a coffee shop, at a conference, or using some other public network, then sending your cookies over HTTP makes it easy for someone else to hijack your session.
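The fix at the protocol level is the cookie's Secure flag, which tells the browser never to send that cookie over plain HTTP. A response header that sets it looks like this (the cookie name and value are invented for illustration):

```
Set-Cookie: session_id=abc123; Secure; HttpOnly
```

Without the Secure flag, the browser happily attaches the cookie to ordinary HTTP requests, which is exactly the traffic Firesheep listens for; HttpOnly additionally keeps scripts from reading the cookie.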

Understanding Firesheep

Firesheep makes it really easy. There's been a lot of press about Firesheep, and some of it gives the impression that HTTP session hijacking is a new development, or that it was suddenly made possible by Firesheep. It's not new, but the tools to do it were more primitive, and you needed to know a little about what you were doing. Firesheep makes it a point-and-click operation.

Firesheep is a Firefox extension that examines open wireless networks for authentication cookies, and then displays a list of users and Web sites that it's captured. If the Firesheep user snags someone's credentials, all it takes is a double-click to log into the site as the victim. Note that this only works for sites that Firesheep supports — so if you're logging into Twitter, Facebook, etc., then you should worry. You might be safer (from Firesheep, anyway) when logging into lesser trafficked sites. But why chance it?

None of this is entirely new. You could do the same thing with Wireshark (formerly Ethereal), but it takes a wee bit more smarts to piece together the bits with Wireshark.

Want to try Firesheep out on your Linux box to see what it does? Unfortunately, or perhaps fortunately, you can't. Not right now, anyway — it only runs on Mac OS X and Windows, even though it's a Firefox extension. You can grab it from GitHub. There's also a port for, of all things, WebOS.

Fighting Firesheep

If you're convinced that Firesheep and its ilk are a problem, what's the solution? It's a two-parter, unfortunately — and only some of it rests on the users.

What you can do as a user is to force HTTPS connections on sites that support them, and avoid using authentication over public networks if the sites in question do not support secure connections. For instance, if the company Content Management System (CMS) or Webmail doesn't have an SSL cert, wait until you're able to connect over a private network.

Another option, for Firefox users, is HTTPS Everywhere, which is being developed by the Tor Project and the Electronic Frontier Foundation (EFF). Like Firesheep, HTTPS Everywhere doesn't support every Web site, but it does support a wide range of popular sites, including GitHub, Dropbox, Twitter, Hotmail, and Facebook. Note that Facebook requires a few additional steps; see the instructions with the 0.9.0 release. In particular, you'll also want the AdBlock Plus extension for Facebook, because a lot of the ads come from insecure sites that have Facebook trackers. (Thanks, Facebook...)

Chrome and Chromium users can use the --force-https option to force HTTPS connections, though this has some drawbacks in that you can't connect to non-HTTPS sites. If you're doing your normal browsing plus connecting to sites using authentication, it can be a pain. Chrome also denies self-signed certificates when using the --force-https option, so that means you're going to be out of luck if your company or project is using self-signed certificates.

In general, the best thing you can do is to be aware of the problem and be conscious of when your information could be exposed.

Supporting Your Users

Admins can help by offering secured connections. If you're running a server with any kind of user authentication, you should be offering your users secured connections. This means having an SSL certificate installed and, preferably, only allowing HTTPS connections when authentication is involved.
Computationally, this is a bit more expensive — but you shouldn't need to upgrade your hardware just to support additional HTTPS connections unless you're supporting a lot of traffic.
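One common way to enforce HTTPS on Apache is a plain-HTTP virtual host whose only job is to redirect every request to its HTTPS counterpart. The host name below is a placeholder:

```
<VirtualHost *:80>
    ServerName www.example.com
    Redirect permanent / https://www.example.com/
</VirtualHost>
```

With this in place, the HTTPS virtual host (on port 443, with your certificate configured) handles all real traffic, and authentication cookies never travel in the clear.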

The other consideration is an SSL certificate. You can purchase an SSL cert from several different companies. Many domain registrars provide SSL certs for a fee, and some also provide certs for free with domain registration. Some projects, like StartCom, also provide certs for free. Be careful with some of these: if the root certificate for the SSL provider is not distributed with the browser, the browser will treat the certificate like a self-signed certificate. In other words, the connection may be secure from Firesheep, but the browser is not verifying that you're connecting to the site you think you're connecting to.

Self-signed certificates are fine for organizations that want to encrypt traffic for a handful of users, and also good if you want to secure your own personal site. For example, I'd much rather have a self-signed cert for my blog when I'm updating it from a conference than to connect in the clear. We'll cover creating self-signed certificates soon.
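As a preview, generating a self-signed certificate takes a single openssl invocation. The key size, lifetime, and host name below are illustrative choices, not requirements:

```shell
# Create a private key and a self-signed certificate valid for one year.
# Replace the CN with the host name your site actually serves.
openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout server.key -out server.crt \
    -days 365 -subj "/CN=blog.example.com"

# Inspect the subject and validity window of the result.
openssl x509 -in server.crt -noout -subject -dates
```

Point your web server's SSL configuration at the resulting key and certificate, and remember that visitors' browsers will warn about the untrusted issuer — which is fine for a personal site, and beats connecting in the clear.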


Firesheep is just the latest threat to users, but it only exposes a vulnerability that's been there all along. Though there have been a lot of fingers pointed at the Firesheep developer for unleashing this, I think it's a good thing that it's out there. It shows just how vulnerable users have been and, I hope, will prompt users and IT departments to take these threats more seriously. Linux users are used to scoffing at the malware that continually causes problems for Windows users, but this is a cross-platform problem. Make sure you're taking steps to be safe.

Understanding the Stable Linux Kernel

Will we have a new kernel by Christmas? I remember wondering that back in 1998 when 2.2 was on the horizon. Things have changed quite a bit since then in kernel development, and with Greg Kroah-Hartman's announcement last week about changes in the stable kernel release procedure, it's a good time to look at how the stable tree works.

A Little Backstory

Kernel development hasn't always worked this way. If you're relatively new to Linux, you may not remember the long, long, long wait for development kernels to become stable during the early 2.x series. The kernel developers would maintain a stable kernel with an even minor number (2.0.x, 2.2.x, 2.4.x) and a development kernel with an odd minor number (2.1.x, 2.3.x, 2.5.x).

In the case of 2.2.0, for example, Linus Torvalds was talking about releasing the 2.2.0 kernel in June or July of 1998. Months passed, and the question became "will we have 2.2 by Christmas?" We didn't. The 2.2.0 kernel came out in January of 1999, quite a while after the 2.0.0 kernel which was released in June of 1996. 2.2.0 carried some big features people were eager to get — like improvements to SMP (multiple processors) and support for PowerPC processors.

It took quite a while for features to make it into stable kernels — so companies would port features from the development series into the stable series, and as Kroah-Hartman says it wasn't easy:
2.5 was hard. Real hard. We had all these ideas of things we wanted to do, we adopted new tools and were figuring out how to use them, we did all sorts of crazy development as a large number of us were now being paid to do this development, so we had more time to do it.

And, because we all had more time to do this work, it took even longer. 2.6.0 didn't come out until almost 3 years later, taking 86 releases.

Because of the long development cycle, companies really wanted to use parts of the 2.5 kernel in their "stable" releases. So they backported things from the 2.5 tree into the 2.4 trees, and started calling them "enterprise" kernel releases. These releases were _hard_ to do by the distros and the developers in charge of them. Usually the same developer had to make the changes in the 2.5 kernel and then turn around and try to figure out how to backport the changes to the 2.4 kernel for a distro to use. It was a nightmare for the developers, something that none of them who lived through it ever wanted to do again.
So, a number of changes were gradually introduced after the 2.6.0 kernel was released. Instead of starting a 2.7 development tree, Linus would release an -rc that had gradual changes from the previous kernel. Releases came out rapidly — every two to three months — but there were more modest changes between releases.

This is Kernel Development. There are Rules

The only objection to that style of development was a lack of a "stable" tree that could have bugfixes, security fixes, and so on. Three months is not a long time to wait for SMP — three months is eternity waiting for a security release!

So Kroah-Hartman started up the stable kernel tree to bridge the gap between the development kernel and the stable kernel. Right now, for example, there is not just one stable kernel but several being maintained in parallel. If that sounds confusing, it is. More on that in a second.

So what goes into the -stable kernels? There are a few guidelines used to keep disruption in the -stable kernel to a minimum. It can't be a large patch, it can't be a new feature or an unnecessary trivial change (like spelling fixes), and it has to be accepted by the subsystem maintainer. Security changes are, of course, high priority and appropriate for the -stable tree.

But a couple of kernels have been maintained as stable longer than others, specifically the 2.6.27 and 2.6.32 kernels. Why? These kernels have been used by major enterprise distributions. 2.6.32, for instance, was used for SUSE Linux Enterprise 11, Red Hat Enterprise Linux 6, and Ubuntu 10.04 LTS, to name a few. So that kernel will be on enterprise systems for quite some time. But the other kernels were maintained somewhat unevenly, and there was a bit of confusion over how long kernels would be maintained and which one should be used.

Stable Kernels Now, and Longterm

So Kroah-Hartman has clarified how the releases will be maintained. He has introduced the concept of "longterm" kernels, which include 2.6.27 and 2.6.32; 2.6.35 will be added to the list, to be maintained by Andi Kleen.
Kroah-Hartman himself will only be maintaining the last kernel released as -stable.

Have questions or want to jump in on stable kernel development? There's now a stable mailing list anyone can subscribe to in order to watch or participate in stable development.

So, overall — not a massive change in the scheme of things. But it's important, and will further refine the way the Linux kernel is developed and used, and continue the long tradition of open development that Linux users have enjoyed for nearly 20 years.

Weekend Project: Set Up Safe Guest Wi-Fi with Linux

The holiday season is upon us, and you know what that means: relatives coming over wanting to use your Wi-Fi. If you'd like a solution somewhere between "run an open, unsecured AP" and "hand out your WPA2 password to people who write things like that down on sticky notes," then setting up a captive portal is a convenient option.

There are several open source captive portals to choose from for your Linux box, including NoCat, HotSpotPA, PacketFence, and ChilliSpot. Unless you have time to explore the features and options, however, a simple, actively-developed solution like WiFiDog is probably the best bet. WiFiDog will run on any Linux distribution, and is an optional package on most of the embedded Linux router firmware projects (like OpenWRT and DD-WRT).

Obviously the task that a captive portal package performs depends on the presence of two (or more) network interfaces on the machine; the portal intercepts connections originating on the restricted interface, performs some sort of authentication, and subsequently begins routing traffic to the unrestricted interface, whether it is also on the LAN or is a WAN port. The canonical layout of this setup is like that on a typical router; the Wi-Fi interface is the restricted side, and the WAN interface is the unrestricted route to the Internet upstream. Other, wired interfaces on the router typically do not involve transient devices, so the portal software does not listen or restrict access on them.
But a dedicated router is not necessary; the same setup will work just as well if you install a Wi-Fi card on a standard Linux server. Nor do you necessarily need to lock yourself out of the Wi-Fi network along with your visitors: if your router is capable of running multiple WLAN SSIDs simultaneously through "virtual interfaces," you simply choose which interface is to be used for the captive portal, and which interfaces are not.

A basic WiFiDog setup consists of two components: the gateway, which listens for client connection requests, and the authentication server, which approves clients and maintains the active connections. You can run both programs on a full-fledged Linux box, but an embedded router will typically only have enough resources to run the gateway locally; you will want to run the authentication server on another machine.

In terms of actual usage, WiFiDog requires users to create an "account" that is unique to their email address. When the user visits the Wi-Fi gateway, the login page allows them to either log in, or to create a new account, and temporarily opens up network access so that they can check their email for the registration email. When they attempt to log in with the gateway, the gateway forwards the request to the authentication server — if the credentials check out, the gateway and the server exchange tokens, and traffic is permitted from the new client. That may seem like an odd authentication handshake, since the client and the gateway never exchange cryptographic tokens, but it allows WiFiDog to work for a wide variety of devices — unlike systems that require the client to download an SSL certificate or keep a JavaScript script running during the entire browsing session.

Installation and Setup: The Gateway

The gateway package is available as a source code bundle from the WiFiDog project, although if you are installing the gateway on a server or desktop you may find it available through your distro's package management system. If you are interested in running the gateway on your Linux router, your best bet for installation instructions is to check with the firmware project's documentation, but you will probably discover a pre-compiled binary available or included in the default builds.

For everyone else, though, the source is easy to compile, as it uses only Linux's standard netfilter and iptables functionality. Just unpack the source, cd into the directory, and do the traditional ./configure; make; sudo make install three-step.

This creates the wifidog binary and a skeleton configuration file in /etc/wifidog.conf. Open the configuration file in the editor of your choice; the file created documents every option in comment blocks, so it is comparatively easy to read through and supply the right settings. The most important settings are the GatewayID, ExternalInterface, GatewayInterface, and AuthServer.

GatewayID is a name that you assign to this specific node, such as MyHomeWifi. WiFiDog supports administering multiple gateways from a central location, thus the need for naming, but if you only have one node, just pick a name you'll remember. The ExternalInterface and GatewayInterface settings are the upstream and downstream network interfaces on the system, respectively. In a common configuration, you would set eth0 (a wired Ethernet interface) to be the ExternalInterface, and wlan0 (a wireless network card) to be the GatewayInterface.
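Pulling those settings together, the relevant portion of /etc/wifidog.conf might read as follows, using the example values from above:

```
GatewayID MyHomeWifi
ExternalInterface eth0
GatewayInterface wlan0
```

If your router presents the Wi-Fi radio under a different interface name (ath0 and ra0 are common on embedded firmware), substitute it for wlan0.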

The AuthServer setting consists of a block describing the authentication server that this node needs to contact to authorize new clients. Multiple AuthServer blocks are permissible; again this is to allow distributed setups — the node will try them in order until it gets a response. A sample block looks like this:

AuthServer {
    Hostname mylocalserver.lan
    SSLAvailable yes
    Path /wifidog/
}
This tells the node to contact the authorization server at https://mylocalserver.lan/wifidog/. You can customize the URIs used to reach the authentication server in several ways, including tweaking the path to the user login script and success page; the default settings are generally fine unless you already run a complicated Apache configuration on the server.

You can also set timeouts and "keepalive" ping intervals in the config file, along with basic firewall rules, a list of clients to automatically pass through (the TrustedMACList option), and a custom "portal landing page." When the file is set up, save it, then start the gateway service from a shell prompt with sudo wifidog -f -d 7. The -f switch keeps the process running in the foreground, and the -d 7 switch sets maximum verbosity, so you can work out all of the kinks. For production usage, you would omit both switches and let wifidog start as a daemon.

Installation and Setup: The Authentication Server

The prerequisites are noticeably heavier on the authentication server side; you'll need Apache, PHP, PostgreSQL, and about a dozen PHP extensions. Most are common, like xmlrpc and mhash, but some are not, such as Auth_RADIUS. Consult the official documentation for a current list. The official docs also suggest increasing the PHP memory limit in your php.ini file to at least 32MB.

With the prerequisites in place, though, setup is straightforward. Download and unpack the source code, placing it wherever you choose in your DocumentRoot. It should be available to other machines in the URL you set in /etc/wifidog.conf on the gateway — in this example, http://mylocalserver.lan/wifidog/. When you visit http://mylocalserver.lan/wifidog/install.php, you will be greeted by the authentication server install wizard.

The current installer can check package dependencies, making sure that you have supported versions of PHP and PostgreSQL, but it does not create the PostgreSQL database for you automatically. It does explain how to do so, though, including step-by-step example commands to create the "wifidog" database user, followed by the database and tables. Step through the package validation pages one at a time, verifying the proper versions of the PHP libraries and file permissions. Finally, you will be asked to upload the database schema and create an administrator account. Once this is done, remove the install.php file.
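The database bootstrap the wizard describes boils down to a couple of standard PostgreSQL administration commands. The role and database names follow the installer's "wifidog" convention; your setup may differ:

```shell
# Create the "wifidog" role (you will be prompted for its password),
# then a database owned by it for the authentication server to use.
sudo -u postgres createuser --pwprompt wifidog
sudo -u postgres createdb --owner=wifidog wifidog
```

The schema itself is uploaded later through the install wizard, so these two commands are all the manual database work required.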

Configuring the authentication server is done by editing the file config.php in the wifidog directory. As with the gateway, examples and comments document each of the options. At the top are typical database settings (hostname, database user) that you may need to adjust. Below that in the "WiFiDog Basic Configuration" section are some options relating to Google Maps and other bling; for the most part you can leave these as-is.

Managing a Gateway

When the server is up and running, connect to the base URL (directly, not through your Wi-Fi gateway); you should see the WiFiDog login page. Here you should log in with the administrator account you created earlier; this provides access to the web-based management tool. The first time you connect, nothing will be configured, so you begin by creating a "network" from the Network administration menu.

In the simple case (one authentication server and one gateway), you can assign it any memorable name of your choosing, such as MyBasicNetwork. By default, WiFiDog uses email account verification to authenticate users; if you want to use another authentication method, you can choose it under "Network authentication." The "Validation grace period" setting allows a newly logged-in user a configurable time slot during which they can check their email for the validation message (by default it is 20 minutes).

After your new network is defined, you add your gateway node to the network by choosing "Add Node" from the "Node administration" menu. The Node ID that you enter must be the name of the node that you created in wifidog.conf on the gateway machine: it was called GatewayID in that file (the inconsistency is WiFiDog's problem, not yours); our example used MyHomeWifi. Save your settings, and clients will be able to connect through the gateway node, based on the general rules that you set up under MyBasicNetwork. Here again, WiFiDog's ability to remotely manage multiple nodes on multiple networks allows for great flexibility in a distributed system, even if it seems like overkill for a single hotspot.

Now, from the web administration interface, you can see which accounts are logged in, check their bandwidth consumption, graph network usage, and check the registration logs. You can also manage user accounts, if you detect strangers that don't belong, or suspicious account creation activity.

Extra Credit: Extra Vigilance

The procedure outlined above won't protect you from ne'er-do-wells all on its own; we simply set up a basic gateway without any service restrictions. To keep your hotspot visitors from bringing down your LAN, you have to take extra precautions — starting with setting up firewall rules on the gateway in wifidog.conf. You can use all of the same packet filtering schemes iptables supports on Linux, so you can set up rules to block BitTorrent traffic, restrict access to particular IP addresses, and more. There are examples detailed in the comments in wifidog.conf, or you can consult any iptables HOWTO.
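For a flavor of what such restrictions look like in raw iptables terms — the interface name and subnet are examples, and in practice you would express the equivalent rules in wifidog.conf's firewall sections rather than by hand:

```shell
# Keep guests on wlan0 off the private LAN, but let them reach the Internet.
iptables -A FORWARD -i wlan0 -d -j DROP

# Drop outbound SMTP from the guest side, a common spam-relay precaution.
iptables -A FORWARD -i wlan0 -p tcp --dport 25 -j DROP
```

Rules like these run on the gateway itself, so they apply even to clients that have successfully authenticated through the portal.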

On top of that, though, it is still your responsibility to notice who is connecting to your network. Recognizing and blocking nefarious visitors is by no means a simple task; one of the problems of Wi-Fi in general is that attackers can sniff the signal, note authenticated IP and MAC addresses, and spoof packets from them to imitate a logged-in client. So as is usually recommended, when it comes to securing your WiFiDog captive portal, you can't get safer than deny by default. Just be sure not to wander too far off, for when the relatives need you to open up a few more ports.

Rabu, 08 Desember 2010

0 Five Tips for Successful Linux Deployments

Linux deployments still carry high expectations. Before pushing the button on that Linux deployment, make sure you have all the bases covered; we have a few tips to help save the day.

You could, and some have, write a book on deploying Linux. Here we have a few tips to help ensure that your Linux deployment is a successful start to a long relationship with Linux in your organization.


Read the Documentation

The major Linux enterprise distributions have fairly comprehensive documentation, but much of it goes unread. Sure, folks start reaching for the documentation when something goes wrong, or to check a specific problem — but there's something to be said for reading the documentation before you encounter problems.
Why? Take a look at Red Hat's set of documentation as an example. Red Hat provides guides for deployment, installation, resource management, and so on. Some of it is obvious, or something you'd stumble across in the due course of an installation. Other things, like how to use cgroups or tips on power management, are generally non-obvious and never presented as installation options.
Once the deployment team has read the documentation, make sure it's available to others as well — and encourage them to read it!

Test, Test, Test

Remember the old joke about asking for directions to Carnegie Hall? How do you get to Carnegie Hall? Practice. How do you ensure a successful Linux deployment? Testing. Specifically, a test deployment.
Before deploying Linux, make sure that you have put it through its paces. You want to make sure that the hardware you've selected for a deployment is up to the task. Linux performs well, but it's not a miracle worker — whatever you're doing with Linux needs to have ample hardware underneath. Make sure you're not only ready for average loads, but spikes in activity. Retailers, for example, ought to be set for holiday sales and other peak seasons — not just the average daily sales.
Test your users, too. Get feedback from users before pushing Linux out, to avoid unpleasant surprises that can't be solved with more software.

Cross-Platform Preparedness

Odds are, you're not deploying Linux in a Linux-only shop. Be sure that Linux systems will be a first-class citizen on your network. Linux clients need to have the tools to access Microsoft systems and you need to know if your users are going to need access to Windows-only tools. If you're using SharePoint or other Web tools that favor Windows or Internet Explorer, make sure you've accounted for that and have come up with workarounds. You may even need to leave a few users on Windows, or ensure that they have access to Windows VMs.
Authentication is also an issue. We've written before about Single Sign-On (SSO). A surer path to success is to let users access the new systems with their existing credentials out of the box. Don't ask users to maintain separate credentials for Windows systems and Linux systems.

Secure It

After the deployment, it's time for some penetration testing. You've probably (we hope) done some of this as part of the initial testing, but now it's time to do it again. And again.
Make sure that you're using an Intrusion Detection System (see our Weekend Guide) and that you've ensured no unnecessary services are running, no unnecessary ports are open, etc.
And make sure that you have in place an update policy for not only the underlying OS, but the software running on Linux — especially if it's not part of the vendor's package offerings. How often will you be running updates? Set regular maintenance windows and plan for special cases when zero day or other urgent fixes come up.

Post-Deployment Feedback

You've deployed the systems and everything is working as expected. Job done, right? Wrong. Successful IT deployments are never done — just at a different stage in the lifecycle. After deploying Linux, get some feedback from the stakeholders and make sure that they really are satisfied.
Even better, ask them what can be done better next time or what might help make this deployment more successful. A lack of complaints or request tickets isn't a guaranteed indicator that nothing is wrong — just that it's not egregious enough to motivate people to stop work to complain. Eliciting feedback not only gives you a learning opportunity, but it also helps users realize that the IT folks care. Also? It's possible that some deployments have been successful, but unnoticed. Most of the time, IT is invisible to the rest of the company, until something breaks. If you've had a successful deployment, call attention to it by asking for feedback.


Linux has become sufficiently mainstream that deploying it isn't a mystery. Most companies already have some expertise in working with it — though some have more than others, of course. Share your experiences deploying Linux in the comments — the more we know, the more successful Linux will continue to be.


0 Interactive Terminal Greeting

Anyway, I'm not going to elaborate on basic terminal commands; you can learn more about those here --> Unix Toolbox

I wanted to make a very short tutorial about how you can make terminal more interactive & fun to work in. Well if you're interested, then read on.
Basically you need to install the applications cowsay & fortune from your package manager.
If you're running Ubuntu or its derivatives, simply type the following in a terminal:

$sudo aptitude install fortune-mod cowsay

After you're done with the installation, open up /etc/bash.bashrc with your favorite text editor, or type in a terminal:

$sudo gedit /etc/bash.bashrc

Now, scroll down to the bottom & paste the following lines:

# Spicing up Terminal
fortune | cowsay -n -f small.cow


After you're done, save the changes & exit.

Now, whenever you open a new terminal, a character will greet you with a totally random quote.
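One caveat: the two-line snippet above will print an error on machines where the tools aren't installed, and /etc/bash.bashrc can also be read by shells you never see. A slightly more defensive variant (same greeting, just guarded — the guard conditions are my own addition, not part of the original tip) is:

```shell
# Greet only interactive shells, and only when both tools are present.
if [ -n "$PS1" ] && command -v fortune >/dev/null 2>&1 \
                 && command -v cowsay  >/dev/null 2>&1; then
    fortune | cowsay -n -f small.cow
fi
```

Paste this in place of the two plain lines and the greeting quietly skips itself anywhere it can't run.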

Selasa, 07 Desember 2010

0 Upgrade Ubuntu 10.04 to 10.10 Desktop and Server

With the release yesterday of Ubuntu 10.10, on 10/10/10, I decided to put together a brief tutorial on how to upgrade your Ubuntu 10.04 Desktop or Server to the latest version, Ubuntu 10.10. Before beginning, there are a few things we expect you've done:
  • Be sure you apply all updates to the edition of Ubuntu that you're currently running
  • If you're running a dynamic website, put it into maintenance mode, just to be on the safe side
  • You should probably read the release notes for Ubuntu 10.10, which outline any known problems or issues you may encounter during your upgrade
  • We suggest backing up your system (really, we do)
To upgrade Ubuntu Desktop 10.04 to 10.10:
  1. Press Alt+F2 - this will open a "run" window
  2. In this box, type (without quotes) "update-manager -d"
  3. This will open the update manager window - at the top of this window, you should see "New Ubuntu release '10.10' is available": click the "Upgrade" button next to that.
  4. Proceed with the on screen instructions
To upgrade Ubuntu Server 10.04 to 10.10:
  1. Install the package "update-manager-core" by typing the following: sudo apt-get install update-manager-core
  2. Type the following to allow update-manager to offer the option of upgrading to regular releases, not just "LTS" releases: sudo sed 's/Prompt=lts/Prompt=normal/g' -i /etc/update-manager/release-upgrades
  3. Now, simply tell your system to start upgrading: sudo do-release-upgrade -d
  4. Follow the instructions on screen
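If you'd like to sanity-check the sed command from step 2 before pointing it at the live file, you can rehearse it on a throwaway sample; the file contents below are illustrative, not a full copy of the real release-upgrades file:

```shell
# Rehearse the Prompt edit from step 2 on a scratch sample file,
# rather than the live /etc/update-manager/release-upgrades.
printf '[DEFAULT]\nPrompt=lts\n' > /tmp/release-upgrades.sample
sed -i 's/Prompt=lts/Prompt=normal/g' /tmp/release-upgrades.sample
grep '^Prompt' /tmp/release-upgrades.sample   # prints: Prompt=normal
```

Once you're satisfied it does what you expect, run the real command with sudo against the actual file.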
And that covers the upgrade process! Hope this helped out!
