why unix


Some people ask why, in this day and age, we still use command lines rather than GUIs. The simple answer is along the lines of "keep it simple, stupid": the fewer the lines of code, the lower the chance of something failing. qmail is built on the idea that software should be composed of many small programs, each doing a small and simple task, that fit together to serve a larger purpose. Each program has well defined inputs and outputs. This is pretty much the essence of unix software.
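That philosophy is easy to demonstrate in any unix shell: small tools with plain-text inputs and outputs compose into pipelines. A minimal sketch (the sample data is made up):

```shell
# Each tool does one small job; the pipe glues them together.
# sort groups identical lines, uniq -c counts each run,
# sort -rn puts the most frequent first.
printf 'alice\nbob\nalice\ncarol\nalice\n' | sort | uniq -c | sort -rn
```

None of these tools knows about the others; the text stream is the only contract between them.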


A colleague of mine at work noted that one of our boxes had been running an instance of Apache since July 1999; he pointed out that this might be a world record for the length of time an Apache HTTP server has been running.

Apache is well designed, with an efficient and very modular approach. The inputs and outputs of each module are clear and well defined. It's no surprise that a single process can achieve an uptime of this duration on a unix platform.

[edn@www0-rth ~]$ uname -a
SunOS www0-rth* 5.7 Generic sun4m sparc SUNW,SPARCstation-20
[edn@www0-rth ~]$ uptime
  6:22pm  up 3958 day(s),  4:54,  3 users,  load average: 0.10, 0.09, 0.08
[edn@www0-rth ~]$ ps auxwww | grep apache
nobody     744  0.2  1.1 3196 1332 ?        S 17:30:19  0:03 /usr/local/apache/bin/httpd -f /etc/httpd.conf
edn       1100  0.2  0.5  880  568 pts/2    S 18:23:00  0:00 grep apache
nobody     958  0.2  1.1 3180 1304 ?        S 18:08:55  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1081  0.2  1.1 3180 1304 ?        S 18:22:33  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     805  0.1  1.1 3196 1332 ?        S 17:40:34  0:02 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     957  0.1  1.1 3180 1304 ?        S 18:08:33  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1016  0.1  1.1 3180 1304 ?        S 18:14:24  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     954  0.1  1.1 3180 1304 ?        S 18:05:52  0:01 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     981  0.1  1.1 3180 1304 ?        S 18:09:23  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     990  0.1  1.1 3180 1304 ?        S 18:12:37  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1058  0.1  1.1 3180 1304 ?        S 18:20:12  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     735  0.1  1.1 3196 1332 ?        S 17:29:59  0:03 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     712  0.1  1.1 3180 1304 ?        S 17:26:58  0:03 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     798  0.1  1.1 3196 1332 ?        S 17:39:39  0:02 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     866  0.1  1.1 3180 1304 ?        S 17:52:36  0:01 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1027  0.1  1.1 3180 1304 ?        S 18:15:35  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1060  0.1  1.1 3180 1304 ?        S 18:22:02  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1017  0.1  1.1 3180 1304 ?        S 18:14:41  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1029  0.1  1.1 3180 1304 ?        S 18:17:35  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     787  0.1  1.1 3180 1304 ?        S 17:39:13  0:02 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1026  0.1  1.1 3196 1332 ?        S 18:15:11  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     955  0.1  1.1 3180 1304 ?        S 18:06:10  0:01 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     895  0.1  1.1 3180 1304 ?        S 17:56:54  0:01 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     982  0.1  1.1 3180 1304 ?        S 18:09:55  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     989  0.1  1.1 3180 1304 ?        S 18:12:13  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     991  0.1  1.1 3180 1304 ?        S 18:13:53  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     992  0.1  1.1 3180 1304 ?        S 18:13:54  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1028  0.1  1.1 3180 1304 ?        S 18:16:27  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     745  0.1  1.1 3180 1308 ?        S 17:31:04  0:03 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     774  0.1  1.1 3196 1332 ?        S 17:38:58  0:02 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     993  0.1  1.1 3180 1304 ?        S 18:13:54  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     956  0.1  1.1 3196 1332 ?        S 18:08:17  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody     806  0.1  1.1 3180 1304 ?        S 17:41:06  0:02 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1098  0.1  0.7 3164  856 ?        S 18:22:51  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
nobody    1059  0.1  1.1 3180 1304 ?        S 18:21:45  0:00 /usr/local/apache/bin/httpd -f /etc/httpd.conf
root      2560  0.0  1.7 3140 2120 ?        S   Jul 15 17:11 /usr/local/apache/bin/httpd -f /etc/httpd.conf
[edn@www0-rth ~]$ ls -al /proc/ | grep 2560
dr-x--x--x   5 root     other        736 Jul 15  1999 2560

It carried on running just fine until 2012 or 2014, when a panic finally stopped it.


With UNIX and Linux it's entirely possible to LD_PRELOAD a library that wraps all the network syscalls and routes them through a SOCKS library. I don't know of anything remotely comparable on Windows.

Consider a corporate network behind a firewall. On UNIX all you need is ssh and something such as tsocks: tsocks intercepts the network system calls and passes them through an ssh -D 1085 SOCKS tunnel, so all your programs go through the firewall you're connecting to. This makes life easy for any road warrior.
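A sketch of the moving parts (the host name and port are illustrative; server and server_port are the real tsocks.conf settings):

```
# 1. open a SOCKS proxy on localhost:1085, tunnelled over ssh
ssh -D 1085 -N user@gateway.example.com &

# 2. point tsocks at it in /etc/tsocks.conf:
#      server = 127.0.0.1
#      server_port = 1085

# 3. prefix anything you want to pass through the firewall
tsocks mutt
```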

When a program has to interface with a set of system calls, such as disk or network, that require a layer of alteration (such as SOCKS or file revisions), it's far easier to use LD_PRELOAD to interpose that change in a simple way than to recompile the software.

The best mail client I've ever used is mutt. It has a text console interface, which means you can run it remotely over ssh without any real concern for bandwidth.


One of the things I've always disliked about Windows and other systems of that nature is that most automation can only be done by linking to DLLs (DLL Hell) and attempting to call functions from there, even for rudimentary tasks. In UNIX/Linux land, however, there are lots of things that make automation a really simple task. Take screen, for example, and if that's no use there's always perl/awk :)


In Windows you're primarily stuck with MS Office (or WordPad). That's what MS wants you to buy into. Even when MS attempts to convince the public that it is cooperating and interacting in open formats, it just can't do it right.

On UNIX/Linux there's a wonderful thing called latex which can be used to output in a variety of formats.

What's very interesting, though, is that things people can do easily and freely on the Linux command line (such as versioning and scripting) MS tries to fold into the document system itself, and the result is difficult to understand because it's very hard to see the inner workings of the document.

Having the document in a structured text format makes a piece of work easier to manage, easier to distribute, and easier to track changes with a diff tool; better still with subversion, which makes for very easy collaboration.
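For instance, two revisions of a plain-text document diff cleanly (the file names and content are invented):

```shell
printf 'Introduction\nUnix is small.\n' > report.v1
printf 'Introduction\nUnix is small and composable.\n' > report.v2

# -u gives a readable, line-by-line record of the change;
# diff exits non-zero when the files differ, hence the || true
diff -u report.v1 report.v2 || true
```

The same diff output can be mailed around, applied with patch, or stored as a commit; none of that works on an opaque binary document.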

Other alternatives: groff, openoffice.org, abiword, lyx.

graphical documentation

graphviz and xfig are processed from text files, and the process outputs a binary graphic. This, just like the text processing tools above, makes for very easy version control. There is also dia, which is very useful and easy to use; however, it is not so easy to collaborate with.
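A graphviz source file is just text, so it diffs and versions like anything else; rendering is a separate step (the graph here is made up):

```shell
# describe the graph in plain text
cat > pipeline.dot <<'EOF'
digraph pipeline {
    input -> filter -> output;   // three nodes, two edges
}
EOF

# render to a binary graphic (requires graphviz to be installed):
# dot -Tpng pipeline.dot -o pipeline.png
```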

Another really great way to display information is gnuplot, which reads values (for example from a data file) and displays them graphically. This comes into big effect when you're monitoring or collecting statistics.
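The gnuplot input is plain text too. A sketch (the data values are invented; the render step is commented out since it needs gnuplot installed):

```shell
# some made-up load-average samples: time value
cat > load.dat <<'EOF'
0 0.10
1 0.22
2 0.15
3 0.31
EOF

# a script describing the plot
cat > load.gp <<'EOF'
set terminal png
set output "load.png"
plot "load.dat" using 1:2 with lines title "load average"
EOF

# gnuplot load.gp   # produces load.png
```

Because both the data and the plot description are text, the whole monitoring setup can live in version control alongside everything else.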

interested in unix

If you're interested in UNIX and have never given it a go, why not get a free shell account from the DMOZ directory?

why linux and not unix

My primary reason for choosing Linux over a more traditional UNIX (*BSD or OpenSolaris) is the availability of ionice. This is a scheduling tool that lets Linux processes run in IO priority classes: realtime, best-effort, or idle (run only when spare IO is available). This makes Linux more of a QoS choice. Even if the system has just one user, that user may choose to background IO-heavy processes in favour of more UI intensive tasks.
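A sketch of how that looks in practice (the scratch file stands in for a real bulk job):

```shell
# create some throwaway data, then compress it in the idle IO class
# (-c 3): the job only gets disk time when nothing else wants it,
# so interactive tasks stay responsive
dd if=/dev/zero of=scratch.dat bs=1k count=64 2>/dev/null
ionice -c 3 gzip -f scratch.dat

# query the IO scheduling class of the current shell
ionice -p $$
```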

Other brilliant reasons for Linux include the wealth of programs available in pre-compiled form from the distribution repositories. Although for some service applications it is nice to control the build from source code to ensure that features don't change without your knowledge, having packages available immediately after install is a huge bonus.

why not vista/windows{7,8,10}

Basically, tonight: ten minutes wasted looking at boot up screens, twiddling my thumbs.

Above I made a tiny error in saying there is no way to tsocks an application on Windows. It turns out there is an application that can do this; it just appears it can't handle "big" things like Internet Explorer or Firefox, which is a shame as they're quite heavily used.

There are other contributing factors to why you might not want to use Windows as your server or desktop operating system:

  1. WordPad vulnerable to remote code execution
  2. DLL library loading allows attackers to use libraries in the current working directory

There are other things that you should consider if you're thinking about switching OS, particularly if you're going to switch from something like MacOS to Windows or Linux/BSD to Windows.

In my opinion, Windows is not ready for the desktop. The main contributing factor is my strong belief that the end user should not be left on the internet with an insecure OS, and Windows out of the box is very insecure.

If an end user does not have much computing awareness then Windows is not the ideal OS. Almost any other OS is more suitable, and the choice the end user goes with should then probably be dictated by cost.

Typical faults are:

  1. spyware/malware
  2. viruses
  3. worms

The above are mainly due to flaws in the OS or browser. Even with something like Firefox there are still potential problems due to the end user clicking through a run dialog.

The best route around this is to give the end user an OS which is probably not going to be able to execute malware.

On the other hand, Linux has hardly any viruses; thanks are due to the author of that page for compiling such a detailed list of them.

getting back some time

Another annoyance for me is the combination of the "run" registry keys, the annoyingly long time it takes to start up and hand control to the user, and the amount of time it takes to shut down.

Despite repeated cleaning of the "run" registry keys I've not found it a practical means of reducing the startup times.

Packages are yet another huge disadvantage of Windows: despite people touting software availability for the MS platform, it's surprisingly difficult to install programs without seeking out their publishers' web sites.


With Linux it's often a simple case, when watching a video, of the player asking whether you wish to install additional codecs and offering download choices.

On Windows, however, the options are not presented to the user (as I found out with Windows XP and Vista) and the user has to locate codecs for themselves. This often leads to downloading setup packages and other programs which require administrator privileges to run. This has the scope to leave the computer in an unknown state, as it is not easy to verify the authenticity of such programs.

An example of this is the hunt for codecs in Fedora.


I've come to dislike the update mechanism in Windows. It doesn't seem right to call it an 'update', because it doesn't really 'update' the system; it can only move it forward a single step. I know this because for the past week, each and every time we have booted a Vista laptop it has required more staged update reboots.

If this were truly an 'update' then it should only happen once (just like it does with a Linux package update).

Many times when using Vista or XP I've found my tasks delayed because the system refuses to do anything until the update has been applied. In worse cases I've found the network drivers are disabled part-way through applying updates, rendering the system unusable.

cd writing

There comes a point when I have to give up on Windows. My corporate laptop at work has a "well looked after" Windows XP build, works as a glorified MS Exchange client, and also has a DVD writer.

It's a very good laptop; it was given to me three years ago when I started work with this company. It has a very good hardware build, its case is very sturdy, and I don't think they come much better.

However, when blank media is inserted into the DVD drive, for some reason Nero thinks there is already a session on the disc. I can't find out what has the session open (there's no fuser or similar for this), which is really quite a tricky situation.

It's not really worth my time trying to figure this out as I don't have admin rights on the machine, so a simple dual boot or pendrive with Ubuntu and wodim solves the problem.

This sort of thing makes me eternally frustrated with Windows, as it's pretty much impossible for me to work with in an engineering role where you have to shovel data from point A to point B. Sounds simple, but it's just not fit for purpose.

Stuff just works in Linux, corporate software stagnates.

data recovery

Imagine the situation: you're up late working on a project that desperately has to be in on time tomorrow, and the disk starts making really weird sounds. So what do you do?

  1. Drag and drop all your folders on another disk?

Oh you've only got one OS disk and the OS starts to error at you. Bummer.

Fear not: whatever situation you're in, getting a command prompt open in UNIX and running dd with the bad disk as source and another disk as output is the safest thing you can do right now. Taking the raw contents of the disk will at least allow you to mount, and possibly boot, the disk on more reliable hardware. If you don't have a blank hard disk/usb drive then taking an image to a file will probably suffice for now; maybe pipe it through gzip if space for an uncompressed image isn't available.
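A sketch of the rescue, run here against a scratch file instead of a real failing disk (in real life the source would be something like /dev/sda):

```shell
# stand-in for the dying disk
dd if=/dev/urandom of=bad-disk.img bs=64k count=4 2>/dev/null

# conv=noerror,sync keeps going past read errors, padding bad blocks,
# and gzip shrinks the image if space is tight
dd if=bad-disk.img bs=64k conv=noerror,sync 2>/dev/null | gzip > rescue.img.gz

# the recovered image matches the source byte for byte
gunzip -c rescue.img.gz | cmp - bad-disk.img && echo "image intact"
```

The compressed image can later be unpacked onto a healthy disk, loop-mounted, or picked over with filesystem recovery tools at leisure.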

One thing that's almost guaranteed to cause problems would be attempting the same thing with a Windows disk. Even if you can read the disk and mirror to another, you can probably bet your savings that Windows will blue screen on you.

backup and restore

In brief, you can back up your entire system using a couple of routine commands in your crontab file; restoration is just as easy.

# dump -0 -f - /dev/sda1 | gzip ...

Restoring is pretty much just the reverse:

# restore -rf
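As a crontab sketch (the device, target path and schedule are all illustrative; dump works per filesystem):

```
# m h dom mon dow  command
# level-0 dump of the root filesystem every Sunday at 02:00,
# compressed on the way to the backup area
0 2 * * 0  dump -0 -u -f - /dev/sda1 | gzip > /backup/sda1.0.gz

# restoring into the current directory is the reverse pipe:
#   gunzip -c /backup/sda1.0.gz | restore -rf -
```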


One thing I've forgotten to mention so far on this page is packages and later I will mention source code.

Packages are normally pre-compiled pieces of software which contain, among other things, pre- and post-install scripts. These little scripts often tweak the system to accommodate the new software.

Debian is my favourite distribution for both desktop and server. There are a variety of reasons for this choice; one of the biggest is that the distribution repository contains just about all the software I could ever want, pre-compiled and working perfectly with my system.

source code

Failing the above on some rare occasions I'll want to install something from source code. It's almost always a very straightforward process.

wget thing.tar.gz
tar zxvf thing.tar.gz
cd thing
./configure --{exec-,}prefix=~/source-build && make && make install

It's a few simple steps to get the job done. The advantage of this is that should I wish to make changes and send those to the author it's very simple to do that.

log files

It still amazes me how hard it is to see what's going on when you do something in Windows. For instance, there's no means of doing anything like strace or tcpdump on a base Windows install; sure, the community provides tools, but these aren't always available without having to "trust" a possibly rogue download site to provide you with a copy.

So, if you can't use tcpdump or strace, what can you do to diagnose a fault with a program? Without awk/grep and syslog messages what useful tools are there to make your life easy?
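On unix the triage is a one-liner. A toy example with made-up syslog lines:

```shell
# a few invented syslog entries to triage
cat > messages <<'EOF'
Jul 15 17:11:02 www0 sshd[812]: Accepted publickey for edn
Jul 15 17:11:09 www0 kernel: nfs: server storage0 not responding
Jul 15 17:12:44 www0 sshd[901]: Failed password for root
Jul 15 17:12:59 www0 sshd[903]: Failed password for root
EOF

# pull out the failures
grep -i 'fail' messages

# count messages per daemon (field 5, with pid and colon stripped)
awk '{ tag = $5; sub(/\[.*/, "", tag); sub(/:$/, "", tag); print tag }' messages \
  | sort | uniq -c
```

Swap the toy file for /var/log/messages and the same two lines answer "what is failing, and which daemon is noisiest?" on a real box.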

segregation of role

UNIX systems include sudo out of the box. sudo allows the administrator to delegate privilege escalation, command by command, to individual users. This means that, without handing out full user switching to root/admin, you may permit a single user to run just one or more programs as another user, perhaps root.

Imagine you have an on-call application owner who needs to redeploy their program out of hours: you could give them the ability to run systemctl start/stop/enable/disable appname, or sudo su - app for full application administration.
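A sudoers sketch of that delegation (the user name, unit name and paths are invented; edit such files with visudo):

```
# /etc/sudoers.d/appowner (fragment)
# the on-call owner may manage just their own service, as root
appowner ALL = (root) /usr/bin/systemctl start appname, \
                      /usr/bin/systemctl stop appname, \
                      /usr/bin/systemctl restart appname

# or take over the whole application account
appowner ALL = (root) /bin/su - app
```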

I know of nothing that lets you do this in a Microsoft-supported way. MS has "Just Enough Administration", but it doesn't come close.


Linux, for many users, has long included a Bell–LaPadula-style Mandatory Access Control system. This has saved many admins, many times over. It allows you to permit ONLY a given set of syscalls (and even constrain where those syscalls may reach); a web server can be prevented from talking to /usr/sbin/sendmail. To make life nice and easy, many of these policies can be switched with simple boolean toggles.

Similar models exist by different names in proprietary Unixes too.

Linux is battle tested

The NSA contributed SELinux, and the kernel team maintains the code base. Getting patches merged upstream is the best course of action for everyone involved.

As a result, the Department of Defense can consume a maintained SELinux.

This model works much better than Microsoft Windows running "smart ships", which left a ship and its crew dead in the water.

annoyances with ms specifically

This section points out principles that make working with a Microsoft product particularly painstaking.

For MS, security is second place, "A developer's core job is not to worry about security but to do feature work".

if you paid for the os

You paid for the OS, why are you the AaaS (Audience as a Service) for MS? You shouldn't have to look at the OS as a "freemium" model.

server space fragmentation

Azure uses t-shirt sizes for systems. If you want a machine with 64GB of RAM, 1 CPU core and a 4TB root file system, can you get that in Azure? Probably not. You'll have to pay for more cores than you require, and probably more memory than you require, just to get a disk of the size you need.

Server space is limited; what you've just bought reserves more server than you need, which impacts global warming more than it should.

The server will probably have resources spare, but not in a convenient shape that someone else can utilise if all they wanted was a couple of cores and not much RAM. The only reason to sell server space this way is to force the customer to buy resources they don't need. It also amounts to forced deprecation of smaller machines, which are left unable to cater for capacity demand in one direction only.

Further to the fragmentation problem, MS will happily cut your service off for "high" IO. Remember that in MS terms IO is already limited to 500 IO/sec, which for an SSD is remarkably slow. If your VM continually performs 500 IO/sec then you are likely to find it hung. I'm familiar with this from other providers when the upstream/downstream network is saturated, but never before for disk IO. Often network saturation is a sign of machine compromise, when it turns into a virus mill (Windows, I'm looking at you again).

forced deprecation

In the Azure cloud there are the ASM (Azure Service Management) and ARM (Azure Resource Manager) incompatible groups. They do not share much, and in order to migrate you'll need to duplicate your infrastructure. The clock is ticking: you'll have to update your infrastructure-as-code and watch out for bugs. I can't think of many, if any, userspace-breaking changes in Unix-like systems; it is just not the done thing.

What makes this ridiculous is that the interface is a web API, where your request could easily be directed to the new endpoint; that's what mod_rewrite does, after all.
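That shim is a few lines of Apache config; a sketch (the paths and host name are invented):

```
# forward requests for a retired API path to the replacement endpoint
RewriteEngine On
RewriteRule ^/asm/(.*)$ https://arm.example.com/$1 [R=301,L]
```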

microsoft office

microsoft word

By default, opening a document goes straight into an online reading format that doesn't scale. Literally, images overlap text where they didn't before. Not very useful.

Constant incompatibility with previous Word versions. When I was doing my dissertation I had one version of Word at home (97) and the college had another (2000). Once I had worked on a document in the 2000 version it was impossible to keep the formatting when saving in the 97 format. This led to documents not holding numbering as intended.

Software should always be backwards compatible, or have a means of converting to previous formats. A lecturer at the time suggested Star Office, and that worked. Fortunately Sun opened it up, and OpenOffice became LibreOffice. The two remain fully compatible with each other. They would both like to be number one, but their motives are selfless, unlike MS, where the motives are driven by the shareholders.

This tactic of making the industry communicate only in the latest version is great for sales; it leads people to believe they're not able to function without it.

microsoft outlook

Should you be on the receiving end of spam (perhaps from a rogue MS computer) which has arrived in your MS Outlook client, and you wish to report it, how do you report that spam to your ISP?

There are a couple of ways, but they're not intuitive. The first is to drag and drop the mail into another mail, attaching the spam; this does work. If you wish to see the entire message, headers and body in the same place, life isn't so easy. In fact, I can't think of any way to do this.

The problem is inverted with Hotmail, you cannot forward a message as an attachment, but you can see the message source.

Outlook hides email addresses

I've never yet found an intuitive (or any other) means of extracting a colleague's email address from the address book that comes with Outlook. The task seems impossible. The closest method is to start writing an email to somebody, right click on their name, select send email, and then quickly copy the displayed email address before it vanishes and is replaced with their name. Pathetic and patently broken.

Outlook also hides email

It's quite common for me to find that Outlook, running on a terminal server, displays no received mail to me for days. After some send/receive poking it eventually coughs up the mail that's been hidden from my view. Incredibly useless as a communications tool.

Outlook also hides headers

This isn't 100% true, but it does make life really confusing if you don't know where the header viewing toggle is. This used to be buried in the 'options' area; now it's buried in another 'ribbon'. Why do MS want to make headers such a secret and difficult affair?

windows 8

In order to use the cup holder in your computer (the DVD reader) to play movies, you'll need to fork out for more than just the basic version of Windows. Or get an Ubuntu live disc.

windows xp

Windows XP has a very large install base, yet the software is 10 years old, so Microsoft wants everyone to stop using it because they can't support it any more.

OpenAFS, for example, has been making releases since 2000 and is still supported, receiving regular code updates with a much smaller user base. That's done freely, of course. On the other hand, people were asked to pay a lot of money for XP, and then, without giving customers a sensible alternative, its support was cut off. So your Pentium 4, which was functioning perfectly well, is likely to become some form of spam relay once the updates have ceased.

Apache has continued to update their HTTP server when there are security flaws. The one exception is the 1.3 tree, officially no longer supported by Apache; at least it's still possible to receive community patches, since the software is open source. The advice is to upgrade the HTTP server, and at least you can do this without having to upgrade hardware. The same goes for the kernel: you can upgrade the kernel without needing to upgrade your hardware.

One of the main assumptions that you can make about Unix programs is that they're highly portable. Machine memory isn't usually an issue.


"Cloud" ~= out sourcing.

Most data centres operate with computers in racks. Networks fail sometimes, computers lock up sometimes, so hardware manufacturers add OOB (Out of Band) management cards. These cards allow operators to mount virtual media, control power, view system hardware diagnostics, configure RAID and interact through the serial port. In some cases they allow keyboard, video and mouse through a virtual interface too.

Azure, unlike almost every other "Cloud" provider, has failed to offer these features in its virtual machine service. Software implementation of management cards is trivial to most operators.


Many programs in Windows are single tasking. I don't mean this in the sense that the OS can only run one thing at a time (although often that appears the case) but that the user can only do one thing at a time because the program does not allow for multiple actions to run simultaneously.

Take iTunes as a prime example of this.


Windows was never meant to be a multi-user system. Never. It is a stretch to configure Windows with multiple co-existing user accounts. You cannot switch from one user to another without logging the first out. In some cases Windows can be sold to permit multiple RDP users; I think Server editions allow two, and there was a Terminal Services edition that allowed more. openssh, on the other hand, limits users only by the amount of system memory available. But this is a digression: I'm talking about multiple physical desktop GUI users logged in to one computer.

I continually switch user accounts on this computer without being logged out. The screen saver allows any other user to login without forcing my account off. It's brilliant.

Different users can have their own font setups, their own TTF directory, their own configuration of which fonts to override with which others.

Windows programmers don't think of multiple users. The perception is one user per licence.

if software were food

Software could be described in similar terms to food. In the UK we have the stereotypical fish and chip shop. They take cod, batter it, then fry it and sell it to the consumer. This takes a relatively short time to cook. It's not the healthiest of meals, given the fat content. The buyer doesn't get a great view of the nutritional values; it is just fish, batter and grease from the fryer, normally served with fried chips (fries for American readers). We know it's not good for us.

There is a similar option in a frozen food shop. You can buy from one of the brands of battered fish. You could choose another type of battered fish other than cod, if you wish. You could even buy the fish without batter and cook it in the oven. There is cooked tinned fish, too. Cooking instructions follow normal standards, optional fan oven and an easy to understand best before end date format.

The first paragraph is commercial software: you don't get to look inside, and you're stuck with what's on offer from the fryers. The second option is closer to off-the-shelf open source: you can pick and choose what you want, you can look inside, and you can mix and match using commodity protocols.


Suppose you need to click a point N times in application W. Simple:

xdotool search W windowraise click --repeat N 1

For example, if you wanted to fill your inventory with 99 blocks (select a block, then something you have 99 of already):

xdotool search minetest windowraise click --repeat 99 1

I'm just using this as an example; the tools are ready in your distro's repository, and it's simple to apply a common 'cookie cutter' solution. Use xdotool once, get familiar with it, and it's there for all your other uses.