Seeking a board seat at the Open Source Initiative

I’ve stepped up to be considered for a seat on the Board of the Open Source Initiative.

Why would I want to do this? Simple: most of my technology-based career has been made possible by the existence of FOSS technologies. It goes all the way back to graduate school (Oregon State University, 1988), where I worked on a technology called TCP/IP, building it for the OS/2 operating system as part of my MSEE thesis. The existence of newsgroups such as comp.os.unix, comp.os.tcpip and many others on Usenet gave me the chance to learn, craft and produce networking code that was globally usable. If I had not had access to the code available on those newsgroups, I would have been hard-pressed to complete my thesis work. The licensing of the code then was uncertain and arbitrary and, thinking back, there was not much evidence that one could actually repurpose the code for anything one wanted.

My subsequent involvement in many things back in Singapore – the formation of the Linux Users’ Group (Singapore) in 1993 and many others since then – was only doable because source code was available for anyone to do with as they pleased and to contribute back to.

Suffice to say, when the Open Source Initiative was set up twenty years ago in 1998, it was a watershed event, as it meant that the then Free Software movement now had an accompanying, marketing-grade branding. This branding has helped spread the value and benefits of Free/Libre/Open Source Software for one and all.

Twenty years of the OSI have helped spread the virtue of licensing code in a manner that benefits recipients, participants and developers alike – a win-win-win. This idea of openly licensing software was the inspiration for the formation of the Creative Commons movement, which serves to provide Free Software-like rights, obligations and responsibilities for non-software creations.

I feel that we are now at a very critical time to make sure that there is increased awareness of open source, and we need to build and partner with people and groups within Asia and Africa around FOSS licensing issues. Collectively, we need to ensure that up-and-coming societies and economies stand to gain from the benefits of collaborative creation, adoption and use of FOSS technologies for the betterment of all.

As an individual living in Singapore (and in Asia by extension), working in the technology industry, and given the extensive engagement I have with various entities:

I feel that contributing to the OSI would be the next logical step for me. I want to push for wider adoption and use of critical technology for all to benefit from, regardless of economic standing. We have even more compelling things to consider: open algorithms, artificial intelligence, machine learning and so on. These are going to be crucial for societies around the world, and open source has to be the foundation that helps build them from an ethical, open and non-discriminatory angle.

With that, I seek your vote for this important role.  Voting ends 16th March 2018.

I’ll be happy to take questions and considerations via twitter or here.


Wireless@SGx for Fedora and Linux users

Eight years ago, I wrote about the use of Wireless@SGx being less than optimal.

I must acknowledge that there have been efforts to improve the access (and speeds), to the extent that earlier this week I was able to use a Wireless@SGx hotspot to be on two conference calls, and it worked so well that for the two hours I was on, there was hardly an issue.

I tweeted about this and kudos must be sent to those who have laboured to make this work well.

The one thing I would want the Wireless@SG people to do is to provide a full(er) set of access instructions that includes Linux environments (Android is Linux, after all).

I am including a part of my 2010 post here for the configuration aspects (on a Fedora desktop):

The information is trivial. This is all you need to do:

	- Network SSID: Wireless@SGx
	- Security: WPA Enterprise
	- EAP Type: PEAP
	- Sub Type: PEAPv0/MSCHAPv2

and then put in your Wireless@SG username@domain and password. I could not remember my iCell id (I have not used it for a long time), so I created a new one – they needed me to provide my cellphone number so that they could SMS me the password. Why do they not provide a web site to retrieve the password?

Now, from the info above, you can set this up on a Fedora machine (it would be the same for Red Hat Enterprise Linux, Ubuntu, SUSE etc.) as well as on any other modern operating system.
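If you prefer doing this from the command line, a rough NetworkManager (nmcli) equivalent is sketched below. Treat it as a sketch rather than gospel: the connection name is arbitrary, and you should substitute your own username@domain and password (nmcli will store the password in the connection profile):

nmcli con add type wifi ifname "*" con-name "Wireless@SGx" ssid "Wireless@SGx"
nmcli con modify "Wireless@SGx" wifi-sec.key-mgmt wpa-eap 802-1x.eap peap 802-1x.phase2-auth mschapv2 802-1x.identity "username@domain" 802-1x.password "your-password"
nmcli con up "Wireless@SGx"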

I had to create a new ID (it appears that iCell is no longer a provider), and apart from that, everything else is the same.

Thank you for using our tax dollars well, IMDA.

Three must-haves in Fedora 26

I’ve been using Fedora ever since it came out back in 2003. The developers of Fedora and the greater community of contributors have been doing an amazing job of incorporating features and functionality that subsequently find their way into the downstream Red Hat Enterprise Linux distributions.

There is a lot to cheer Fedora for: GNOME, NetworkManager, systemd and SELinux, just to name a few.

Of all the cool stuff, I would particularly like to call out three must-haves.

a) Pomodoro – A GNOME extension that I use to ensure that I get the right amount of time breaks from the keyboard. I think it is a simple enough application that it has to be a must-have for all. Yes, it can be annoying that Pomodoro might prompt you to stop when you are in the middle of something, but you have the option to delay it until you are done. I think this type of help goes a long way in managing the well-being of all of us who are at our keyboards for hours.

b) Show IP: I really like this GNOME extension, for it gives me, at a glance, any of the long list of IPs that my system might have. This screenshot shows ten different network endpoints, and the IP number at the top is the public IP of the laptop. While I can certainly use the command “ifconfig”, it is nice to have the needed info right on the screen while I am on the desktop.
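If you are at a terminal instead, the modern replacement for “ifconfig” gives a similar at-a-glance view (one line per interface):

ip -brief addr show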



c) usbguard: My current laptop has three USB ports and one SD card reader. When it is docked, the docking station has a bunch more USB ports. The challenge with USB ports is that they are generally completely open: one can insert essentially any USB device and expect the system to act on it. While that is a convenience, the possibility of abuse is increasing, given rogue USB devices such as the USB Killer, so it is probably a better idea to deny, by default, all USB devices that are plugged into the machine. Fortunately, since 2007, the Linux kernel has had the ability to authorise USB devices on a device-by-device basis, and the tool usbguard allows you to do this via the command line or via a GUI – usbguard-applet-qt. All in, I think this is another must-have for all users. It should be set up with default deny, and the UI should be installed by default as well. I hope Fedora 27 onwards will do that.
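For anyone who wants to try it, a minimal setup looks something like the following sketch (package names as found in Fedora 26; the device id in the last line is only an example – use the ids that “usbguard list-devices” reports on your machine):

sudo dnf install usbguard usbguard-applet-qt
sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'   # allow what is currently plugged in
sudo systemctl enable --now usbguard                               # start the daemon and keep it on at boot
usbguard list-devices                                              # see what is connected, and whether it is allowed or blocked
sudo usbguard allow-device 7                                       # example: permit the device with id 7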

So, thank you Fedora developers and contributors.



Quarter Century of Innovation – aka Happy Birthday Linux!

(Screenshot: Linus Torvalds’ original Linux announcement post on comp.os.minix)

Happy Birthday, Linux! Thank you Linus for that post (and code) from a quarter of a century ago.

I distinctly remember coming across the post above on comp.os.minix while I was trying to figure out something called 386BSD. I had been following the 386BSD development by Lynne Jolitz and William Jolitz back when I was in graduate school at OSU. I am not sure where I first heard about 386BSD, but it could have been in some newsgroup or in BYTE magazine (unfortunately I can’t find any references). Suffice to say, the work on 386BSD was subsequently documented in Dr. Dobb’s Journal from around 1992. Fortunately, the good people at Dr. Dobb’s Journal have placed their entire contents on the Internet, and the first article on the port of 386BSD is now online.

I was back in Singapore by then and was working at CSA Research, building networking functionality for a software engineering project. The development team had access to a SCO Unix machine, but because we did not buy “client access licenses” (I think that was what they were called), we could only have exactly two users – one on the console via X-Windows and the other via telnet. I was not going to suggest to management that we buy the additional access rights (I was told it would cost S$1,500!!) and instead tried to find out why the 3rd and subsequent login requests were being rejected.

That’s when I discovered that SCO Unix was doing some form of access locking as part of the login process used by the built-in telnet daemon. I figured that if I could replace the telnet daemon with one that did not do the check, I could get as many people as needed telnetting into the system and using it.

To create a new telnet daemon, I needed the source code and a way to compile it. SCO Unix never provided any source code. I managed, however, to get the source code to a telnet daemon (though I can no longer recall exactly where from).

Remember that in those days, there was no Internet access in Singapore – no TCP/IP access anyway. The only way to the Internet was via UUCP (and Bitnet at the universities). I used DEC’s ftpmail service (an ftp-via-email gateway run by Digital Equipment Corporation) to go out and pull in the code and send it to me via email in 64K uuencoded chunks. Slow, but hey, it worked and it worked well.
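For the curious, the reassembly on the receiving end looked roughly like this – the file names are purely illustrative; the real work was saving each mail body, stripping the headers, and keeping the parts in order:

cat part01 part02 part03 > telnetd.uu    # concatenate the uuencoded chunks in order
uudecode telnetd.uu                      # recreates the original file named inside the uuencoded data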

Once I got the code, the next challenge was to compile it. We did have the C compiler, but for some reason we did not have the needed crypto library to compile against. That was when I came across the incredible stupidity of labeling cryptography as a munition by the US Department of Commerce. Because of that, we in Singapore could not get the crypto library.

After some checking around, I got to someone who happened to have a full-blown SCO Unix system with the crypto library on it. I requested that they compile a telnet daemon without the crypto library enabled and then send me the compiled binary.

After some to and fro via email, I finally received the compiled telnet daemon without the crypto linked in and replaced the telnetd on my SCO Unix machine. Voilà, everyone else on the office LAN could telnet in. The multi-user SCO machine was now really multi-user.

That experience was what pushed me to explore what I would need to do to make sure that both crypto code and the needed libraries were available to anyone, anywhere. The fact that 386BSD was a US-originated project meant that tying my kite to it would eventually discriminate against me by keeping me from the best of cryptography and, in turn, security and privacy. That was when Linus’ work on Linux became interesting for me.

The fact that this was done outside the US meant that it was not crippled by politics and other shortsighted rules and that if it worked well enough, it could be an interesting operating system.

I am glad that I did make that choice.

The very first Linux distribution I got was from Soft Landing Systems (SLS for short), which I had to get via the same amazingly trusty ftpmail service, which happily replied with dozens of 64K uuencoded emails.

What a thrill it was when I started getting serialized uuencoded emails with the goodies in them. I don’t think I have any of the 5.25″ diskettes onto which I had to put the uudecoded contents. I do remember selling complete sets of SLS diskettes (all 5.25″ ones) for $10 per box (in addition to the cost of the diskettes). I must have sold them to 10-15 people. Yes, I made money from free software, but it was for the labour and “expertise”.

Fast forward twenty-five years to 2016: I have so many systems running Linux (TV, wireless access points, handphones, laptops, set-top boxes etc.) that if I were asked to point to ONE thing that made, and is still making, a huge difference to all of us, I would point to Linux.

The impact of Linux on society cannot be accurately quantified. Linux is like water. It is everywhere, and that is the beauty of it. In choosing the GPLv2 license for Linux, Linus released a huge amount of value for all of humanity. He paid it forward.

It is hard to predict what the next 25 years will mean and how Linux will impact us all, but if the first 25 years are any hint, it cannot but be spectacular. What an amazing time to be alive.

Happy birthday, Linux. You’ve defined how we should be using and adopting technology. You’ve disrupted, and continue to disrupt, industries all over the place. You’ve helped define what it means to share ideas openly and freely. You’ve shown what happens when we collaborate and work together. Free and Open Source is a win-win for all, and Linux is the Gold Standard of that.

Linux (and Linus), you’ve done well. Thank you!

This is quite a nice tool – magic-wormhole

I was catching up on the various talks at PyCon 2016 held in the wonderful city of Portland, Oregon last month.

There is lots of good content from PyCon 2016 available on YouTube. What particularly struck me was what one could call a mundane tool for file transfer.

This tool, called magic-wormhole, allows any two systems, anywhere, to send files to each other (via an intermediary), fully encrypted and secured.

This beats doing a scp from system to system, especially if the receiving system is behind a NAT and/or firewall.

I manage lots of systems for myself as well as for the work I do at Red Hat. Over the years, I’ve settled into a good workflow for when I need to send files around, but all of it involved techniques like serving files over http, using scp, and even miredo.

But to me, magic-wormhole is easy enough to set up, and does the transfer encrypted end to end, that I think it deserves a much higher profile and wider use.

On the Fedora 24 systems I have, I had to ensure that the following were all set up and installed (assuming you already have gcc installed):

a) dnf install libffi-devel python-devel redhat-rpm-config

b) pip install --upgrade pip

c) pip install magic-wormhole

That’s it.
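Using it is just as simple. A quick sketch (the file name and the code shown are just examples – wormhole generates a fresh one-time code for each transfer):

wormhole send ./photos.tar.gz      # on the sending machine; prints a code such as "7-crossover-clockwork"
wormhole receive                   # on the receiving machine; type in that code when prompted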

Now I would want to run a server to provide the intermediary function instead of depending on the goodwill of Brian Warner.


UEFI and Fedora/RHEL – trivially working.

My older son just enrolled in my alma mater, Singapore Polytechnic, to do Electrical Engineering. It is really nice to see that he has an interest in that field and, yes, it makes me smile as well.

So, as part of the preparations for the new program, the school requires the use of some software as part of the curriculum. Fortunately, getting a computer was not an issue per se, but what bothered me was that the school “is only familiar with windows”, and so the applications needed are also meant to run on Windows.

One issue led to another, and eventually we decided to get a new laptop for his work in school. Sadly, the computer comes only with Windows 8.1 installed and nothing else. The machine has ample disk space (1TB), and the system was set up with two partitions – one for the Windows stuff (about 250G) and the second as the “D: drive”. Have not seen that in years.

I wanted to make the machine dual-bootable and went about planning to repartition that second partition into two, with about 350G allocated to running Fedora.

Then I hit an issue. The machine was installed with Windows using UEFI. While UEFI has some good traits, it unfortunately throws off those who want to install another OS alongside – i.e., to dual-boot.

Fortunately, Fedora (and RHEL) can be installed into a UEFI enabled system. This was taken care of by work done by Matthew Garrett as part of the Fedora project. Matthew also received the FSF Award for the Advancement of Free Software earlier this year. It could be argued that perhaps UEFI is not something that should be supported, but then again, as long as systems continue to be shipped with it, the free software world has to find a way to continue to work.

The details around UEFI and Fedora (and RHEL) are all documented in the Fedora Secure Boot pages.
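As an aside, if you want to confirm from within a running Fedora system whether Secure Boot is actually enabled, the mokutil tool can report it (a small sketch, assuming the mokutil package is installed):

mokutil --sb-state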

Now on to describing how to install Fedora/RHEL into a UEFI-enabled system:

a) If you have not already done so, download the Fedora (or RHEL) ISO from its respective download page.

b) With the ISOs downloaded, if you are running a Linux system, you can use the following command to create a bootable live USB drive with the ISO:

dd  if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb

assuming that /dev/sdb is where the USB drive is plugged into. The most interesting thing about the ISOs from Fedora and RHEL is that they are already set up to boot into a UEFI enabled system, i.e., no need to disable in BIOS the secure boot mode.
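One word of caution before running dd: it will overwrite whatever device you point it at, so double-check the device name first. A small sketch, assuming the USB drive really does show up as /dev/sdb:

lsblk -o NAME,SIZE,MODEL                                           # confirm which device is the USB drive
sudo dd if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb bs=4M
sync                                                               # flush everything to the drive before unplugging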

c) Boot up the target computer via the USB drive.

d) In the case of my son’s laptop, I had to repartition the “D: drive”, so after booting up from the USB device, I did the following:

i) In the Fedora live session, download and install gparted (sudo yum install gparted).

ii) start gparted and resize the “D: drive” partition. In my case, it was broken into 2 partitions with about 300G for the new “D: drive” and the rest for Fedora.

e) Once the repartitioning is done, go ahead and choose the “Install to drive” option and follow the screen prompts.

Once the installation is done, you can safely reboot the machine.

You will be presented with a boot menu to choose the OS to start.



A helper note for family and friends about your connectivity to the Internet from July 9 2012

This is a note targeted at family and friends who might find that they are not able to connect to the Internet from July 9, 2012 onwards.

This only affects those whose machines are running Windows or Mac OS X and have a piece of software called DNSChanger installed. DNSChanger modifies a key part of the way a computer discovers other machines on the Internet (the Domain Name System, or DNS).

Quick introduction to DNS:

For example, say you want to visit the CNN website, You type this in your browser and magically, the CNN website appears in a few seconds. The way your browser figured out how to reach the server was the following:

a) The browser took the domain name and did what is called a DNS lookup.

b) What it would have received from the DNS lookup is a mapping of the name to a bunch of numbers. In this case, it would have received something like:        60    IN    A        60    IN    A        60    IN    A        60    IN    A

c) The numbers in the lines above are the Internet Protocol (IP) numbers of the servers on which resides. You notice that there is more than one IP number. That is for managing requests from millions of systems and not having to depend on only one machine to reply. This is good network architecture. For fun, I also looked up a second domain. Its answer had five IP #s associated with it, but the first line of the answer was a CNAME (which stands for Canonical Name) record. What that means is that the name I looked up is really an alias for another name, and it is that other name which in turn has the five IP #s associated with it.

d) The beauty of this is that in a few seconds you got to the website you wanted without having to remember the IP # that is needed.

Why is this important? If you have a cell phone, how do you dial the numbers of your family and friends? Do you remember their respective phone numbers by heart? Not really, or at least not anymore. You probably know your own number and those of a small, close group (your home, your work, your children, spouse, siblings). Even then, their names are in your contact book, and when you want to call (or text) them, you just punch in their names and your phone will look up the number and place the call.

The difference between your cell phone directory and the DNS is that you control what is in your phone directory. So, a name like “Wife” in your phone could point to a phone number that is very different from the one behind a similar name in your friend’s phone directory. That is all well and good.

But on the global Internet, we cannot have name clashes, and that is why domain names are such hot things and why people snapped up a very large chunk of names during the rush of the late 1990s.

Now on to the issue at hand

So, what’s that got to do with this alarmist issue of connecting to the Internet from July 9, 2012?

Well, it has to do with the fact that there was a piece of software – malware in this case – that got installed on machines running Windows and Mac OS X. In all computers, the magic to do the DNS lookup is maintained by a file which contains information about which Domain Name Server to query when presented with a domain name like

For example, on my laptop (which runs Fedora), the file that directs DNS lookups is called /etc/resolv.conf. It is the same file on Mac OS X, and I think there is something similar in the Windows world as well. Fedora and Mac OS X share a common Unix heritage, and so many files are in common.

The contents of my /etc/resolv.conf file is:

# Generated by NetworkManager
search lan

The file is automatically generated when I connect to the network, and the crucial line is the one that begins with “nameserver”. In my case, it points to my FonSpot wireless access point. But what is interesting is that my FonSpot access point is not a DNS server per se. In the setup of the FonSpot, I’ve got it to forward domain name lookups to Google’s public DNS servers, whose IP #s are and

Huh? What does this mean? Simply put, when I type in my browser, that name’s IP # is looked up by my browser asking the nameserver – which is the FonSpot – and the FonSpot in turn asks for an answer. If does not know, hopefully will give my browser an IP # to use. Eventually, when an IP # is found, my browser uses that IP # and sends a connection request to that site. All of this happens in milliseconds, and when it all works, it looks like magic.

What if you don’t get to the site? What if the entry in the /etc/resolv.conf file pointed to some IP # belonging to a malicious entity that wanted to “hijack” your web surfing? There is a legitimate use for this sort of redirection. For example, when you connect to a public wifi access point (like Wireless@SG), you will initially get a DNS nameserver entry that belongs to the wifi access provider. Once you have successfully logged into that access point, your DNS lookups will be properly directed. This technique is called a “captive portal”. My FonSpot is a captive portal, btw.

The issue here is that machines that have the DNSChanger malware have their DNS lookups hijacked and directed elsewhere. See this note by the US Federal Bureau of Investigation about it.

It appears that the DNSChanger malware had set up a bunch of IP #s to maliciously redirect all access to the Internet. If your /etc/resolv.conf file has nameserver entries that contain numbers in the following ranges (as published by the FBI): to to to to to to

you are vulnerable.
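On a Linux or Mac OS X machine, a quick way to see what your system is currently using is to look at the nameserver lines directly and compare them against the ranges above:

grep '^nameserver' /etc/resolv.conf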

Here’s a test I did with the 1st of those IP#s on my fedora machine:

[harish@vostro ~]$ dig @

; <<>> DiG 9.9.1-P1-RedHat-9.9.1-2.P1.fc17 <<>> @
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34883
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 4, ADDITIONAL: 5

; EDNS: version: 0, flags:; udp: 4096
;            IN    A

;; ANSWER SECTION:        464951    IN    CNAME
                 241       IN    CNAME  252       IN    A

;; AUTHORITY SECTION:        32951    IN    NS        32951    IN    NS        32951    IN    NS        32951    IN    NS

;; ADDITIONAL SECTION:        33061    IN    A        33061    IN    A        317943   IN    A        33297    IN    A

;; Query time: 305 msec
;; WHEN: Sun Jul  8 21:40:07 2012
;; MSG SIZE  rcvd: 242

Some explanation of what is shown above. “dig” is a command (“Domain Information Groper”) that allows me, from the command line, to see what a domain’s IP address is. With the extra “@” argument – the first of the rogue IP #s listed above – I am telling the dig command to use that address as my domain name server and to return the IP for the domain I queried. That address is currently being run as a “clean” DNS server by those who have been asked to do so by the FBI.

Hence, what will happen on July 9th 2012 is that the arrangement put in place at the FBI’s request, under which those addresses keep giving replies, will expire. Therefore, the command I executed above on July 8th 2012 will no longer return a valid IP number from July 9th 2012 onwards. The Internet itself will keep working, but people whose systems have been compromised to point to the bad-but-made-to-work-OK DNS servers will find that they can’t seem to get to any site easily by using domain names. If they instead use IP #s, they can get to the sites with no issue.

A quick way to check if your system needs fixing is to visit one of the DNSChanger check-up websites now. If it reports OK, then your system’s /etc/resolv.conf (or the equivalent for those still running Windows) is not affected.

See the announcement from Singapore’s CERT on this issue.

FUDCon Kuala Lumpur 2012

It is wonderful to see the Fedora Users and Developers Conference kick off in Kuala Lumpur today, May 18 2012. The plan was for me to attend, do a keynote and also pitch a talk for the barcamp. But Murphy was watching how everything was coming together and pulled the rug from under me on Wednesday: I experienced what I found out later to be a “tennis calf”.

The symptoms were 100% spot on; I felt something hit my calf, followed by a pull. I quickly arranged to visit a sports doctor, who advised me on what needed to be done and recommended that I not travel for the next two to three days. Bummer. I was so looking forward to being among the Fedora community flying in from Europe, Australia, Vietnam, India, Sri Lanka, Bangladesh etc.

Among the things I wanted to talk about at FUDCon KL were the following:

  1. A demo of the Plugable USB 2.0 docking station that turns a Fedora 17 machine (server, desktop, laptop – does not matter) into a multi-seat Linux environment. I bought a pair from Amazon and received them on Wednesday (shipped to Singapore via a forwarding service), and they worked exactly as stated – plug the USB cable into the laptop’s USB port, have a VGA monitor, USB keyboard and mouse plugged into the docking station, and voilà, a fresh GNOME login screen. Amazing. You can even do an audio chat and watch streaming video via this setup. Really good stuff, and kudos to the developers for mainstreaming the code into the Linux kernel and working with the Fedora devs to make this work out of the box on Fedora 17. What was really amazing from my point of view was that this works even when a machine is booted from a Fedora 17 LiveCD/USB. While this might suggest that the K12LTSP project is no longer needed, I think there are clear areas where the two complement each other.
  2. My journey in OpenShift. I wanted to share my learnings about OpenShift and Git and all the associated stuff. More importantly, the fact that OpenShift is the technology being used for a 24-hour programming contest in Singapore called code::XtremeApps was important to share as well, to encourage international participation in the contest. I am hopeful that this blog post will trigger some interest.

I guess all is not lost. The show has to go on, and I am glad to have facilitated a lot of it. But the main kudos has to go to the Malaysian Fedora Ambassadors, who managed to pull this off in the eight weeks since they were awarded the hosting rights!

And it’s live now – SCO Open Server 5.0.5 running in a RHEL 6 KVM

As promised earlier, the final bits of getting an application that runs on the old hardware onto the VM are now all done. I tried to install the app afresh, but I really did not want to spend too much time figuring out all its nuances. Since this is really an effort that will eventually see the app being replaced at some future date, I wanted to get it done easily.

So, over the last long weekend, I did the following:

a) Created a brand new VM running SCO Open Server 5.0.5 on the RHEL 6.2 machine. The specs of the VM are: 2GB RAM, 8GB disk, qemu (not kvm), i686, with the network card set to PC-Net and the video set to VGA. These are the settings that best allowed the SCO installation in the VM to complete.
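For reference, creating a similar VM from the command line would look roughly like the sketch below. Treat it as an approximation – the VM name, ISO path and exact option spellings (which vary a little across virt-install versions, especially on RHEL 6) are assumptions:

sudo virt-install --connect qemu:///system \
    --name sco505 --virt-type qemu --arch i686 \
    --ram 2048 --disk size=8 \
    --network network=default,model=pcnet \
    --video vga \
    --cdrom /path/to/sco-openserver-5.0.5.iso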

b) Meanwhile on the old machine, I did a tar of the whole system – “tar cvf wholesystem.tar /”. This is probably not the best way to do it, but hey, I did not want to spend time just picking what I wanted and what I did not need from the old machine. The resulting “wholesystem.tar” file was about 2G in size.

c) Ftp’ed the wholesystem.tar file to the VM and did an untar of it on to the VM – “cd /; tar xvf /tmp/wholesystem.tar “. This resulted in a VM that could boot, but needed some tweaks.

d) The tweaks were:

  1. Changing the network card to reflect the VM’s settings
  2. Changing the IP#
  3. Disabling the mouse on the VM

e) SCO is msft-ish (or maybe msft learned it from SCO) in that the tool used to make these changes, “scoadmin”, needs the kernel to be rebuilt after the changes are done, which then necessitates rebooting the VM to pick up the new values.

f) Edited the /etc/hosts file to reflect the new IPs and added a line in the /etc/rc.d/8/userdef file to set the default route on the VM: “route add default” followed by the gateway’s IP #.

The VM’s IP was set accordingly, and in the /etc/resolv.conf file the nameservers were set to and (Google’s public DNS).


Printing:

a) The old machine had two printers – an 80-column and a 132-column dot matrix printer – connected to its serial and parallel ports. I did not want to deal with this for the VM and got hold of two TP Link PS110P print servers. What’s nice about these is that they are trivial to work with (they are running Linux anyway), and by plugging them into the printers (even the serial printer had a parallel port), both printers were on the network, so printing from the SCO VM became trivial.

b) Configuring the SCO VM to print to the network printers was done using the rlpconf command. The TP Link print server has an amazing array of options; I picked the LPR option and the LPT0 and LPT1 device queues on the two TP Link print servers. While scoadmin has a printer settings section, for some reason the remote printers set up by it never quite worked. In any case, rlpconf edits the /etc/printcap file to reflect the remote printers, and that is all that is needed. Here’s what /etc/printcap looked like after the rlpconf command was run:

cat /etc/printcap
# Remote Line Printer (BSD format)
#       :lp=:rm=rhel6:rp=rhel6-pdf:sd=/usr/spool/lpd/rhel6-pdf:

The IP #s were set to those of the TP Link print servers, along with their respective print spools.

c) So, once that was done, running “lpstat -o all” on the VM shows the remote printer status:

#lpstat -o all
lp1 is available ! (06,05,02,000000|01|448044|443364|04,02,02|8.2,8.3)
lp1 is available ! (03,02,03,000000|01|450384|445932|04,02,01|8.2,8.3)

Networking issues:

Initially, I had set up the VM using the default networking setting for KVM. The standard networking in KVM assumes that the VM is going to go out to the network and not run as a server per se. But this VM was going to be accessed by other machines (not just the RHEL 6 host) on the office LAN, so the right thing to do was to set up a bridged network instead of a NATed network. RHEL 6.2 does not, by default, have bridging set up, and I think that needs to change. NATing is fine, but for the VM to be accessed from systems other than the host, additional firewall rules have to be set up if it is NATed, whereas on a bridge a one-liner iptables rule is enough: “iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT”.

I think the dialog box that sets up the VM via virt-manager should add an option asking whether you need a bridged network. The option is there, but it is not obvious. So follow the bridging instructions carefully – they work.
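For the record, here is a minimal sketch of what those bridging instructions boil down to on RHEL 6. The interface name, bridge name and use of DHCP are assumptions and will differ from machine to machine:

# /etc/sysconfig/network-scripts/ifcfg-eth0

# /etc/sysconfig/network-scripts/ifcfg-br0

# then restart networking and point the VM's NIC at br0 in virt-manager
service network restart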

Well, that was it. The SCO Open Server 5.0.5 VM, with the application that was needed, is now running happily on a RHEL 6.2 machine, and printing goes over the network to a couple of print servers.

I must, once again, take my hat off to the awesome open source developers of KVM, QEMU, BOCHS etc. for the wonderful way all these technologies have come together in a Linux kernel, fully supported by Red Hat in Red Hat Enterprise Linux. There is an enormous amount of value in all of this – even a premium subscription for this RHEL installation is a fraction of the true value derived. The mere fact that a 20th-century SCO Open Server can now be made to run in perpetuity on a KVM instance is mind-boggling (even if Red Hat does not officially support this particular setup).