So, you want to do computer science, huh?

This has been on my mind for a while. Lots of friends have been asking me for advice about pursuing computer science courses (undergrad or grad).

First things first: I do have both CS and EE degrees (from Oregon State University), but those are from the 1980s. The world of computer science has evolved significantly since those days. I got to be on ARPAnet then, and it has evolved into what is the Internet today. In the process, much of the knowledge one needs to do computer science – as an academic program – has also changed, but not evenly across the world.

I do mentor/advise startups, and if any of them come to me with proposals that involve buying hardware, setting up server software and so on, I will promptly throw them out. Create your stuff on the cloud – AWS, Google, Rackspace, DigitalOcean etc. There are lots of them out there. At some point, when your project/start-up ideas have gained some form/shape and you have paying customers, you could consider running your own data centers using Red Hat OpenStack and Red Hat OpenShift, so that you can run your application in-house, in your own data center, or on the public cloud seamlessly.

So, likewise, if anyone is keen on doing a CS degree, do consider the following (not an exhaustive list, but a starting point):

  1. Do look at courses available online (Khan Academy, Coursera, EdX) and understand the breadth and depth of what is out there.
  2. Learning online is a big challenge – especially for adults (at least it is for me). I think I can manage the courses, but juggling a course and life without a supporting group of classmates is a struggle.
  3. If you are already keen on something – like game development or artificial intelligence – do look out for the sites that specialise in those fields. For AI-related material, I particularly like sites built around Jupyter notebooks (which are really the only sane way to do anything with data science).
  4. Make sure you create and manage your own brand via your own blog (like this blog itself) and code repositories (not only code, but also documents, graphics etc.) on hosting sites. Create and curate your thoughts, ideas and considerations on your blog and repositories. I would not recommend LinkedIn as the primary site, but would encourage using it as a place for links to your personal sites, which should always be primary. You must always retain control of your stuff and not be dependent on others.
  5. Explore your various areas of interest and find out who the thought leaders are, including the professors. Contact them and seek their advice. Don't be concerned that they might ignore you – which is just fine – but you never know who will respond and engage, and your path may take a different turn.

If you have any other thoughts, do comment below. I'd love to hear how others would have answered the same question.




And, they are online now

Over a week ago, I was pinged by @l00g33k on Twitter with a picture of a description of a piece of code I wrote in 1982.

That led to a meetup and the reliving of a time when the only high-technology thing I had was a 6502-based single-board computer complete with 2K of RAM. It was a wonderful meetup, and @l00g33k was kind enough to hand over to me a bag with 10 copies of the newsletter published by the Singapore Ohio Scientific Users Group. That was the very first computer user group I joined.

Suffice to say, I did contribute to the newsletter by way of code to be run on the Superboard ][ – all in BASIC.

I've scanned the 10 newsletters and they are now online.

I was really pleased to find in Vol 1 #3 (page 36) a program to generate a calendar. The code is all in BASIC: feed it a year and out comes the calendar for the whole year.
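For fun, the same trick is only a few lines in modern Python using the standard calendar module (a sketch of the idea, not the original BASIC):

```python
import calendar

# Feed it a year and out comes the calendar for the whole year,
# just as the BASIC program did (weeks starting on Sunday here).
year = 1982
print(calendar.TextCalendar(firstweekday=6).formatyear(year))
```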

Another piece of code, in Vol 1 #5 (page 36), is a program to print out a world map. That code was subsequently improved upon and published by another OSUG member to include the actual times of cities – something that could only be done with the addition of real-time clock circuitry on the Superboard ][.

A third program, in Vol 1 #6 (page 26), implemented a morse code transmitter.
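I no longer remember the exact listing, but the heart of any such program is a lookup table from characters to dot-dash patterns; here is a minimal modern sketch of that idea in Python:

```python
# International Morse code for the letters (the original BASIC program
# would have keyed the transmitter from a table much like this one).
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def encode(text):
    """Turn a message into space-separated Morse, skipping unknown characters."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

print(encode("SOS"))  # ... --- ...
```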

I was very happy then (as I am now) that the code is out there, even though none of us whose code was published in the newsletters had any notion of copyright. Code was there to be freely copied and worked on. Yes, a radical idea – one which in 1984 got codified by Richard Stallman's Free Software Foundation.

Seeking a board seat at the Open Source Initiative

I’ve stepped up to be considered for a seat on the Board of the Open Source Initiative.

Why would I want to do this? Simple: most of my technology-based career has been made possible by the existence of FOSS technologies. It goes all the way back to graduate school (Oregon State University, 1988), where I was able to work on a technology called TCP/IP, which I built for the OS/2 operating system as part of my MSEE thesis. The existence of newsgroups such as comp.os.unix, comp.os.tcpip and many others on Usenet gave me a chance to learn, craft and make happen networking code that was globally useable. If I had not had access to the code that was available on the newsgroups, I would have been hard-pressed to complete my thesis work. The licensing of the code then was uncertain and arbitrary and, thinking back, there was not much evidence that one could actually repurpose the code for whatever one wanted.

My subsequent involvement in many things back in Singapore – the formation of the Linux Users' Group (Singapore) in 1993 and many others since – was only doable because source code was available for anyone to do with as they pleased and to contribute back to.

Suffice to say, when the Open Source Initiative was set up twenty years ago, in 1998, it was a watershed event: it meant that the Free Software movement now had an accompanying, marketing-grade branding. This branding has helped spread the value and benefits of Free/Libre/Open Source Software for one and all.

Twenty years of the OSI have helped spread the virtue of licensing code in a manner that serves recipients, participants and developers alike – a win-win-win. This idea of openly licensing software was the inspiration for the formation of the Creative Commons movement, which serves to provide Free Software-like rights, obligations and responsibilities for non-software creations.

I feel that we are now at a very critical time for increasing awareness of open source, and we need to build and partner with people and groups within Asia and Africa around FOSS licensing issues. Collectively, we need to ensure that up-and-coming societies and economies gain from the benefits of collaborative creation, adoption and use of FOSS technologies for the betterment of all.

As an individual living in Singapore (and Asia by extension), working in the technology industry, and having extensive engagement with various entities:

I feel that contributing to the OSI would be the next logical step for me. I want to push for wider adoption and use of critical technology for all to benefit from, regardless of economic standing. We have ever more compelling things to consider: open algorithms, artificial intelligence, machine learning etc. These are going to be crucial for societies around the world, and open source has to be the foundation that helps build them from an ethical, open and non-discriminatory angle.

With that, I seek your vote for this important role.  Voting ends 16th March 2018.

I'll be happy to take questions and comments via Twitter or here.

Wireless@SGx for Fedora and Linux users

Eight years ago, I wrote about the use of Wireless@SGx being less than optimal.

I must acknowledge that there have been efforts to improve the access (and speeds), to the extent that earlier this week I was able to use a Wireless@SGx hotspot to be on two conference calls. It worked so well that for the two hours I was on, there was hardly an issue.

I tweeted about this and kudos must be sent to those who have laboured to make this work well.

The one thing I would want the Wireless@SG people to do is to provide a fuller set of instructions for access, including for Linux environments (Android is Linux, after all).

I am including a part of my 2010 post here for the configuration aspects (on a Fedora desktop):

The information is trivial. This is all you need to do:

	- Network SSID: Wireless@SGx
	- Security: WPA Enterprise
	- EAP Type: PEAP
	- Sub Type: PEAPv0/MSCHAPv2

and then put in your Wireless@SG username@domain and password. I could not remember my iCell ID (I have not used it for a long time), so I created a new one. They needed me to provide my cellphone number to SMS the password. Why do they not provide a web site to retrieve the password?

Now, from the info above, you can set this up on a Fedora machine (it would be the same for Red Hat Enterprise Linux, Ubuntu, SuSE etc.) as well as on any other modern operating system.
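For the record, on a current Fedora those same settings boil down to a NetworkManager keyfile along these lines (the path and key names follow NetworkManager's keyfile format; the identity and password are of course your own, so treat this as a sketch):

```ini
# /etc/NetworkManager/system-connections/Wireless@SGx.nmconnection
[connection]
id=Wireless@SGx
type=wifi

[wifi]
ssid=Wireless@SGx

[wifi-security]
key-mgmt=wpa-eap

[802-1x]
eap=peap
phase2-auth=mschapv2
identity=username@domain
password=your-wireless-sg-password
```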

I had to create a new ID (it appears that iCell is no longer a provider), but apart from that, everything else is the same.

Thank you for using our tax dollars well, IMDA.

Three must haves in Fedora 26

I've been using Fedora ever since it came out back in 2003. The developers of Fedora and the greater community of contributors have been doing an amazing job of incorporating features and functionality that subsequently find their way into the downstream Red Hat Enterprise Linux distributions.

There is a lot to cheer Fedora for: GNOME, NetworkManager, systemd and SELinux, just to name a few.

Of all the cool stuff, I'd particularly like to call out three must-haves.

a) Pomodoro – A GNOME extension that I use to ensure that I get the right amount of time breaks from the keyboard. I think it is a simple enough application that it has to be a must-have for all. Yes, it can be annoying that Pomodoro might prompt you to stop when you are in the middle of something, but you have the option to delay it until you are done. I think this type of help goes a long way in managing the well-being of all of us who are at our keyboards for hours.

b) Show IP: I really like this GNOME extension, for it gives me at a glance any of the long list of IPs that my system might have. This screenshot shows ten different network end points; the IP number at the top is the public IP of the laptop. While I can certainly use the command "ifconfig", it is nice to have the needed info right on the screen while I am on the desktop.



c) usbguard: My current laptop has three USB ports and one SD card reader. When it is docked, the docking station adds a bunch more USB ports. The challenge with USB ports is that they are generally completely open: one can insert essentially any USB device and expect the system to act on it. While that is a convenience, the possibility of abuse is increasing, given rogue USB devices such as USB Killer, so it is probably a better idea to deny, by default, all USB devices that are plugged into the machine. Fortunately, since 2007 the Linux kernel has had the ability to authorise USB devices on a device-by-device basis, and the tool usbguard allows you to do this via the command line or via a GUI – usbguard-applet-qt. All in, I think this is another must-have for all users. It should be set up with default deny, and the UI should be installed by default as well. I hope Fedora 27 onwards will do that.
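For anyone wanting to try usbguard from the command line, the basic flow looks something like this (a sketch; the device number is whatever `usbguard list-devices` reports on your machine):

```shell
# allow everything currently plugged in, then enforce the policy
$ sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'
$ sudo systemctl enable --now usbguard

# when a new device shows up later, inspect it and decide
$ usbguard list-devices --blocked
$ sudo usbguard allow-device 7      # 7 being the number shown in the list
```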

So, thank you Fedora developers and contributors.



Quarter Century of Innovation – aka Happy Birthday Linux!


Happy Birthday, Linux! Thank you Linus for that post (and code) from a quarter of a century ago.

I distinctly remember coming across the post above on comp.os.minix while I was trying to figure out something called 386BSD. I had been following the 386BSD development by Lynne Jolitz and William Jolitz back when I was in graduate school at OSU. I am not sure where I first heard about 386BSD, but it could have been in some newsgroup or in BYTE magazine (unfortunately I can't find any references). Suffice to say, the work on 386BSD was subsequently documented in Dr. Dobb's Journal from around 1992. Fortunately, the good people at Dr. Dobb's Journal have placed their entire contents on the Internet, and the first post of the port of 386BSD is now online.

I was back in Singapore by then and was working at CSA Research, building networking functionality for a software engineering project. The development team had access to a SCO Unix machine, but because we did not buy "client access licenses" (I think that was what they were called), we could have exactly 2 users – one on the console via X-Windows and the other via telnet. I was not going to suggest to the management that we buy additional access rights (I was told it would cost S$1,500!!) and instead tried to find out why the 3rd and subsequent login requests were being rejected.

That's when I discovered that SCO Unix was doing a form of access locking as part of the login process used by the built-in telnet daemon. I figured that if I could replace the telnet daemon with one that does not do the check, I could get as many people telnetting into the system as needed.

To create a new telnet daemon, I needed the source code, and then to compile it. SCO Unix never provided any source code. I managed, however, to get the source code to a telnet daemon (I no longer remember exactly from where).

Remember that in those days there was no Internet access in Singapore – no TCP/IP access, anyway. The only ways to the Internet were via UUCP (and Bitnet at the universities). I used an ftp-via-email service run by Digital Equipment Corporation to go out and pull in the code, which was sent to me via email in 64K uuencoded chunks. Slow, but hey, it worked and it worked well.

Once I got the code, the next challenge was to compile it. We did have the C compiler, but for some reason we did not have the crypto library needed to compile against. That was when I came across the incredible stupidity of labeling cryptography as a munition by the US Department of Commerce. Because of that, we in Singapore could not get the crypto library.

After some checking around, I found someone who happened to have a full-blown SCO Unix system with the crypto library installed. I requested that they compile a telnet daemon without the crypto library enabled and send me the compiled binary.

After some to and fro via email, I finally received the compiled telnet daemon without the crypto linked in and replaced the telnetd on my SCO Unix machine. Voila: everyone else on the office LAN could telnet in. The multi-user SCO machine was now really multi-user.

That experience was what pushed me to explore what I would need to do to make sure that both crypto code and the needed libraries are available to anyone, anywhere. The fact that 386BSD was a US-originated project meant that tying my kite to it would eventually discriminate against me by keeping me from the best of cryptography and, in turn, security and privacy. That was when Linus' work on Linux became interesting to me.

The fact that this was done outside the US meant that it was not crippled by politics and other shortsighted rules, and that if it worked well enough, it could be an interesting operating system.

I am glad that I did make that choice.

The very first Linux distribution I got was from Soft Landing Systems (SLS for short), which I had to get via that same amazingly trusty ftp-by-email service, which happily replied with dozens of 64K uuencoded emails.

What a thrill it was when I started getting serialized uuencoded emails with the goodies in them. I don't think I have any of the 5.25″ diskettes onto which I had to put the uudecoded contents. I do remember selling complete sets of SLS diskettes (all 5.25″ ones) for $10 per box (in addition to the cost of the diskettes). I must have sold them to 10-15 people. Yes, I made money from free software, but it was for the labour and "expertise".

Fast forward twenty five years to 2016, I have so many systems running Linux (TV, wireless access points, handphones, laptops, set-top boxes etc etc etc) that if I were asked to point to ONE thing that made and is still making a huge difference to all of us, I will point to Linux.

The impact of Linux on society cannot be accurately quantified. It is hard. Linux is like water: it is everywhere, and that is the beauty of it. In choosing the GPLv2 license for Linux, Linus released a huge amount of value for all of humanity. He paid it forward.

It is hard to predict what the next 25 years will mean and how Linux will impact us all, but if the first 25 years is a hint, it cannot but be spectacular. What an amazing time to be alive.

Happy birthday, Linux. You've defined how we should be using and adopting technology. You've disrupted, and continue to disrupt, industries all over the place. You've helped define what it means to share ideas openly and freely. You've shown what happens when we collaborate and work together. Free and Open Source is a win-win for all, and Linux is the Gold Standard of that.

Linux (and Linus): you've done well, and thank you!

This is quite a nice tool – magic-wormhole

I was catching up on the various talks at PyCon 2016 held in the wonderful city of Portland, Oregon last month.

There is lots of good content available from PyCon 2016 on YouTube. What particularly struck me was what one could call a mundane tool for file transfer.

This tool, called magic-wormhole, allows any two systems, anywhere, to send files to each other (via an intermediary), fully encrypted and secured.

This beats doing an scp from system to system, especially if the receiving system is behind a NAT and/or firewall.

I manage lots of systems, for myself as well as for my work at Red Hat. Over the years I've settled into a good workflow for when I need to send files around, but all of it involved techniques like using http, or scp, or even miredo.

But magic-wormhole is so easy to set up, and uses webrtc and encryption, that I think it deserves a much higher profile and wider use.

On the Fedora 24 systems I have, I had to ensure that the following were all set up and installed (assuming you already have gcc installed):

a) dnf install libffi-devel python-devel redhat-rpm-config

b) pip install --upgrade pip

c) pip install magic-wormhole

That’s it.
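Usage is just as simple: the sender gets a short one-time code phrase, the receiver types it in, and the transfer happens. A sketch of a session (the file name and code phrase below are made up):

```shell
$ wormhole send slides.pdf
Wormhole code is: 7-crossover-clockwork

# on the receiving machine, anywhere on the Internet
$ wormhole receive 7-crossover-clockwork
```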

Now I would want to run a server to provide the intermediary function myself, instead of depending on the goodwill of Brian Warner.


UEFI and Fedora/RHEL – trivially working.

My older son has just enrolled in my alma mater, Singapore Polytechnic, to do Electrical Engineering. It is really nice to see that he has an interest in that field and, yes, it makes me smile as well.

So, as part of the preparations for the new program, the school needs students to use certain software as part of the curriculum. Fortunately, getting a computer was not an issue per se, but what bothered me was that the school "is only familiar with windows", and so the applications needed are also meant to run on windows.

One issue led to another, and eventually we decided to get a new laptop for his work in school. Sadly, the computer comes with only windows 8.1 installed and nothing else. The machine has ample disk space (1TB), and the system was set up with two partitions – one for the windows stuff (about 250G) and the second partition as the "D: drive". I had not seen that in years.

I wanted to make the machine dual-bootable and planned to repartition the second partition in two, with about 350G allocated to running Fedora.

Then I hit an issue. The machine was installed with Windows using UEFI. While UEFI has some good traits, it unfortunately throws off those who want to install another OS alongside – i.e., to dual-boot.

Fortunately, Fedora (and RHEL) can be installed on a UEFI-enabled system. This was taken care of by work done by Matthew Garrett as part of the Fedora project. Matthew also received the FSF Award for the Advancement of Free Software earlier this year. It could be argued that perhaps UEFI is not something that should be supported, but then again, as long as systems continue to be shipped with it, the free software world has to find a way to continue to work.

The details around UEFI and Fedora (and RHEL) are all documented in the Fedora Secure Boot pages.

Now, on to describing how to install Fedora/RHEL on a UEFI-enabled system:

a) If you have not already done so, download the Fedora (or RHEL) ISOs from their respective download pages.

b) With the ISOs downloaded, if you are running a Linux system, you can use the following command to create a bootable live USB drive with the ISO:

dd  if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb

assuming that /dev/sdb is where the USB drive is plugged in. The most interesting thing about the ISOs from Fedora and RHEL is that they are already set up to boot on a UEFI-enabled system, i.e., there is no need to disable Secure Boot in the firmware settings.
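On a current system, a slightly more careful version of that dd step first double-checks which device the USB drive is, since writing the image to the wrong disk is unrecoverable (/dev/sdb here is still an assumption, and the status=progress flag needs a reasonably recent GNU coreutils):

```shell
$ lsblk                          # confirm the USB drive really is /dev/sdb
$ sudo dd if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb \
      bs=4M status=progress conv=fsync
```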

c) Boot up the target computer via the USB drive.

d) In the case of my son's laptop, I had to repartition the "D: drive", so after booting up from the USB device I did the following:

i) (in Fedora live session): download and install gparted (sudo yum install gparted) within the live boot session.

ii) start gparted and resize the "D: drive" partition. In my case, it was broken into two partitions, with about 300G for the new "D: drive" and the rest for Fedora.

e) Once the repartitioning is done, go ahead and choose the “Install to drive” option and follow the screen prompts.

Once the installation is done, you can safely reboot the machine.

You will be presented with a boot menu to choose the OS to start.



A helper note for family and friends about your connectivity to the Internet from July 9 2012

This is a note targeted at family and friends who might find that they are not able to connect to the Internet from July 9, 2012 onwards.

This only affects those whose machines are running Windows or Mac OSX and have a piece of software called DNSChanger installed. DNSChanger modifies a key part of the way a computer discovers other machines on the Internet (the Domain Name System, or DNS).

Quick introduction to DNS:

For example, say you want to visit You type this into your browser and, magically, the CNN website appears in a few seconds. The way your browser figured out how to reach the server was the following:

a) The browser took the domain name and did what is called a DNS lookup.

b) What it would have received from the DNS lookup is a mapping of the name to a bunch of numbers. In this case, it would have received four "A" (address) records, each of the form "<name>  60  IN  A  <IP number>".

c) The numbers in those records are the Internet Protocol (IP) numbers of the servers on which resides. You notice that there is more than one IP number. That is for managing requests from millions of systems and not having to depend on only one machine to reply. This is good network architecture. For fun, a lookup of another popular site returned a CNAME (Canonical Name) record followed by five A records: the name queried is an alias for another, canonical name, which in turn has 5 IP#s associated with it.

d) The beauty of this is that in a few seconds, you got to the website that you wanted to without remembering the IP # that is needed.
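That entire lookup dance is a single call in most programming languages; here is a small Python sketch of what the browser is doing under the hood:

```python
import socket

def lookup(name):
    """Ask the system resolver (which on Linux consults /etc/resolv.conf)
    to map a host name to an IPv4 number, as the browser does."""
    return socket.gethostbyname(name)

# "localhost" resolves locally, with no network needed
print(lookup("localhost"))  # usually
```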

Why is this important? If you have a cell phone, how do you dial the numbers of your family and friends? Do you remember their phone numbers by heart? Not really – or at least not anymore. You probably know your own number and those of a small, close group (your home, your work, your children, spouse, siblings). Even then, their names are in your contact book, and when you want to call (or text) them, you just punch in their names and your phone looks up the number and sends out the call.

The difference between your cell phone directory and the DNS is that you control what is in your phone directory. So a name like "Wife" in your phone could point to a phone number that is very different from a similar name in your friend's phone directory. That is all well and good.

But on the global Internet, we cannot have name clashes, and that is why domain names are such hot things and why people snapped up a very large chunk of names during the rush in the late 1990s.

Now on to the issue at hand

So, what's that got to do with this alarmist issue of connecting to the Internet from July 9, 2012?

Well, it has to do with the fact that there was a piece of software – malware, in this case – that got installed on machines running Windows and Mac OSX. On all computers, the magic of the DNS lookup is maintained by a file which contains information about which Domain Name Server to query when presented with a domain name like

For example, on my laptop (which runs Fedora), the file that directs DNS lookups is called /etc/resolv.conf. It is the same file on Mac OSX, and I think there is something similar in the Windows world as well. Fedora and Mac OSX share a common Unix heritage, and so many files are in common.

The contents of my /etc/resolv.conf file are:

# Generated by NetworkManager
search lan

The file is automatically generated when I connect to the network, and the crucial line is the one that begins with "nameserver", which points to my FonSpot wireless access point. But what is interesting is that my FonSpot access point is not a DNS server per se. In the setup of the FonSpot, I've got it to forward domain name lookups to Google's public DNS servers, whose IP #s are and

Huh? What does this mean? Simply put, when I type into my browser, the name's IP# is looked up by my browser asking the nameserver (the FonSpot), which in turn asks for an answer. If does not know, hopefully it will tell my browser which server to ask next. Eventually, when an IP # is found, my browser uses that IP # and sends a connection request to that site. All of this happens in milliseconds, and when it all works, it looks like magic.

What if you don't get to the site? What if the entry in the /etc/resolv.conf file pointed to some IP # belonging to a malicious entity that wanted to "hijack" your web surfing? There are legitimate uses of this redirection: for example, when you connect to a public wifi access point (like Wireless@SG), you initially get a DNS nameserver entry that belongs to the wifi access provider. Once you successfully log in to that access point, your DNS lookups are properly directed. This technique is called a "captive portal". My FonSpot is a captive portal, btw.

The issue here is that machines with the DNSChanger malware have their DNS lookups hijacked and directed elsewhere. See this note by the US Federal Bureau of Investigation about it.

It appears that the DNSChanger malware had set up a bunch of IP#s to maliciously redirect all access to the Internet. If your /etc/resolv.conf file has nameserver entries that contain numbers in the following ranges:

	- to
	- to
	- to
	- to
	- to
	- to

you are vulnerable.

Here's a test I did with an IP# in the first of those ranges on my Fedora machine:

[harish@vostro ~]$ dig @

; <<>> DiG 9.9.1-P1-RedHat-9.9.1-2.P1.fc17 <<>> @
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34883
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 4, ADDITIONAL: 5

; EDNS: version: 0, flags:; udp: 4096
;            IN    A

464951    IN    CNAME
241       IN    CNAME
252       IN    A

32951    IN    NS
32951    IN    NS
32951    IN    NS
32951    IN    NS

33061     IN    A
33061     IN    A
317943    IN    A
33297     IN    A

;; Query time: 305 msec
;; WHEN: Sun Jul  8 21:40:07 2012
;; MSG SIZE  rcvd: 242

Some explanation of what is shown above: "dig" (the "domain information groper") is a command that allows me, from the command line, to see what a domain's IP address is. With the extra "@", I am telling the dig command to use as my domain name server when getting the IP for the domain I queried. Currently, is being run as a "clean" DNS server by those whom the FBI has asked to run it.

Hence, what will happen on July 9th 2012 is that the arrangement under which the FBI has these clean servers answering queries will expire. Therefore, the command I executed above on July 8th 2012 will not return a valid IP number from July 9th 2012 onwards. While the Internet will keep working, people whose systems have been compromised to point to the bad-but-made-to-work-OK DNS servers will find that they can't seem to get to any site by using domain names. If they instead use IP#s, they can get to the sites with no issue.

A quick way to check if your system needs fixing is to visit the DNSChanger check site NOW. If it reports OK, your system's /etc/resolv.conf (or the equivalent for those still running Windows) is not affected.

See the announcement from Singapore’s CERT on this issue.

FUDCon Kuala Lumpur 2012

It is wonderful to see the Fedora Users and Developers Conference kick off in Kuala Lumpur today, May 18 2012. The plan was for me to attend, do a keynote and also pitch a talk for the barcamp. But Murphy was watching how everything was coming together and pulled the rug from under me on Wednesday: I experienced what I found out later to be "tennis calf".

The symptoms were 100% spot on: I felt something hit my calf, followed by a pull. I quickly arranged to visit a sports doctor, who advised me on what needed to be done and recommended that I not travel for the next two to three days. Bummer. I was so looking forward to being among the Fedora community flying in from Europe, Australia, Vietnam, India, Sri Lanka, Bangladesh etc.

Among the things I wanted to talk about at FUDCon KL were the following:

  1. A demo of the Plugable USB 2.0 docking station that turns a Fedora 17 machine (server, desktop, laptop – does not matter) into a multi-seat Linux environment. I bought a pair from Amazon, received them on Wednesday, and they worked exactly as stated – plug the USB cable into the laptop's USB port, have a VGA monitor, USB keyboard and mouse plugged into the docking station, and voila, a fresh GNOME login screen. Amazing. You can even do an audio chat and watch streaming video via this setup. Really good stuff, and kudos to the developers for mainstreaming the code into the Linux kernel and working with the Fedora devs to make this work out of the box on Fedora 17. What was really amazing from my point of view was that this works even when a machine is booted from a Fedora 17 LiveCD/USB. While this might suggest that the K12LTSP project is no longer needed, I think there are clear areas where they complement each other.
  2. My journey in OpenShift. I wanted to share my learnings about OpenShift and Git and all the associated stuff. More importantly, the fact that OpenShift is being used for a 24-hour programming contest in Singapore called code::XtremeApps:: was important to share as well, to encourage international participation in the contest. I am hopeful that this blog post will trigger interest.

I guess all is not lost. The show has to go on, and I am glad to have facilitated a lot of it. But the main kudos has to go to the Malaysian Fedora Ambassadors, who managed to pull this off in the 8 weeks since they were awarded the hosting rights!

And it’s live now – SCO Open Server 5.0.5 running in a RHEL 6 KVM

As promised earlier, the final bits of getting an application that runs on the old hardware onto the VM are now all done.  I could have installed the app afresh, but I really did not want to spend too much time figuring out all its nuances.  Since this is really an effort that will eventually see the app replaced at some future date, I wanted to get it done the easy way.

So, over the last long weekend, I did the following:

a) Created a brand new VM running SCO Open Server 5.0.5 on the RHEL 6.2 machine. The specs of the VM: 2GB RAM, 8GB disk, qemu (not kvm), i686, with the network card set to PC-Net and the video set to VGA. These are the settings that let the SCO installation in the VM complete.

b) Meanwhile, on the old machine, I did a tar of the whole system – “tar cvf wholesystem.tar /”. This is probably not the best way to do it, but hey, I did not want to spend time picking out what I wanted and what I did not need from the old machine. The resulting “wholesystem.tar” file was about 2GB in size.

c) FTP’ed the wholesystem.tar file to the VM and untarred it onto the VM – “cd /; tar xvf /tmp/wholesystem.tar”. This resulted in a VM that could boot, but needed some tweaks.

d) The tweaks were:

  1. Changing the network card to reflect the VM’s settings
  2. Changing the IP#
  3. Disabling the mouse on the VM

SCO is msft-ish (or maybe msft learned it from SCO) in that scoadmin, the tool used to make these changes, requires the kernel to be rebuilt afterwards, which then necessitates rebooting the VM to pick up the new values.

e) Edited the /etc/hosts file to reflect the new IPs and added a line in the /etc/rc.d/8/userdef file to set the default route on the VM: route add default

The VM’s IP is and in the /etc/resolv.conf file, the nameservers were set to and (Google’s public DNS)
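Condensed into one runnable sketch, the archive-and-restore move from steps (b) and (c) looks like this. Scratch directories stand in for the real machines here; on the actual systems it was “tar cvf wholesystem.tar /” on the old box, an FTP transfer, then “cd /; tar xvf /tmp/wholesystem.tar” on the VM:

```shell
#!/bin/sh
# Demonstrate the archive-everything-and-restore pattern on a scratch tree.
set -e
SRC=$(mktemp -d)   # stands in for the old machine's filesystem
DST=$(mktemp -d)   # stands in for the freshly installed VM
mkdir -p "$SRC/etc"
echo "hosts-file" > "$SRC/etc/hosts"

# "Old machine": archive everything under $SRC into one tarball
tar cf /tmp/wholesystem-demo.tar -C "$SRC" .

# "VM": unpack the tarball over the destination tree
tar xf /tmp/wholesystem-demo.tar -C "$DST"
cat "$DST/etc/hosts"
```

The same caveat applies as above: unpacking a whole-system tarball over a live root clobbers device-specific configuration, which is exactly why the tweaks that follow were needed.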


a) The old machine had two printers – an 80-column and a 132-column dot matrix printer – connected to its serial and parallel ports.  I did not want to deal with that for the VM, so I got hold of two TP Link PS110P print servers. What’s nice about these is that they are trivial to work with (they run Linux anyway), and by plugging them into the printers (even the serial printer had a parallel port), both printers were on the network, making printing from the SCO VM trivial.

b) Configuring the SCO VM to print to the network printers was done using the rlpconf command. The TP Link print server has an amazing array of options; I picked the LPR option and the LPT0 and LPT1 device queues on the two TP Link print servers. While scoadmin has a printer settings section, for some reason the remote printers set up by it never quite worked.  In any case, rlpconf edits the /etc/printcap file to reflect the remote printers, and that is all that is needed.  Here’s what /etc/printcap looked like after the rlpconf command was run:

cat /etc/printcap
# Remote Line Printer (BSD format)
#       :lp=:rm=rhel6:rp=rhel6-pdf:sd=/usr/spool/lpd/rhel6-pdf:

The IP #s were set in the TP Link print servers along with their respective print spools.
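For reference, a remote-printer entry in BSD printcap format generally looks something like this (the queue and host names here are made up; rlpconf writes the real ones):

```
lp1|printer on the TP Link print server:\
	:lp=:rm=printserver1:rp=LPT0:sd=/usr/spool/lpd/lp1:
```

Here `rm` is the remote machine, `rp` the remote queue name, and `sd` the local spool directory; an empty `lp=` marks the printer as remote.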

c) So, once that was done, running “lpstat -o all” on the VM shows the remote printer status:

#lpstat -o all
lp1 is available ! (06,05,02,000000|01|448044|443364|04,02,02|8.2,8.3)
lp1 is available ! (03,02,03,000000|01|450384|445932|04,02,01|8.2,8.3)

Networking issues:

Initially, I had set up the VMs using the default networking setting in KVM.  The standard KVM networking assumes that the VM will reach out to the network, not run as a server per se. But this VM was going to be accessed by other machines (not just the RHEL6 host) on the office LAN, so the right thing to do is to set up a bridged network instead of a NATed one. RHEL 6.2 does not, by default, have bridging set up, and I think that needs to change. NATing is fine, but for the VM to be accessed from systems other than the host, additional firewall rules have to be set up; on a bridge, a one-liner iptables rule suffices: “iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT”.

I think the dialog box that sets up the VM via virt-manager should add an option asking if you need a bridged network. The option is there, but not obvious. So follow these instructions carefully – they work.
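For the record, setting up the bridge on RHEL 6 by hand comes down to a pair of files under /etc/sysconfig/network-scripts (the device names and DHCP here are assumptions; adjust for your NIC and addressing):

```
# ifcfg-br0 -- the bridge itself
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# ifcfg-eth0 -- the physical NIC, enslaved to the bridge
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
```

After a “service network restart”, the VM’s NIC can be attached to br0 in virt-manager.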

Well, that was it. SCO Open Server 5.0.5, with the application that was needed, is now running happily in a VM on a RHEL 6.2 machine, and printing goes over the network to a couple of print servers.

I must, once again, take my hat off to the awesome open source developers of KVM, QEMU, BOCHS etc. for the wonderful way all these technologies have come together in a Linux kernel, fully supported by Red Hat in Red Hat Enterprise Linux. There is an enormous amount of value in all of this; even a premium subscription for this RHEL installation is a fraction of the true value derived. The mere fact that a 20th-century SCO Open Server can now be made to run in perpetuity on a KVM instance is mind-boggling (even if Red Hat does not officially support this particular setup).


Fedora 17 before it is released

I decided to take the plunge and run Fedora 17 before it’s officially launched in May.  My system has been running Fedora 16 x86-64 since the launch last November and I must say that it has been solid – including the GNOME 3.x stuff.

What I did was the following:

a) Updated the system fully – “yum update -y”

b) Ensured that “preupgrade” is installed – “yum install preupgrade -y”

c) Ran the “preupgrade” command and let it set the system up.  This last step could take a few hours depending on your Internet speed. This was exactly what I did in November as well, when I went from Fedora 15 to Fedora 16.

When preupgrade finally completed, I rebooted the machine; it went through the final install and, voilà, all was good. The key apps I need to use on a daily basis – mutt, msmtp, Firefox, Chromium, x-chat, Thunderbird, vlc, twinkle, calibre, virt-manager – all worked as before. Or so I thought.

For what it’s worth, all of them work with the exception of vlc, which will play ogg and mp3 but fails to play flv and mp4 (it complains that it needs H.264 codecs). I thought those should be there, but I guess something might not have been properly updated.  Oh well. Not the end of the world, really. Everything else works.

The version of the kernel right now is:

[harish@vostro ~]$ uname -a
Linux 3.3.4-1.fc17.x86_64 #1 SMP Fri Apr 27 18:39:03 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

I did, however, encounter an interesting problem when I rebooted the machine into the newest kernel – my wifi did not come on. For a moment I thought something had broken. I rebooted the machine from a LiveUSB running Fedora 16 and the wifi worked, so it was not a hardware issue.  What I had to do was use the “Fn + F7” key combination (which toggles the machine’s wireless) and, bingo, the wifi came back on.  My machine is a Dell Vostro v13.
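If this happens to you, the rfkill utility (assuming it is installed) is a quick way to see whether the radio is merely soft-blocked or hard-blocked:

```
rfkill list          # shows "Soft blocked" / "Hard blocked" per radio
rfkill unblock wifi  # clears a soft block
```

A hard block can only be cleared at the hardware level – a physical switch or, as in my case, the Fn key combination.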

[harish@vostro ~]$ lspci
00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03)
00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 03)
00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
00:1f.0 ISA bridge: Intel Corporation ICH9M-E LPC Interface Controller (rev 03)
00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
07:00.0 Network controller: Intel Corporation WiFi Link 5100


[harish@vostro ~]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 003: ID 10f1:1a1e Importek Laptop Integrated Webcam 1.3M
Bus 003 Device 006: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)
Bus 003 Device 007: ID 413c:8161 Dell Computer Corp. Integrated Keyboard
Bus 003 Device 008: ID 413c:8162 Dell Computer Corp. Integrated Touchpad [Synaptics]
Bus 003 Device 009: ID 413c:8160 Dell Computer Corp. Wireless 365 Bluetooth

Let’s hope that by the time Fedora 17 is Generally Available, this little toggle is long gone.

Microsoft’s “open technology” spinoff

While I would like to stand up and cheer Microsoft for setting up “Microsoft Open Technologies, Inc”, I am not convinced that they are doing this in good faith.

Microsoft’s founder, Bill Gates, said in 1991 – 21 years ago – that

“If people had understood how patents would be granted when most of today’s ideas were invented, and had taken out patents, the industry would be at a complete standstill today.”

only to have all of that conveniently forgotten years later when they themselves started patenting software and suing people all over. These are the kinds of actions taken by a company that cannot innovate or create anything new and valuable.  It is also the same company whose CEO goes around saying things like:

“Linux violates over 228 patents, and somebody will come and look for money owing to the rights for that intellectual property,”

There are too many of these statements and blatant lies from a company that has lost its ethical compass. This is the same company that is now pro-CISPA even after backing down from being pro-SOPA. Do read this statement from EFF about what’s wrong with CISPA.

Never mind all that. Clearly, Microsoft sees money in FOSS. It is business as usual for them in creating their new subsidiary.

If they are really serious about FOSS being part of their long-term future, I am sure they will be reaching out to many people in the FOSS world to join them. Thus far, all I have seen is a redeployment of their internal, dyed-in-the-wool MSFTies.

I think Simon’s commentary on the plausible reasons for Microsoft setting this new entity up is a good set of conspiracy theories, but I think Simon gives Microsoft too much credit.

Exposing localhost via a tunnel

I came across this tool, localtunnel, that offers a way to expose a localhost-based webserver (for example) to the Internet. It is a reverse proxy that gets you to a machine deep behind a firewall by bouncing off an externally reachable host running localtunnel.

I tested it out on my Fedora 16 laptop (all I had to do was run “gem install localtunnel”, as I already had Ruby installed).

I like the idea, but I am not entirely comfortable with the security exposure.

The Value of being Heard and Consulted

Some of you would know that I have been employed by a company called Red Hat since September 2003 – it will soon be nine years with the organization. That’s longer than my time at my startups (Inquisitive Mind and Maringo Tree Technologies) combined. In many ways it is not about Red Hat per se, but about Free Software (and Open Source, for that matter) and how the culture of Red Hat very much reflects the ethics and ethos of the Free Software movement.

Yes, Red Hat has to earn its keep by generating revenues (now trending past US$1 billion), and the magic of subscriptions – which pegs the transfer of significant value to customers by way of high-quality, reliable software and services – ensures that Free Software will continue to drive user/customer-driven innovation.

All of this is not easy to do. When I joined Red Hat from Maringo Tree Technologies, I went from being my own boss to working for a corporation. But the transition was made relatively easy because the cultural values within Red Hat resonated with me: Red Hat places a very high premium on hearing and engaging with its associates. I was Red Hat employee #1 in Singapore, and my lifeline to the corporation was two things: memo-list and the internal IRC channels. Later, as the Singapore office took on the role of Asia Pacific headquarters, we hired more people, and it is really nice to see the operation here employing over 90 people.

But in spite of the growth in headcount, the culture of being heard and consulted is still alive and thriving. It is a radically different organization, one that will challenge those joining us from traditionally run corporations where few or no questions are asked and all decisions are top down.  I am not saying that every Red Hat decision is 100% consulted on, but at least it gets aired and debated. Sometimes your argument is heard; sometimes it is accepted and morphed; sometimes it is rejected.  I think this interview of Jim Whitehurst that ran in the New York Times is a good summary.

Red Hat Enterprise Linux 6 comes to the rescue of SCO Open Server in a VM!

Almost two years to the day (plus or minus) since my earlier effort, I’ve finally gotten around to moving a friend’s ancient SCO OpenServer 5.0.5 to run on a modern operating system within a virtual machine.

My friend acquired a brand new Dell Xeon server with 8GB of memory and tonnes of disk space.  It came pre-installed with Red Hat Enterprise Linux 6. I got him to register with Red Hat Network and then set up the system and got it fully updated.  All’s well on that count.

Next was to take the experience from two years ago, where I managed to install SCO OpenServer 5.0.5 on a RHEL 5.4 system, and repeat it on the latest and greatest of systems.

First was to create ISOs of the CDs needed (dd if=/dev/cdrom of=NameOfCD.iso) and keep them in a directory for ISOs which I created under /opt.

Second was to fire up virt-manager (from the GUI, so that my friend could see what was happening) and then go about creating a new VM. virt-manager had problems starting up, which puzzled me.  This is 2012 and this is a server-class machine. It could not be that Dell shipped the machine with support for virtualization turned off in the BIOS, could it? How wrong I was. For reasons I cannot explain, Dell chose to DISABLE support for virtualization in the BIOS even on this server-class machine. I had to reboot the machine, go into the BIOS settings, enable the virtualization option and restart RHEL.

This time, firing up virt-manager worked like a charm and I proceeded to create a new VM.

The following screenshots are self-explanatory including the installation screens from SCO:

The key choices in the dialog boxes were as follows:

a) Check on the “Customize configuration before install”

b) Set Virt Type as qemu and Architecture to be i686

c) Change the NIC type to pcnet

d) Change the Video to vga

With those settings, the installation of the VM started.

The SCO installation is so archaic and ancient that it amazes me that I could still install it into a 21st century virtual machine! And kudos to the KVM and virt engineers!

As the SCO installation proceeds, there are a few things that need to be chosen:

a) The installation device is an IDE CDROM on the secondary master.

b) When choosing the “Hard Disk Setup”, change the “Tracking” to “Bad Tracking Off”. This enormously speeds up the “formatting” of the drive by SCO.

c) Change the “Network Card” to manual select and then choose “AMD PCNet-PCI Adapter”

d) And continue to the last screen and go ahead with the installation.

So, a few minutes later, it is all installed and the system shuts down.  You can then safely restart the VM and you should be at the default text console. Like any Linux machine, you have alternate screens available: use the VM window’s “Send Key” menu option to send “Ctrl-Alt-F1” etc. to the VM and it will switch among the available virtual consoles.

Once you are logged into the system, you can go ahead and use it.

I will follow up with the installation of a product called “Thoroughbred 8.4.1” in a subsequent post.

In the meantime, if you have additional SCO CDs such as:

a) SCO-Optional-Services.iso, or

b) SCO-RS-505A.iso, or

c) SCO-SkunkWare.iso, or

d) SCO-Vision-2K.iso, and

e) SCO-Voyager-5-JDK.iso,

You can use virt-manager’s “Details” view for the VM in question, choose the CDROM device, and connect it to the ISO that is needed. Once it is linked up, switch over to the VM’s console and, assuming you are logged in as root, type “mount /dev/cd0 /mnt” to mount it. For some reason, the first time I type the command it throws an error; the second time it succeeds. Then you have access to the ISO as a local CD.

Cool tech tip

Saw this on the @climagic feed:

“@climagic youtube-dl -q -o- | mplayer -cache 1000 - # Watch youtube streaming directly to mplayer”

So, do “yum install youtube-dl mplayer” on your Fedora machines; then you can pull in YouTube videos with the youtube-dl command and pipe them (the “|”) into the video player, mplayer, to watch immediately. No need for a browser – this is really cool.

Naturally, if all you want is to download the YouTube video and keep a copy, just use “youtube-dl [URL]”.

You can replace mplayer with vlc as well so the one above would look like this:

youtube-dl -q -o- | vlc -

(the trailing “-” tells vlc to read from its standard input)

OSCON 2011 – Tuesday July 26

Finally, I’ve found the time and motivation to attend the O’Reilly-organized Open Source Convention 2011 in Portland, Oregon.

It has been many years since I was last in Portland – in fact, the times I spent in Portland were when I was in school at OSU in the latter half of the 1980s. Most times, I would drive up from Corvallis on a Saturday morning, go to Powell’s, and spend the whole day there. I did do some hiking around the area, but it was Powell’s for me.

So, it is somewhat of a déjà vu, and yet new.

I have signed up for the sessions on Tuesday/Wednesday/Thursday and will also be supporting the Fedora team, who have a booth, as well as Open Source for America.

Tuesday’s sessions show a few HTML5 talks.  Looks like HTML5 is indeed the next new shiny thing. Maybe not. But it is nice to be in a techie session and actually do some coding – it is always a good adrenalin rush for me. Coding and hacking always have been.

Here’s the site the speaker Remy Sharp is using for his talk “Is HTML5 Ready for Production”. Cool stuff – he is now showing WebSockets as well. Awesome. His WebSocket server runs on node.js for superfast connections.

Change and Opportunity

Change and evolution are hallmarks of any open source project. Ideas form, code gets cut, repurposed, refined and released (and sometimes thrashed).

Much the same thing happens with teams of people.  In the True Spirit of The Open Source Way, people in teams will see individuals come in, contribute, leave. Sometimes, they return. Sometimes, they contribute from afar.

Change has come to Red Hat’s Community Architecture and Leadership (CommArch) team.  Max has written about his decision to move on from Red Hat, and Red Hat has asked me to take on the leadership of the group.  We have all (Max, myself, Jared, Robyn, and the entire CommArch team) been working hard over the past few weeks to make sure that transition is smooth, in particular as it relates to the Fedora Project.

I have been with Red Hat, working out of the Asia Pacific headquarters based in Singapore, for the last 8 years or so. I have had the good fortune to be able to work in very different areas of the business and it continues to be exciting, thrilling and fulfilling.

The business ethics and model of Red Hat resonate very much with me. Red Hat harvests from the open source commons and makes it available as enterprise-quality software that organizations and businesses big and small can run confidently and reliably. That entire value chain is two-way: the work Red Hat does to make open source enterprise-deployable gets funnelled back to the open source commons to benefit everyone. This process ensures that the Tragedy of the Commons is avoided.

This need to Do The Right Thing was one of the tenets behind the establishment of the Community Architecture and Leadership team within Red Hat. Since its inception, I had been an honorary member of the team, complementing its core group.  About a year ago, I moved from honorary member to full-timer in the group.

The team’s charter is to ensure that the practices and learnings that have helped Red Hat harness open source for the enterprise continue to be refined and reinforced within Red Hat.  The team has always focused on Fedora in this regard, and will continue to do so. We’ve been lucky to have team members who have held leadership positions within different parts of the Fedora Project over the years, and this has given us an opportunity to sharpen and hone what it means to run, maintain, manage, and nurture a community.

The group also drives educational activities through the Teaching Open Source (TOS) community, such as the amazingly useful and strategic “Professors Open Source Summer Experience” (POSSE) events.  If the ideas of open source collaboration and the creation of open source software are to continue and flourish, we have to reach out to the next generation of developers who are in schools around the world. If faculty members can be shown the tools of open source collaboration, the knock-on effect of students picking them up and adopting them is much higher. That can only be a good thing for the global open source movement.

This opportunity for me to lead CommArch does mean that, with the team, I can help drive a wider and more embracing scope of work that also includes the community and the newly forming Cloud-related communities.

The work ahead is exciting and has enormous knock-on effects within Red Hat as well as the wider IT industry.  Red Hat’s mission statement states: “To be the catalyst in communities of customers, contributors, and partners creating better technology the open source way.”

In many ways, CommArch is one of the catalysts. I intend to keep it that way.

Now all machines at home are on Fedora 15!

I spent 30 minutes this morning upgrading my sons’ laptops to Fedora 15. I used a Fedora 15 LiveDVD (installed on a USB) that I had created that included stuff that the standard Fedora 15 LiveCD does not because of space. Tools like LibreOffice, Scribus, Xournal, Inkscape, Thunderbird, mutt, msmtp, wget, arduino, R, lyx, dia, and filezilla. I’ve thrown in blender and some games into the mix as well.

The updates of the systems went super quick (20 minutes to first boot), and then it was on to Spot’s Chromium repo:

  1. su –
  2. cd /etc/yum.repos.d/
  3. wget
  4. yum install chromium

Following that, on to to get the free and non-free setup RPMs to get to the tools that are patent encumbered and otherwise forbidden to be included in a standard Fedora distribution.

  1. yum install
  2. yum install
  3. yum install vlc
  4. yum install thunderbird-enigmail

[Update, June 19, 2011 0050 SGT: Based on the comment from Jeremy on this post, I’m updating the instructions.]

The last bit is flash from Adobe – the 64-bit version:

  1. wget
  2. tar xvfz flashplayer10_2_p3_64bit_linux_111710.tar.gz
  3. cp /usr/lib64/mozilla/plugins/
  4. chmod +x /usr/lib64/mozilla/plugins/

Installing a 32-bit version of Adobe Flash for a 64-bit Fedora installation:

  1. Go to
  2. Install the 32-bit version, wrapped for 64-bit use
  3. ln -s /usr/lib64/mozilla/plugins-wrapped/ /usr/lib64/chromium-browser/plugins
  4. These steps should be sufficient for flash to be enabled for both Firefox and Chromium

Once done, restart your browser and you will have flash enabled.

Yes, I am aware that I’ve had to compromise and load up non-free software. It is less than ideal, and I am looking forward to GNU Flash maturing, as well as to MP3 and related codecs coming out of patent.

Printer/cups tip

Every time I update the OS on my laptops, I have to add the CUPS printer settings for the in-office systems. There used to be an internally usable RPM to do this, but I always thought that was not really a clean solution.

So, this post is more of a reminder to myself that all I need to do is the following:

echo “BrowsePoll” >> /etc/cups/cupsd.conf

service cups restart

And, voilà, like magic, the printers get discovered and all is well. Nice.

Early thoughts on GNOME 3

I must admit, the first time I installed the Fedora 15 alpha, I did it only to test out what GNOME 3 was all about. It looked like an interesting interface that would work on a tablet-like device, having used the Android-based Archos 10.1 for a while now.

When Fedora 15 was officially launched on May 24th, I decided to move my work machine (a Dell Vostro v13) from Fedora 14 to 15.

For the tl;dr, I like GNOME 3.

Now the rest of the story:

The default background looked like a curtain from another era. I hit the right mouse button to see what was available, but nothing came up. I know I have stuff on the Desktop – how do I get to that now? By moving the mouse to the top left corner, the desktop “collapses” to show a whole host of other things, among them the “search” box on the right side of the screen. I typed in “Desktop” and, among other things, it came up with “Places and Devices”. Hmm. Interesting way to navigate.

One of the best uses of Fedora for me has been the fact that I could share my network connection with anyone. I am often in situations where I have my 3G USB dongle connected and turn my laptop into a wifi hotspot. Alas, as I write this, that is not working in GNOME 3. It is one of the minor things I have to put up with for now. I am hopeful that it will be reinstated RSN.

In general, I think a lot of rethinking has gone into the design of GNOME 3. I like the fact that the desktop is kept really clean. I am one of those guilty of a crowded and busy desktop. Now all of that is hidden away in a FOLDER (which it was anyway) called Desktop. Maybe it is time to retire that Desktop folder meme as well.

Now that I’ve been using GNOME 3 for about two days, it has begun to grow on me.  All of my other machines at home (which my family uses) are running the older GNOME, and it now seems clunky and ancient.

Overall, I am pleased thus far. Just give me the means to share out my network and I’ll be productive.

My must-haves on any new Fedora installation

So, I’ve taken the plunge and updated my Fedora 14 to the next rev, Fedora 15. F15 ships with GNOME 3 by default. I am still finding my way around it, but it seems less clunky than GNOME 2.x. There is some minor stuff missing; I am hoping that the network-sharing part gets included in a hurry.

The purpose of this post is to document for myself, the extra apps that I include in a standard installation.

Firstly, I started the installation from a Fedora 15 x86_64 LiveCD. I turned on encryption of my /home directory for obvious security reasons. I think it should be made mandatory for everyone.

Once the system was all set up, I added the following:

a) go to – set up the free and non-free stuff

b) go to spot’s repo for the open sourced version of Chrome – chromium.

c) install xournal, mutt, msmtp, wget, arduino, scribus, inkscape, audacity, libreoffice, thunderbird-lightning, thunderbird-enigmail, etherape, nmap, lyx, vlc, dia, R-project, gimp, twinkle, virt-* and x-chat

d) added my sshtunnel alias to ~/.bashrc:

#setting up ssh tunnel
alias sshtunnel="ssh -C2qTnN -D 8080 &"

e) updating the network proxy to “socks, localhost, port 8080”.
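For my own future reference, here is what each flag in that sshtunnel alias does (the ssh host is elided, as in the alias itself):

```
# ssh -C2qTnN -D 8080 <host>
#   -C       compress traffic
#   -2       force SSH protocol 2
#   -q       quiet mode
#   -T       do not allocate a pseudo-tty
#   -n       redirect stdin from /dev/null (safe to run in the background)
#   -N       do not run a remote command; just forward
#   -D 8080  open a dynamic (SOCKS) proxy on local port 8080
```

With the network proxy pointed at socks/localhost:8080 as in step (e), all browser traffic then rides the encrypted tunnel.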

Open Source Java all the way

I am really pleased to see the IcedTea project doing so well that, for all the sites that need Java enabled in the browser, icedtea is more than sufficient.  It used to be the case that I needed to download the RPMs from for my installations before I could get access to, and more importantly for my sons,

I’ve just moved to the latest Fedora 15 on my Dell Vostro V13 laptop and, following my well-worn practise, checked that I could get access to DBS, CPF and RuneScape. And they all worked.

How does one know if Java is installed on the machine?

Start the browser (Firefox or Chromium) and type “about:plugins” into the URL bar.  On Chromium, you will see, among other plug-ins, a section that says:

IcedTea-Web Plugin (using IcedTea-Web 1.0.2 (fedora-2.fc15-x86_64))

The IcedTea-Web Plugin executes Java applets.
Name: IcedTea-Web Plugin (using IcedTea-Web 1.0.2 (fedora-2.fc15-x86_64))
Description: The IcedTea-Web Plugin executes Java applets.
Location: /usr/lib/jvm/java-1.6.0-openjdk-
MIME types:
MIME type                                        Description  File extensions
application/x-java-vm                            IcedTea      .class .jar
application/x-java-applet                        IcedTea      .class .jar
application/x-java-applet;version=1.1            IcedTea      .class .jar
application/x-java-applet;version=1.1.1          IcedTea      .class .jar
application/x-java-applet;version=1.1.2          IcedTea      .class .jar
application/x-java-applet;version=1.1.3          IcedTea      .class .jar
application/x-java-applet;version=1.2            IcedTea      .class .jar
application/x-java-applet;version=1.2.1          IcedTea      .class .jar
application/x-java-applet;version=1.2.2          IcedTea      .class .jar
application/x-java-applet;version=1.3            IcedTea      .class .jar
application/x-java-applet;version=1.3.1          IcedTea      .class .jar
application/x-java-applet;version=1.4            IcedTea      .class .jar
application/x-java-applet;version=1.4.1          IcedTea      .class .jar
application/x-java-applet;version=1.4.2          IcedTea      .class .jar
application/x-java-applet;version=1.5            IcedTea      .class .jar
application/x-java-applet;version=1.6            IcedTea      .class .jar
application/x-java-applet;jpi-version=1.6.0_50   IcedTea      .class .jar
application/x-java-bean                          IcedTea      .class .jar
application/x-java-bean;version=1.1              IcedTea      .class .jar
application/x-java-bean;version=1.1.1            IcedTea      .class .jar
application/x-java-bean;version=1.1.2            IcedTea      .class .jar
application/x-java-bean;version=1.1.3            IcedTea      .class .jar
application/x-java-bean;version=1.2              IcedTea      .class .jar
application/x-java-bean;version=1.2.1            IcedTea      .class .jar
application/x-java-bean;version=1.2.2            IcedTea      .class .jar
application/x-java-bean;version=1.3              IcedTea      .class .jar
application/x-java-bean;version=1.3.1            IcedTea      .class .jar
application/x-java-bean;version=1.4              IcedTea      .class .jar
application/x-java-bean;version=1.4.1            IcedTea      .class .jar
application/x-java-bean;version=1.4.2            IcedTea      .class .jar
application/x-java-bean;version=1.5              IcedTea      .class .jar
application/x-java-bean;version=1.6              IcedTea      .class .jar
application/x-java-bean;jpi-version=1.6.0_50     IcedTea      .class .jar
application/x-java-vm-npruntime                  IcedTea
If that section does not show up, you do not have Java enabled in the browser. In that case, on Fedora for example, you can choose the Add/Remove Software option, search for icedtea and install it. Once IcedTea is installed, restart your browser so that it picks up the new plug-in. That’s it. Open source Java FTW!
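For those who prefer the command line, the same install can be done from a terminal; a minimal sketch, assuming the package is named icedtea-web as it is in recent Fedora releases:

```shell
# Install the IcedTea-Web browser plugin (package name assumed: icedtea-web)
su -c 'yum install icedtea-web'

# Confirm the package actually landed before restarting the browser
rpm -q icedtea-web
```

After the install, restart the browser so it rescans its plugin directories.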

Is Vietnam blocking Facebook?

I am sitting at a lounge in Ho Chi Minh City’s international airport, connected to the wifi. Interestingly, I cannot reach Facebook. Here’s the dig and traceroute info:

$ dig

; <<>> DiG 9.7.3-RedHat-9.7.3-1.fc14 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15351
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;		IN	A

;; AUTHORITY SECTION:	86400	IN	SOA 2005010501 10800 3600 604800 86400

;; Query time: 17 msec
;; WHEN: Thu May 12 19:11:39 2011
;; MSG SIZE  rcvd: 96

$ dig

; <<>> DiG 9.7.3-RedHat-9.7.3-1.fc14 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22473
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;			IN	A

;; AUTHORITY SECTION:		86400	IN	SOA 2005010501 10800 3600 604800 86400

;; Query time: 15 msec
;; WHEN: Thu May 12 19:12:16 2011
;; MSG SIZE  rcvd: 92
# traceroute No address associated with hostname
Cannot handle "host" cmdline arg `' on position 1 (argc 1)

# traceroute No address associated with hostname
Cannot handle "host" cmdline arg `' on position 1 (argc 1)
# dig @

; <<>> DiG 9.7.3-RedHat-9.7.3-1.fc14 <<>> @
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22333
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;		IN	A


;; Query time: 128 msec
;; WHEN: Thu May 12 19:18:37 2011
;; MSG SIZE  rcvd: 50
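A useful cross-check in situations like this is to ask a resolver outside the local network for the same name; a hedged sketch (8.8.8.8 is just one well-known public resolver, and the hostname below stands in for whichever site appears blocked):

```shell
# Ask the default (local) resolver, then a public one, and compare answers.
dig +short www.example.com
dig +short www.example.com @8.8.8.8
# If the local resolver returns no A records (or bogus addresses) while the
# public resolver returns real ones, the local DNS is filtering the name.
```
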

Once I turned on my SSH tunnel, I could get to Facebook; not otherwise. Interesting.
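The tunnel in question is the standard OpenSSH dynamic forward; a minimal sketch, where the host name is a placeholder for any server you control outside the filtered network:

```shell
# Start a SOCKS5 proxy on localhost:1080, tunnelled through your own server.
# -f: go to background after authentication, -N: run no remote command,
# -D: dynamic application-level port forwarding (SOCKS).
ssh -fN -D 1080 user@your-server.example.org
# Then point the browser's SOCKS proxy at localhost:1080 so that both DNS
# lookups and traffic leave via the tunnel rather than the local network.
```
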

Managing open source skepticism

I had an opportunity to speak to a few people from a government tender drafting committee on Wednesday.  They are looking at solutions that will be essentially a cloud for a large number of users and have spoken to many vendors.

I was given an opportunity to pitch the use of open source technologies to build their cloud, and I think I gave it my best shot. I had to use many keywords – automatic technology transfer (you have the source code), maintaining national sovereignty, learning to engage the right way with the FOSS community, enabling the next generation of innovators and entrepreneurs, and preventing vendor lock-in.

By and large, I think the audience agreed, except for one person who said “yeah, now it is open source, but it will become proprietary like the others”. Obviously this person had been fed FUD from the usual suspects, and I had to take extra pains to explain that everything we at Red Hat ship is under either the GNU General Public License or the GNU Lesser/Library General Public License. The GPL means no one can ever close up the code, for whatever reason. I am not entirely sure I managed to convince that member of the audience. In a lot of ways, this is the burden we carry as Red Hatters: explaining our business model and how we engage with the FOSS community.

Glad to have participated in the Cloud Workshop in Penang

I am pleased to have spent two days at the National Cloud Computing Workshop 2011 held in Penang, Malaysia, April 11-12, 2011. Targeted at the Malaysian academic community, it offered insights into the initiatives the various universities in Malaysia are undertaking in rolling out an academic cloud, set up with a fully accountable Malaysian identity and access framework. I think this bodes well for their plans to push for a Malaysian Research Network (MyREN) Cloud, which is hoped to encourage collaboration among both faculty and students in sharing knowledge and learning. I was particularly pleased to have been invited to speak about cloud technologies from a Red Hat perspective, as well as to introduce the audience to the various open source collaboration and empowerment work Red Hat is doing through the Community Architecture team. When I mentioned POSSE and Red Hat Academy during my talk, as well as “The Open Source Way” and “Teaching Open Source”, I could sense the audience’s interest in wanting to know more. And true enough, the post-talk Q&A focused a lot on “how can we take part in POSSE”. It looks like there are going to be a few POSSEs in Malaysia this year! Let the POSSE bidding process begin!

On day two, I was invited to take part as a panelist with some of the other speakers to discuss the future of cloud in Malaysia and to throw up suggestions and ideas about what they could be targeting. My first suggestion was to create a “” as a definitive wiki-based resource that brings together the various research activities in Malaysia across the private and public universities as well as publicly funded research institutions. The key is that the site be wiki-based, so that there are no unneeded bottlenecks in updates and the information stays current. My second suggestion was to consider the various Grand Challenges and see if any of them are worth taking up. The point is to aim really high: shoot for the stars, and you will at least land on the moon if you miss. Aiming only for the moon may land you in the ocean!

Overall, I think the organization was good. I am looking forward to the presentation materials of the speakers to be made online and to the next event!

Cloud for Academics

I am pleased to have had an opportunity to speak, from both a Red Hat and an open source perspective, about cloud technologies to the academic community in Malaysia.

Clearly there is a lot to convey, and I am hopeful that they have an appreciation that they can, and are welcome to, participate in cloud-related projects. I hope they’ve understood that there are projects such as Deltacloud and related efforts to which they could direct their students (undergrad or grad) to participate.
For the benefit of all, here are some links that would be good to explore:
I was also asked about what Red Hat does for academics, and it was a perfect shoo-in to introduce both POSSE and Red Hat Academy. Hopefully I will run a POSSE in Malaysia really soon.

True Leadership and The Open Source Way

I live in the Free and Open Source World. The FOSS movement’s ethos and principles are quite core to me. I think this webinar featuring Charlene Li is required viewing. Remember, this is not about technology. It is about how you should do things, how you should be authentic, and how you should consider the notion of leadership.

This is a model that applies very well in daily life, including politics. Yes, politics. If you want to gain the trust of the population, openness, authenticity and honesty are very important. Lessons from The Open Source Way are very useful and appropriate as my country prepares for the upcoming parliamentary elections (likely to be on April 30, 2011).

NASA’s inaugural Open Source Summit

I missed the live streaming of the NASA OSS Summit, but it was mostly all captured and is available online. These are the links to the recordings:

Day One:

Day Two:

And a great post on OSDC.


Taking the higher ground

I am disappointed with the kinds of ad hominem attacks being directed at the person from the PAP who is being labelled as the PAP’s youngest candidate to be introduced this time around.

It is one thing to comment on how the MSM covered her introduction with a “Ring”-like photo on the front page – that criticism is about how the MSM made the classic editorial mistake of a bad photo – and it is another to engage in character assassination, which seems to be what is being done. Give the lady a chance. Everyone deserves a chance. Yes, even though I will never vote PAP, I will still want to hear them out. I am sure she has some sincerity and clearly wants to serve. She says that she has been working on the ground in the Ulu Pandan area for 4 years. Kudos to her then.

The vitriol is with regard to her husband being the principal private secretary to Lee Hsien Loong (the Prime Minister). That there is nepotism and/or cronyism in play could be a fair comment; but that is a field that is well oiled within the ruling party, so one should not be surprised.

The scenario that will disappoint my fellow citizens is if she is grouped into the GerrymandeRed-Constituency-scheme and that GRC does not get contested. In that case, she walks into parliament without actually being voted in.

Remember – in 2006,  only 34.27% of all voters VOTED for the PAP who went on to get 97.6% of seats in parliament! An unaccountable parliament could again be in place in 2011.

So, let’s take the higher ground. Let’s show the world that Singaporeans are fair and passionate people.  See Cherian’s post on this topic.

Interesting post from a non-techie moving to Fedora

A good friend of mine sent me a note about his friend’s experience in moving to a Fedora and Red Hat desktop environment.  That person is a non-techie and this is his report – all unsolicited – but posted with permission and anonymized.

I’ve installed both Fedora and Red Hat, here’s my first impressions:

1. Both Fedora and Red Hat are well designed. Because they use GNOME, both have a similar look and feel to Ubuntu. This is great as it makes for an easier transition! 🙂

2. Just like Ubuntu, after you first install Fedora and Red Hat, the system jumps onto the Internet and looks for software updates and security fixes that need to be installed.

3. With my high speed Internet connection, Fedora took several hours to download and install its initial updates.

I’m guessing with your connection that the initial update (and the annual update) will take a full day. Fortunately, during the update there were only two events that required me to click a button. Otherwise I was able to walk away from the computer and just let it do its business.

4. Red Hat took a bit less time in its initial update. I’m guessing this is because it has less software.

5. Fedora and Red Hat are identical in their look and feel. They have different applications pre-installed and, most importantly, Fedora has access to more software than Red Hat does.

Red Hat is very conservative in the software it includes. I’m guessing this is because it is typically used as a secure server for business. Hence, it doesn’t offer as much end-user software.

Note the difference in pre-installed software available as seen in the attached screen shots.

6. Finding and updating software is very similar to Ubuntu. I found the package lists easier to navigate in Ubuntu, but Fedora and Red Hat are still easy.

That’s what I have for you thus far!


Unfiltered feed from Al Jazeera

If you are running Fedora or Red Hat Enterprise Linux, you can watch the raw feed from Al Jazeera using this script:

======>8=====cut here=============
rtmpdump -v -r rtmp:// -y "aljazeera_en_veryhigh" -a "aljazeeraflashlive-live" -o - | mplayer -
======>8=====cut here=============
Save the preceding into a file, change the permissions to make it executable (chmod +x), and then you can run it.

Setting up the standard Android marketplace on the Archos 10.1

I was not a very pleased user of the Archos 10.1 ever since I got it last December. The issue centered on the Archos-supplied “AppsLib”, which was not all that efficient nor useful. It would start up slowly sometimes, crash at other times, and a lot of the apps that I’ve got on my Nexus One were not even available (like ConnectBot, for example). Apart from these inconveniences, the tablet is really a nice device, quite responsive and, despite its plasticky feel, robust and quite well built.

The lack of the standard Android marketplace had been gnawing at me for the longest time, and last night I came across a post suggesting that the XDA developers forum has a specific hack to address this. So, 10 minutes after downloading and installing the gAppInstaller, and two reboots later, the Archos 10.1 now has pride of place and has become a delightful device.

I am not entirely sure why Archos decided not to include the standard Android marketplace, but I reckon this has to do with them trying to differentiate. I think it is a huge mistake to take the path of a differentiated marketplace, for it splinters the ecosystem and does not leave the user in a good place.

What I would like next for the Archos is a decent jacket. That is elusive still!

At some point, I’d like to run Fedora on it as well.

Virtualization and the Internet

I had the privilege of speaking to a great group of network operators as part of the South Asian Network Operators Group conference held in Colombo, Sri Lanka from Jan 11-18, 2011. The topic I spoke on was entitled “Virtualization and the Internet”.

This is what thought leadership looks like!

I am pleased to see this note by Michael Tiemann, President of the Open Source Initiative. As 2011 opens up, I would not be surprised to see CPTN Holdings LLC begin to play the game that its founders want – going after people, groups and projects that might infringe software patents. It is widely agreed that software patents are an abomination (says someone no less than Bill Gates). I am troubled that all of this maneuvering will continue to confuse and complicate FOSS development.

Was I fair?

I read with amusement some of the comments that people made with regards to the chat I had with the Dell Rep about acquiring a N-series machine. The chat is posted here.

Most of the comments were about how they DID NOT KNOW of this option being available which was the intent of my post, but there was a subset of comments that were clearly annoyed with their perception that I was “rude”, “an ass”, “a douchebag”, makes “us computer scientists look bad” and so on.

Perhaps I am guilty of all of the above and I would apologize to both the Dell Rep and those who voiced their objections.

I bought my very first laptop from Dell in 1996. Although it came with Windows 95, I think, I loaded up either Slackware or Yggdrasil. That machine is long gone – the LCD started peeling off and the motherboard went bad. But the hard disk (I think it was a 500MB drive) survived and I’ve long since given it away. I then went through about 6 more Dell laptops (and oodles and oodles of Dell tower and pizza-box servers), and my current pride of place is reserved for an N-series Dell Vostro V13 running Fedora 14. I am, indeed, a long-time loyal customer of Dell’s.

With that in mind, that engagement with Dell was one of a committed customer who wanted to continue to recommend yet another Dell, but with Linux on it. Having been frustrated at not finding the N-series offerings on Dell’s site, I entered the chat in an annoyed frame of mind. No, that is not an excuse for any perceived bad behaviour, but I am a knowledgeable customer who knows about the N-series offering and got riled at not finding it.

In any case, the intent of my post has been achieved.  Now people are better aware that they can, if they want to, acquire a piece of hardware from a reputable vendor but with their choice of software.  When you empower and engage with your customers, both you and your customer win.  Doing business is not a zero-sum game. I would encourage those reading this post to listen to Prof Michael Porter’s interview on BBC that aired earlier this week on the nature of shared value/value shared.

Security breach of

Thanks to Mozilla for this pro-active reporting of the security breach.  If any of you reading this blog have an account on and have not received this note, please take action.

Mozilla Add-ons
date Tue, Dec 28, 2010 at 8:34 AM
subject Important notice about your account

Dear user,

The purpose of this email is to notify you about a possible disclosure
of your information which occurred on December 17th. On this date, we
were informed by a 3rd party who discovered a file with individual user
records on a public portion of one of our servers. We immediately took
the file off the server and investigated all downloads. We have
identified all the downloads and with the exception of the 3rd party,
who reported this issue, the file has been download by only Mozilla
staff.  This file was placed on this server by mistake and was a partial
representation of the users database from The file
included email addresses, first and last names, and an md5 hash
representation of your password. The reason we are disclosing this event
is because we have removed your existing password from the addons site
and are asking you to reset it by going back to the addons site and
clicking forgot password. We are also asking you to change your password
on other sites in which you use the same password. Since we have
effectively erased your password, you don’t need to do anything if you
do not want to use your account.  It is disabled until you perform the
password recovery.

We have identified the process which allowed this file to be posted
publicly and have taken steps to prevent this in the future. We are also
evaluating other processes to ensure your information is safe and secure.

Should you have any questions, please feel free to contact the
infrastructure security team directly at If you
are having issues resetting your account, please contact

We apologize for any inconvenience this has caused.

Chris Lyon
Director of Infrastructure Security

Interesting to see me quoted

I was pleasantly pinged by someone who said that I was being quoted in an article saying: “The best moment for me was the launch of Fedora 14 (and subsequently Red Hat (NYSE: RHT) Enterprise 6) along with the efforts,” wrote Harish Pillay in the TuxRadar comments, for example. “They augur well for 2011 and beyond.”

Well, it is true. Deltacloud is very critical so that corporates will not be saddled with the “mother of all lock-ins”. I cannot emphasize that enough. As more entities contemplate moving more of their operations to the cloud, it is crucial that the cloud service provider provides a fully documented means to ETC (Exiting The Cloud).

How to buy a Dell WITHOUT windows

I was asked by a friend to get a Fedora CD to her and her friend so that their children can learn to use Linux. I suggested that I would help by shipping the Live CDs as well as spending some time (along with my 2 sons) teaching their sons how to use Linux.

Then the request came back asking where they can get a new laptop without Windows, and that prompted my revisiting the website to see if I could get a machine without ’doze. I have a Dell Vostro V13 N-series (which came with Ubuntu preinstalled, the “N” meaning “No Windows”). So, that was what I was looking out for on the site. Search as I might, nothing showed up. It’s amazing how well hidden the N-series offerings are. I am very sure Microsoft’s marketing muscle is squarely behind it.

Now, since I know that there is such a thing as an N-series laptop, I clicked on the site’s “Live Chat” button, and the following is the transcript of what happened. I’ve replaced the Dell person’s name with “Dell Rep”.
16:35:51 Customer harish pillay
Initial Question/Comment:
16:35:56 System System
You are now being connected to an agent. Thank you for using Dell Chat
16:35:56 System System
Connected with Dell Rep
16:36:01 Agent Dell Rep
Welcome to Dell Sales Chat. My name is Dell Rep. I’ll be your personal sales agent. How may I assist you. If you proceed to place your order online, please indicate my name, Dell Rep, as your sales representative so that I’ll be able to track your order for you.
16:36:22 Customer harish pillay
Hi, Dell Rep. Can you point me to where I can get the n-series vostro v13?
16:36:33 Customer harish pillay
i do not want to buy windows for the machine.
16:37:01 Agent Dell Rep
we do not offer n-series of the Dell system online
16:37:07 Customer harish pillay
16:37:24 Customer harish pillay
does microsoft restrict sales of n-series online?
16:37:56 Agent Dell Rep
i’m not too sure on that but V13 has being replaced by the V130
16:38:23 Customer harish pillay
ok, so I would like to buy a v130 without an OS (I will settle for freedos).
16:38:36 Customer harish pillay
i prefer the n-series v-130 then.
16:39:51 Agent Dell Rep
the V130 is not offering any Free DOS version at the moment
16:40:01 Agent Dell Rep
let me check on the availability of the V13
16:40:13 Agent Dell Rep
we do offer Free DOS offline
16:40:50 Customer harish pillay
it does not matter if it has freedos or fedora (or even ubuntu). i want to buy the machine without any microsoft os.
16:41:07 Agent Dell Rep
any particular specifications on your mind?
16:41:47 Customer harish pillay
i will be running Fedora and/or Red Hat Enterprise Linux on them. I have the apps taken care of already.
16:42:22 Agent Dell Rep
any specific hardware requirement
16:42:51 Customer harish pillay
Why don’t you focus on asking about the OS? the hardware is OK as it is.
16:43:25 Customer harish pillay
64-bit, 8G would be nice, but 4G RAM is OK. USB (3.0 would be nice), bluetooth, wifi.
16:43:55 Agent Dell Rep
please note that you may find difficulties for the correct drivers as we have not tested on the compatibility of the drivers with the OS you intend to install
16:44:13 Customer harish pillay
not that you support windoze drivers anyway.
16:44:17 Agent Dell Rep
any other hardware requirement
16:44:33 Customer harish pillay
16:44:52 Agent Dell Rep
also is this purchase are for your company or personal?
16:45:05 Customer harish pillay
does it matter?
16:45:27 Agent Dell Rep
i need to generate the quotation for you
16:45:51 Customer harish pillay
Intel Corporation WiFi Link 5100, Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller,
16:46:19 Customer harish pillay
is there a price difference if the quote was for corporate vs consumer?
16:47:02 Agent Dell Rep
there’s no difference unless your company has a specific contract with Dell
16:47:40 Customer harish pillay
fair enough. give me the consumer quote first.
16:48:32 Agent Dell Rep
can i have your full legal name, address as well as your contact no
16:49:04 Customer harish pillay
:-). harish pillay, address
16:50:27 Agent Dell Rep
alright, let me work the quotation and emailed it to you?
16:50:40 Customer harish pillay
16:50:47 Agent Dell Rep
16:52:22 Customer harish pillay
so are we done or what?
16:52:42 Agent Dell Rep
unless you’ve others to add.
16:53:04 Customer harish pillay
so the quote will be without windows?
16:53:12 Agent Dell Rep
16:55:26 Customer harish pillay
that’s fine. can you provide me with the quote that shows with and without windows?
16:55:37 Customer harish pillay
i want to know the difference.
16:55:42 Agent Dell Rep
16:55:46 Customer harish pillay
not that i want windows.
16:56:50 Customer harish pillay
are you mailing the quote now?
16:57:17 Agent Dell Rep
give me about 5 mins and i shall be able to send it to you
16:57:56 Customer harish pillay
ok thanks. i will be keeping this chat transcript and blanking out your name.
16:58:12 Agent Dell Rep
16:58:57 Customer harish pillay
the reason for keeping the chat transcript is so that I can post this to my blog stating that there is a way to buy non-windows Dell machines but one has to ask for it.
16:59:26 Customer harish pillay
so, keeping your name off the transcript is key as it is not you but your organization that is at fault here.
16:59:32 Agent Dell Rep
we do have regular request for n-series of system from time by time
17:00:20 Agent Dell Rep
we do not offer is sometimes to avoid misunderstanding from certain customers where they look for the cheapest system and only to find out that no OS was installed
17:00:33 Customer harish pillay
and I want to make it a permanent request and something that I can find from your online catalog. As long as it does not appear, I think Dell is doing the whole world a disservice and pandering to Microsoft’s monopolistic heavyhand.
17:00:33 Agent Dell Rep
we had that quite a lot previously
17:00:56 Agent Dell Rep
that’s the reason we choose to offer all with the OS preinstalled
17:01:00 Customer harish pillay
and if you explain to people, they will understand.
17:01:26 Agent Dell Rep
not all customer are as understanding as you
17:01:39 Customer harish pillay
so long as dell hides the info (or makes it hard to find), these misunderstandings can happen.
17:02:07 Agent Dell Rep
we have lots of customer who choose the cheapest and only to find out that no OS is installed
17:02:57 Agent Dell Rep
we don’t really hide the info as long as a customer request for it, we’ll be able to offer
17:03:22 Agent Dell Rep
we just limit them options online to avoid misunderstanding
17:03:37 Agent Dell Rep
anyway perhaps we may work out something in future
17:17:37 Agent Dell Rep
i’ve just emailed both quotation to you. could you please check and revert
17:23:28 Agent Dell Rep
Is there anything else I may assist you? If there’s no further assistance required, you may email me at DELL REP shall you need further assistance / clarifications.
17:32:49 System System
The session has ended!
I have received a quotation for the Dell Vostro V13 and here are the numbers:
a) S$887.15 for the N-series
b) S$1045.79 for the same machine with ‘doze.
Hardware: Vostro V13 System Base (SU7300), Intel® Core™2 Duo Processor SU7300 (1.3GHz, 3M L2 Cache, 800MHz FSB) ULV, 13.3" HD Anti-glare LED LCD panel with camera, 4GB (1X4G) DDR3-1066MHz SDRAM, 1 DIMM, 500GB* Hard Drive, 7200 RPM, 6-cell Lithium Ion Sealed 30Whr Battery, Integrated Graphics Card, Intel(R) Wireless Network Card 5100 (802.11a/g/n), Dell Wireless 365 Bluetooth Module, Internal Dell(TM) Keyboard (English).
So, the Windows-tax is S$158.64. Now you know.

National Convention for Academics and Researchers, Hyderabad, India

I had the distinct privilege of attending and speaking at the National Convention for Academics and Researchers 2010 in Hyderabad on December 17 and 18, 2010. The event was held at the Mahindra Satyam Technology Center, an enclave of low-rise buildings that helps one get away from the horn-tooting noise of a typical Indian city. The setting was a pleasant, park-like environment.

I arrived at the location at about 10:30 am on Friday, Dec 17, and was greeted by nice cool weather (I reckon a daytime temperature of about 20C). There were about 4 low-rise buildings housing the conference auditoriums. I particularly liked the fact that those buildings were named after legendary Indian centers of learning like Nalanda and so on. Interestingly, I am not able to locate a map that shows the names of those buildings (and the Mahindra Satyam website is horribly broken when viewed in Chrome but OK in Firefox).
I participated in a few of the talks, with my own contribution during the FOSS and Education session (sadly, there are no online references to the session – the site has not been updated; probably will never be). For what it’s worth, here’s my presentation.
I spoke for about 15 minutes, touched on POSSE and TOSW, and threw out an invitation to the audience (90% of whom were faculty) to consider participating in a future POSSE to be run in India and thence to help run POSSEs themselves. All I can say is that I had an overwhelmingly positive response, and I think we have our collective hands full in making this happen in 2011 in India.
We need to urgently figure out how to scale POSSEs in 2011/2012, and I am inclined to look at the TEDx model to ensure consistency, quality and value.

Day 2 and feeling sick! Really sick!

Day two of the very last started a little late for me. I was developing a cold, a really bad cold. The one thing I always carry with me when I travel is vitamin C. This time, I completely forgot it (all my fault, not The Wife’s). I have found that if I take at least 1000 mg daily when I travel, I function well and, despite the usual timezone challenges, do not fall sick. But that was not the case this time.

I decided that I would take a slow start to day two, and fortunately, my colleagues in Red Hat India had arranged interviews with a couple of journalists in the morning. That suited me fine. The meeting was to be at the Oberoi Hotel at 10:30 am, and I found my way to the place well ahead of time (not wishing to be stuck in traffic for no good reason). It was good that I did this, as the developing cold was getting really annoying, and I was very glad that the concierge (a Mr Amit) at the Oberoi offered me, at no cost, a couple of paracetamol tablets. I took them with a couple of cups of hot tea with ginger and honey, and I was slowly beginning to feel better, just in time for the journalists.
All I can say is that it was nice to be able to chat with the journalists, who, thankfully, understood Red Hat and its business, which then gave me the time and energy to explain why nurturing and growing the open source community is just as critical and foundationally important for the long-term growth of a commercial open source business like Red Hat.
The interviews were over by about noon and that allowed me enough time to fight the noon Bangalore traffic and arrive at the venue by 1:30 pm.  After gulping down a nice vegetarian lunch (I guess all they had there was vegetarian lunches), it was time to proceed to Hall C for the Fedora MiniConf.
I must say that I was really pleased to see a number of people (40 perhaps) begin to fill the auditorium, which I think is the smallest of the three auditoriums at the centre. Rahul Sundaram, the team leader of the Fedora Ambassadors in India, kicked off the session and invited Amit Shah to speak about Fedora Virtualization: How it stacks up. I enjoyed Amit’s talk and learned a few things about KVM, which was nice considering that Amit is a core contributor to KVM! His talk was followed by Aditya Patawari, who spoke about “Fedora Summer Coding and Fedora KDE Network Remix“. The contents were good, but I think Aditya needs to do a little less pacing on the stage, for it tends to be distracting. A key lesson from Aditya’s talk, to me, was the need for greater modularization of packages without a massive penalty in the metadata tagging of the package system.
The third talk, IMHO, was the most fun for me. As someone who has spent most of his editing time using vi, with the occasional foray into emacs, this talk by SAG Arun, entitled “Exploring EMACS in Fedora – tips and tricks, packaging extensions”, was indeed refreshing. I think I shall now set the $EDITOR on my machine to emacs instead of vi!
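Switching the default editor is a one-liner in the shell profile; a minimal sketch for bash:

```shell
# Make emacs the default editor for programs that honour $EDITOR
# (git, crontab -e, and friends).
export EDITOR=emacs

# Persist it across sessions (bash assumed here):
echo 'export EDITOR=emacs' >> ~/.bashrc
```
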
I realized that the GPG keysigning that I wanted to run was not going to happen (as I had only gotten one participant) and that my cold, that was being held back by the earlier paracetamols and ginger/honey tea, was now coming back with a vengeance.  Added to that, I now had to catch a flight to go to Hyderabad for another event – the National Convention for Academics and Researchers. So, reluctantly, I had to cancel the GPG keysigning. The next time then!
I got into the car and arrived at the really nice Bangalore Airport, and when I got out of the car, there was a clear and distinct chill in the air which caused me to shiver. And boy was I shivering. It had been a long time since I felt that bad, and it did not help that the temperature outside was around 16C and all I had on was a t-shirt and jeans and a really bad cold. I am not sure if the shivering was due to the cold or to a fever which I felt I was having. I managed to make my way through the customary security, checked in, and found a pharmacy. The on-duty pharmacist recommended a fairly strong medication (in the form of tablets) that contained both paracetamol and anti-histamines. All for 200 rupees. Nice. That was OK, but I really wanted to know if I was running a fever and asked the pharmacist if she could just take my temperature. What she said was that “that would be considered an out-patient and we will have to charge you”. Huh? Just the temperature, ma’am. Nothing more. I told her, “that’s fine. I will figure this out”.
As the flight I was taking to Hyderabad would not be serving any meals (hey, it’s a budget airline), I figured that I had better get some grub in before taking the medication and the flight. The airport’s food offerings were nice and the environment really posh (yes, I am a sucker for well laid out airports), which made the miserable cold/shivering situation a lot easier to manage.
The flight, on Jet Airways, was on time and arrived in Hyderabad about 1.5 hours later, again on time. Nice flight, nothing spectacular.  All I could feel was the cold making me weak and tired.
I did not know what to expect at Hyderabad Airport – it was my first trip there. This was a spanking new airport! What a pleasant welcome for a weary traveller. Like the Bangalore airport, the Hyderabad airport is also built on land some 30-40 km away from the city center and accessible via a set of multi-lane highways. Nice.  Eventually, about 1.5 hours after arriving at Hyderabad airport, I was safe in the hotel, took a quick shower and crashed out. I needed to get out of this cold/flu/crappy feeling. Sleep will help.

FOSS.IN/2010 day 1 and Red Hat Enterprise Linux 6 launch in Bangalore

FOSS.IN (their Xth edition and allegedly their last) started off “in true style” – an hour late [this was what the MC said at the very beginning and is not an editorial comment from me.]

I listened to Danese Cooper, CTO of the Wikimedia Foundation, deliver the opening keynote and I did learn a significant amount about Wikipedia.  Here are some nuggets:
  • Wikipedia is the 5th largest site on the planet in terms of traffic
  • They have about 450 servers serving out the Wikipedia pages
  • The data centers are in Tampa, Florida and in Amsterdam, Holland.
  • They are looking for a 3rd data center somewhere in Asia – possibly in India or Singapore (any takers, National Library Board perhaps?)
  • They have about US$20 million in revenue, mostly from sponsorships and donations 
  • Are fiercely independent and are not looking for help or funds that can be construed as being biased
  • Have optimized their MySQL instances, among many other tweaks, to make the site extremely responsive.  As an aside, I think they are not even using Akamai for content caching.
  • When their site goes down for any reason, they will get calls from BBC, CNN etc as Wikipedia has become a key resource.
Danese’s talk lasted about 45 minutes, followed by a lively Q&A session.  Watch for the video, which I will post when I get a chance.
Day 1 was a good time to connect up with a whole lot of new folks: OLPC’s Manusheel Gupta; independent technologist Arjuna Rao Chavala; Wikimedia’s Alolita Sharma and Eric; and a whole slew of Fedora volunteers (for the Fedora Miniconference happening on Thursday).
The next talk I attended was “Hardware Design for Software Hackers” by Anil Kumar Pugalia.  I thought it was a good talk, focusing on using only open source tools (like the AVR toolchain, KiCad etc) to create hardware that can then be fabricated and deployed.  It was a fun talk, I felt.
Took a break from all of these talks and went over to Hotel Leela, where Red Hat India was holding the launch event for Red Hat Enterprise Linux 6.  It was nice to see the RHI folks there. They had in excess of 1,100 registrations for the event, and even if we assume a 50% attrition rate, that was still a number greater than the capacity of the ballroom.  So, to a packed audience, Red Hat’s story was told in 4 parts and I think it was an overall success.
Looking forward to day two of FOSS.IN!

Participate in this info-comm survey

I think this survey, being run by the Nanyang Technological University and the Singapore Computer Society, could use responses from across the world, not only Singapore.  So, please consider participating and making the survey results useful. Although I am not directly involved with the survey per se, I will post results from it here on this blog (and yes, I am trying to get the raw data under a CC license).

This is so clever!

When something is clever, it deserves pointing out.  Google posted a video about their Chrome OS, and that video had some interesting stuff, aka easter eggs.
What was nice is that the folks at Jamendo, which has a very large and growing collection of musicians and music put out under Creative Commons licenses, figured out the easter eggs and got themselves a Chrome OS laptop.  Well done!

GPG Keysigning at FOSS.IN/2010

I will be attending FOSS.IN/2010 from December 15-17 in Bangalore, India.

As part of the Fedora participation at the event, I will be running a GPG keysigning party.

This will be the first time I am running a GPG keysigning event and I am basing it on the experiences of Matt Domsch, as documented here.

For the session, please ensure that the following is adhered to (again, adopting the good work from Matt):

How To Participate (BOLD is mandatory, ITALICS is optional):

a) You need to pre-register for this.

b) If you do not already have a GPG keypair, get one done.

c) You may choose to add your ID into your key pair.

d) Submit your key to the keyserver before the keysigning party. To submit, you will need your KEYID from your keyring. Run the following command:

gpg --list-secret-keys | grep ^sec

which in my case will return:

sec   1024D/746809E3 2006-02-20

What you need to do is take the portion after 1024D/ (746809E3 in the example above) and submit that to the keyserver.
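If you prefer not to eyeball it, the key ID can be pulled out with a small pipeline – a sketch, assuming the `sec   1024D/746809E3 2006-02-20` output format shown above:

```shell
# Extract the short key ID (the part after the slash) from a "sec" line.
# Shown here on the sample line from above; in real use, feed it the
# output of `gpg --list-secret-keys` instead of printf.
sec_line='sec   1024D/746809E3 2006-02-20'
KEYID=$(printf '%s\n' "$sec_line" | grep '^sec' | sed 's|.*/\([0-9A-Fa-f]*\) .*|\1|')
echo "$KEYID"   # prints 746809E3
```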

e) To submit your KEYID, you need to execute the following command:

gpg --keyserver KEYSERVER --send-keys KEYID

Make sure you replace KEYID above with your actual key ID, and KEYSERVER with the address of the keyserver being used for the party.

f) Once the KEYID has been successfully submitted, email me your key fingerprint using the following command (substituting my email address for MY-EMAIL):

gpg --fingerprint KEYID | mail -s "keysigning key" MY-EMAIL

Just Before (all the following steps are mandatory)

a) If you did pre-register (i.e., you emailed me the info requested above), please print out your key fingerprint ONCE and bring it along.

b) If you did not send it ahead of time, you might have to print out multiple copies of your key fingerprint. One copy per person at the keysigning party.  I cannot confirm how many there will be but do watch this blog for that number.

c) To print out your fingerprint, you can use the tool “gpg-key2ps” (found in the pgp-tools RPM – “yum install pgp-tools”).

gpg-key2ps KEYID > fingerprint.ps

will generate the fingerprint of your key on one page. This document can be viewed using evince or, if you prefer, converted to a PDF using the ps2pdf command.

d) Run md5sum and sha1sum on the foss-in-keysigning-fingerprints.txt file.  The file foss-in-keysigning-fingerprints.txt will be generated shortly before the event and you will be notified by email of its availability. Print out the results of running both commands and bring along that piece of paper to the meeting.
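Step (d) then amounts to just two commands, assuming the file has been downloaded into the current directory:

```shell
# Compute both checksums of the fingerprints file; print the output
# of both commands and bring that printout along.
md5sum foss-in-keysigning-fingerprints.txt
sha1sum foss-in-keysigning-fingerprints.txt
```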

e) Bring along a government-issued ID with a photo of yourself in it. This document can be a passport, a national ID card or a driver’s license. It is very important that this document has a photo of yourself that is relatively recent and that this document is government issued.

In summary, right before the keysigning event, you will have two pieces of paper: one with your key fingerprint, and the other with the md5sum and sha1sum results of the foss-in-keysigning-fingerprints.txt file.

At the Keysigning Event

Since I am asking people to pre-register, you will find the needed files posted ahead of time. We will be READING out the values in the file to confirm a match.

Post Keysigning

Once the values are read out, you will need to do the actual signing of keys. For this, we will use a “CA – Fire and Forget” tool called caff. Caff can do bulk signing of keys and will then send off emails to all those whose keys you have confirmed. The recipients will then need to retrieve their signed keys, import them into their gpg keyrings and also upload them to the keyserver.

Please watch this space for the exact time and location of the GPG Keysigning event.

Red Hat Enterprise Linux 6 launch in Singapore

Red Hat is launching the next version of Red Hat Enterprise Linux, version 6, on December 3rd 2010 in Singapore at the M Hotel, Anson Road.  It starts at 9 am.  I will be sharing the RHEL 6 overview, features and roadmap.  Sign up here.



Time Programme
8.00am – 9.00am Registration & Welcome Snacks
9.00am – 9.15am Welcome Address
9.15am – 10.00am RHEL 6: Overview, Features, Roadmap

This presentation will provide an overview of the Red Hat Enterprise Linux 6 product, covering product goals, new features and capabilities, and packaging. The presentation will be useful for CIOs and IT managers who wish to learn about this new, industry leading operating platform and how it can help them achieve their enterprise computing goals.

10.00am – 10.45am Cloud Infrastructure Matters: Virtualization, Linux and more

Virtualization is the foundational technology for cloud computing, but it is also an important technology in its own right for achieving operation efficiencies in a modern datacenter. Virtualization helps organizations expand their IT capabilities and simultaneously lower capital and operational costs. We will explore key functionality and use cases for server and desktop virtualization. We will also discuss how you can build a virtualization architecture using Red Hat Enterprise Virtualization, and lay the groundwork for both internal and external Clouds using Red Hat technologies.

10.45am – 11.15am Tea Break
11.15am – 12.00pm PaaS, Present and Future: The Essentials for Building, Hosting, Integrating & Managing JBoss Applications in the Cloud

The ability to develop applications, seamlessly integrate them with existing heterogeneous environments and deploy them to a cloud infrastructure is what makes Platform as a Service (PaaS) solutions so attractive. But, the benefits of rapid time to market, increased flexibility and lower costs are not guaranteed. How you design and implement the solution is critical.

Many PaaS offerings introduce a new, proprietary application development environment. Others deliver a PaaS based only on simple developer frameworks, limiting choice and application portability. When evaluating offerings, it’s important to consider portability and interoperability in both development and deployment, support for the programming models you choose to employ, the breadth of the middleware reference architecture, and the availability of tools to assist you throughout the entire application life cycle – from development through management.

In this session we’ll cover the essential requirements for Platform as a Service, and discuss how you can leverage Red Hat’s JBoss Enterprise Middleware today to build, host, integrate and manage applications in public or private clouds.

12.00pm – 12.30pm Red Hat Training and Services: Real-World Perspectives

Enterprise businesses across a variety of industries and sectors rely on Red Hat training and consulting services to address their critical business demands. Learn more about Red Hat’s enhanced training programme, which upskills IT professionals with the knowledge and proven hands-on skills to optimize the performance of Red Hat technologies such as virtualization, cloud computing and Red Hat Enterprise Linux.

12.30pm – 12.45pm Question-And-Answer Session


Fedora 14 launched in Singapore

Fedora 14 was officially launched on November 29, 2010 at the Singapore Management University.  The event was jointly held with the launch of the Open Source Software for Innovation and Collaboration Special Interest Group of the Singapore Computer Society.

About 60 persons attended the event.  Here are some photos. The presentation I did about Fedora is here.
What the Fedora Ambassadors and I showed during the event was the following must haves:
a) Xournal – this is a really useful tool to help with the mundane task of “editing” a PDF.  Many a time, I have been stuck with a PDF form that needs a simple signature.  It is criminal to have to print out the PDF, sign it, and then scan it back into a PDF.  Xournal should be nominated for a Green Award for Software, if there is one.
b) NetworkManager‘s ability to share out the network: I used a 3G USB dongle (Bus 006 Device 003: ID 12d1:1001 Huawei Technologies Co., Ltd. E620 USB Modem) on the laptop that I was using for the launch.  I connected to the 3G network, then got NetworkManager to create a shareable wifi hotspot. This little capability is extremely understated. With the number of devices around you that are wifi-capable, this ability of a Fedora machine to become a wifi hotspot is extremely useful.  Yes, you can do tethering with cell phones (at least the Android 2.2 phones), but this being a standard capability of a Fedora box needs to be publicised.
c) Virtual Machine Manager: What demo is complete if you do not show virtualization? And the fact that virtualization is built into Fedora means that you can now go about experimenting with, using, breaking, fixing and updating various other systems.  This alone changes how we consume technology and how we can easily transition to the cloud.
It was an overall fun event.  I hope to see an increased participation by more folks in the Fedora community in Singapore.

More twists and turns

News is coming in that Novell is being bought by Attachmate for about US$2.2 billion, with Novell’s “intellectual properties” (patents, trademarks, copyrights) being sold to a consortium called CPTN Holdings LLC.  For some reason, CPTN Holdings LLC does not seem to have a web presence of their own. BTW, the site seems to be down right now (2330 SGT/1530 GMT, Nov 22 2010).

I can only speculate as to what the sale of the patents/copyrights to CPTN will mean.  The monies they pay out will need to be recouped, which means that they will become aggressive in their patent litigation efforts.  I think it is time NOW to ban software patents once and for all.


Reflections of a week of ISO/IEC JTC1 meeting in Belfast

An interesting meeting and a great opportunity to meet people who, to me, seem to be caught in a world that has seen its glory days. I was asked by the Singapore IT Standards Committee to attend as a representative from Singapore. Nice to have an opportunity to represent Singapore on a global stage.

The ISO‘s and the IEC‘s baby, ISO/IEC JTC1, just concluded its 25th annual plenary in Belfast, Northern Ireland. The meeting lasted from 9am Monday November 8th till about noon Saturday November 13th at the Europa Hotel.  Five and a half days of meetings (9am-5pm daily) were needed to ensure that all of the Sub-committees (SCs) and various Working Groups (WGs) were able to update the JTC1 on what they have been able to do over the last twelve months in the various standards making activities.
Many topics were covered.  Cloud computing and green IT were the top two areas of interest and in need of standards. It is crucial that standards are developed to ensure that customers cannot be locked in.  I agree with the notion that the Cloud has the potential to be the mother of all lock-ins [I am not saying this because that link quotes the CEO of Red Hat (where I work), but because it is the big elephant in the room.]
International standards making has some interesting artifacts.  The entities that help make these standards (ISO, IEC, ECMA etc) have, as a key component of their revenue model, the sale of printed documents containing the standards! I must note that not all standards bodies have the sale of printed documents as a key portion of their revenue model (see IETF and W3C, for example). However, the group whose meeting I was in, JTC1, is struggling with the notion of making the standards documents freely downloadable, while the parent organizations of JTC1 (ISO and IEC) are against it. The plenary did put up a resolution asking that JTC1 standards be kept free of charge to download (and if someone wants a dead-tree version, then they pay for it). The continuing resistance from ISO and IEC to making this happen will ultimately see the irrelevance of JTC1 as an IT standards making entity. Perhaps a generational change is needed in the management and leadership of ISO and IEC for this to happen, by which time, I reckon, it will be too late.
Enough of doomsday stuff.  
I must record here that I am really pleased that, as part of the outcome of this year’s plenary, there was a recognition that the collaboration tools that JTC1’s constituent groups use are not as good as they can be.  They use some tool called Livelink (I think) and lots of conference calls etc.  The plenary resolved that they need better tools to help with collaboration and there was a call to set up an Ad-Hoc Group (AHG – yes, these standards bodies LOVE their acronyms). Singapore, France and a few other countries and SCs offered to be on the AHG; Singapore offered to chair it, and I am happy to say that I will be chairing this global effort to improve the tools that standards-making groups can use to do their work.  I am looking forward to making this happen – I have to report back at the next plenary in November 2011 in San Diego.
I will be tapping my colleagues in Red Hat and the greater open source community and trying to use the principles in The Open Source Way. Surely, tools like wikis, etherpad, IRC chats etc can significantly improve the communications and collaboration.  It is quite sad to see word processor-based documents being emailed around as the basis of discussions etc.  Where is the single source of truth for these documents?
So, if anyone there has ideas on easy and robust collaboration tools, tell me. I have to convene a meeting of the AHG to look at these and I certainly want to use quality tools to make it happen. If these tools are open sourced, it will be so much better.

Standards work

Not sure how the upcoming week will pan out, but I am definitely eager to learn what nuances and politics happen at the ISO.  I will be a delegate from Singapore attending the ISO/IEC JTC1 annual plenary in Belfast, Northern Ireland.

It will be a long 5.5 days and I am hoping to blog and dent about it. In the meantime, what are the fun things to do in Belfast?

Congratulations Fedora!

Really happy to see Fedora 14 unleashed at 1400 GMT today. I had an opportunity today to engage with a bunch of people who are new to Red Hat, explaining to them what Fedora is and how Red Hat makes money.  When I mentioned to the audience that Fedora 14 was being released today, they had confused expressions on their faces.  They did not get it.

So, there is a new version.  What’s the big deal? No, the audience did not ask – they did not know what Fedora was and how it continues to define the leadership of what Open Source software development and innovation is all about. I think I will do the needful and provide the group (whom I will be meeting tomorrow) a copy of the ISO image on a thumbdrive so that they can see what this is all about!

Being part of Community Architecture

It is time to tell the world that I have the distinct pleasure of becoming part of the Community Architecture team within Red Hat. This is indeed both an exciting as well as a deeply challenging opportunity.  Exciting because it means I get to continue to engage with some of the brightest minds within Red Hat who are chartered to think about how the FOSS community is to be kept alive and well. A thriving FOSS community will help continue the amazingly rapid innovation that Red Hat can then bring to enterprises.  Red Hat bringing the innovations from the community to enterprises ensures that everyone wins through the added engineering and QA/QE, which makes deploying FOSS in mission critical systems a no-brainer. Equally important, the investments by Red Hat in the additional QA/QE flow back out to the FOSS community.  All of this is the core of what has come to be termed The Open Source Way.

I have a lot of ideas as does the team. We need to build a thriving community of FOSS contributors in Asia Pacific (APAC in short).  For too long, APAC has been a net consumer of FOSS and contributors are few and far between.  I am hoping that with the added focus I now can bring to this space on a full-time basis, that the number of people contributing code, documentation, testing, new ideas etc from APAC countries will see an increase. It has always been my belief that smart people are evenly distributed around the world.  Why we see pockets of contributors is largely a function of connectivity and opportunity.  With the connectivity equation becoming moot, we need to foster opportunity.  For that, I am gung ho and ready to make the plunge.
My initial target group will be the Fedora, JBoss and DeltaCloud communities. Onward ho!

Rescuing a SCO OpenServer 5.0.5 machine to run in a VM on RHEL5.4

I received a frantic SMS from an old classmate running a travel agency business, saying that his SCO machine would no longer boot. He is running an application written in a 3GL/4GL product called Thoroughbred. He has had the system running since about 1999. And since it does not connect to the Internet, and only has internal LAN users coming in via telnet connections, he never needed to keep it updated.

So, what was the problem? His 10-year old IBM tower machine failed to boot. He got some help from a local vendor and found that it was the power supply that had failed, not the harddisk. With the power supply replaced, he is now back in business. The proposal given to him by the vendor who fixed the hardware was to “upgrade to a new machine, but we cannot guarantee that the OS and applications you have running will work”.

Enter, the lunch meeting. After hearing his story, I thought his situation would make for an interesting case study for virtualization. So, with his permission, I got hold of the “still in original shrink-wrap” SCO manuals and CDs, along with the Thoroughbred software, and installed SCO OpenServer 5.0.5 in a RHEL 5.4 VM.

Here is what I had to do:
a) dd’ed the SCO OpenServer 5.0.5 CD (“you can now boot from the CD if your BIOS supports it”) into an iso. It came up to about 280M only!
b) From virtual-machine-manager, chose a new machine with full virtualization
c) Specified 800MB as the drive size (imagine that!)
d) Kept the defaults for the rest.
e) Proceeded with the boot up and installation.
f) All went well. It is hilarious to see the setting up of the harddisk with the sector numbers and heads being cycled through – the “drive” is virtual, and kudos to the virtualization engineers, the SCO installation program was sufficiently convinced that it was all real.
g) Boot up.
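Step (a), for reference, is a straight dd of the physical CD into an ISO file – a sketch, assuming the drive appears as /dev/cdrom (on your machine it may be /dev/sr0 or similar):

```shell
# Image the SCO install CD into an ISO file, using 2048-byte (CD sector) blocks.
# /dev/cdrom is an assumption - adjust to your actual drive device.
dd if=/dev/cdrom of=sco-openserver-5.0.5.iso bs=2048
```

The resulting .iso can then be pointed at from virtual-machine-manager as the installation media.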

The boot up went well (user: root and password: fedora). But the network was not working. I had to start “scoadmin” to get into a curses based setup to configure the network device, and go back to virtual-machine-manager to set this VM to have a “pcnet” network card, as the default “hypervisor” network device did not seem to work. The “pcnet” is apparently an ISA device for which SCO has drivers; it was in the AMD section of the network hardware setup in the scoadmin command.

So here’s the /etc/libvirt/qemu/sco-openserver-5.0.5.xml:
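The piece that matters for the pcnet fix described above is the interface stanza – a reconstructed sketch, assuming the default NAT network, not the exact original file:

```xml
<!-- sketch: only the interface stanza; the model line is the important bit -->
<interface type='network'>
  <source network='default'/>
  <model type='pcnet'/>
</interface>
```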





[BTW, in the xml file above, I had to add a space after the < so that it would not be interpreted by lj. Edit out the extra space if you want to use the xml file.]

SCO OpenServer 5.0.5 does not do DHCP by default and needs an IP to be specified. So an IP, netmask and default gateway have to be specified manually. Interestingly, the network device is “net3”.

It could ping out of the box, telnet to external machines etc so the NAT setup of the VM is working fine.

Looks like I have solved my friend’s problem by using virtualization. Now he can go get a new machine, run RHEL 5.4 on it and have his ancient SCO OpenServer 5.0.5 + applications run in perpetuity.

Minor issue with NetworkManager and Fedora 12

Just solved a problem on my Fedora 12 32-bit machine (Acer Aspire One D250). For some reason, the nm-applet is not starting when I log into the desktop. When I start a terminal and fire up nm-applet as a normal user, I get the following error:

[user@machine ~]$ nm-applet

** (nm-applet:7050): WARNING **: request_name(): Could not
acquire the NetworkManagerUserSettings service.
Error: (9) Connection ":1.372" is not allowed to own the service
"org.freedesktop.NetworkManagerUserSettings" due to security policies
in the configuration file

But, when I substitute user as root (su -), I can start it and all’s well. Not a happy situation.

Did some checking and got a hint from –

So, what I have done is the following. In /etc/dbus-1/system.d/nm-applet.conf, I had to add the
following for my userid:
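A sketch of the stanza in question (substitute your actual login for youruserid; it goes inside the top-level <busconfig> element of that file):

```xml
<!-- sketch: allow this user to own the NM user-settings service -->
<policy user="youruserid">
  <allow own="org.freedesktop.NetworkManagerUserSettings"/>
</policy>
```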



I am using the “user” setting and not a group as suggested in the URL above as I do not want to create a non-standard group for this purpose. Suffice that my immediate problem is solved.

On the Fedora 12 machine that is experiencing this issue, policykit is as follows:

$ rpm -qa|grep polkit

On another Fedora 12 machine that all’s well policykit is as follows:

$ rpm -qa|grep polkit

Posted this to BZ 549253.

Automatically connecting to the Wireless@SG hotspots

Thanks to a caustic comment some years ago to Lee Kuan Yew by some visitors to Singapore, we have a nation-wide free wifi network called “Wireless@SG”. It has been a constant pain to use and after much ridicule, someone, somewhere has decided to do the Right Thing (TM).

So, what was the problem? Well, when you come across a Wireless@SG hotspot, you have to log in via a browser before you can continue. Yes, you have to provide a username and password (yes, “someone” wants to track you). Move away from that hotspot to another Wireless@SG node, you have to log in again. No mobility. Wireless is about mobility. The people at IDA claim that they are constrained by “security requirements” from the Ministry of Home Affairs. On the other hand, the MHA folks I spoke with say otherwise. So, who dropped the ball? Whatever it is, we have wasted a tonne of tax-payer monies to run the Wireless@SG system for the last few years. There has not been a single report of the service levels of Wireless@SG and how IDA is accounting for the monies spent. I have no issue with providing a quality service using tax dollars. But to provide something that is annoying to use and having no public accountability is plain wrong.

Wireless@SG is still there and there appears to be some people using it. Most of them are NOT mobile – they tend to be seated at some fast food restaurant or coffee place etc.

Now, fast forward to 2010: it looks like the IDA has finally gotten around to making Wireless@SG truly mobile. Why it took years, I cannot answer. Perhaps some boardroom battles had to be fought, who knows! Someone want to snitch? Post it anonymously if you must.

OK, so we now have proper mobility. Let’s look at the site that discusses this.

First, it suggests that you go to the site, which gives you two options to connect – one via a piece of software (closed source) to connect your devices. Interestingly, they only list:

Supported Operating Systems
	- iPhone
	- Windows Mobile 6.1 and above
	- Windows XP/Vista/7
	- Mac OS 10.5 and above

as the supported OSes. No Fedora? Why?

Nevermind that. Let’s look at the 2nd option – the manual way of doing this.

Supported Operating Systems
	- iPhone
	- Windows Mobile 6.1 and above
	- Symbian S60
	- Windows XP/Vista/7
	- Mac OS 10.4 and above 
	- Blackberry OS 
	- Android 1.6 and above

again, no Fedora? They have Android, so how difficult would it be to enable Fedora as well?

OK, let’s explore further. I had an account with iCellWireless and, choosing their column entry for Android, I got to see the Android configuration document.

The information is trivial. This is all you need to do:

	- Network SSID: Wireless@SGx
	- Security: WPA Enterprise
	- EAP Type: PEAP
	- Sub Type: PEAPv0/MSCHAPv2

and then put in your Wireless@SG username@domain and password. I could not remember my iCell id (I have not used it for a long time), so I created a new one – they needed me to provide my cellphone number to SMS the password to. Why do they not provide a web site to retrieve the password?

Now, from the info above, you can set this up on a Fedora machine (it would be the same for Red Hat Enterprise Linux, Ubuntu, SuSE etc) as well as on any other modern operating system.
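For instance, on any system where wpa_supplicant does the heavy lifting, the parameters above translate into a network block along these lines (a sketch – the identity and password values are placeholders for your own Wireless@SG credentials):

```
# Wireless@SGx via WPA-Enterprise / PEAP / MSCHAPv2 (wpa_supplicant.conf sketch)
network={
    ssid="Wireless@SGx"
    key_mgmt=WPA-EAP
    eap=PEAP
    phase2="auth=MSCHAPV2"
    identity="username@domain"
    password="yourpassword"
}
```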

Now that we have solved the single sign on problem with Wireless@SG, I want statistics on usage, support problems, etc etc etc.


Thanks to Xournal, you can now annotate any PDF and export it out to a new PDF. This is excellent for filling in forms, note taking, keeping a journal, writing using a stylus etc. I have just experimented with it on my newly minted Fedora 12 machine and it worked wonderfully. My set up has a Genius G-Pen 340 pen tablet plugged in via a USB port and it all just worked seamlessly. Kudos to all who made this happen!