And, they are online now


Over a week ago, I was pinged by @l00g33k on twitter with a picture of a description of a piece of code I wrote in 1982.

That led to a meet up and reliving a time when the only high-technology thing I had was a 6502-based single board computer complete with 2K of RAM. It was a wonderful meet up and @l00g33k was kind enough to hand over to me a bag with 10 copies of the newsletter published by the Singapore Ohio Scientific Users Group. That was the very first computer user group I joined.

Suffice it to say, I did help contribute to the newsletter by way of code to be run on the Superboard ][ – all in Basic.

I’ve scanned the 10 newsletters and they are now online.

I was really pleased to find in Vol 1 #3 (page 36) a program to generate a calendar. The code is all in Basic: feed it a year and out comes the calendar for the whole year.

Another piece of code, in Vol 1 #5 (page 36), is a program to print out a world map. That code was subsequently improved upon and published by another OSUG member to include the actual times in various cities – something that could only be done with the addition of real-time clock circuitry on the Superboard ][.

A third program, in Vol 1 #6 (page 26), implemented a Morse code transmitter.

I was very happy then (as I am now) that the code is out there, even though none of us whose code was published in the newsletters had any notion of copyright. Code was there to be freely copied and worked on. Yes, a radical idea which, starting in 1984, Richard Stallman codified through the GNU Project and later the Free Software Foundation (www.fsf.org).


You can’t wake a person who is pretending to be asleep**


Lots of what we take for granted found expression in the thoughts and writings of John Perry Barlow. He crafted “A Declaration of the Independence of Cyberspace” back in 1996. It held lots of truths that I found critical for all of us, regardless of where we find ourselves.

John passed away on 7 February 2018. I was privileged to have met him in Singapore on 28th March 1995 (update: thanks to Marv for the date) and what an honour it was. Singapore was in the midst of her “IT 2000 Master Plan” crafted by the National Computer Board (the earliest I can find of ncb.gov.sg is from 13 October 1997).

He had spoken at an event at the NCB and we then proceeded to have lunch at a Chinese restaurant at Clementi Woods park. Among the many things we chatted about was the future and what it means to be connected. Mind you, those were the days of dial-up modems, perhaps 56k baud, but what a thrill it was to hear about his visions.

I was fortunate to be able to keep in contact with him in the early 2010s via twitter and email. I was very glad that he remembered that trip, and that he found some of the things Singapore was doing then to be intriguing but challenging for the future. Digital rights, he felt, would be something we need to keep fighting for, because if the people don’t own that space, governments and big corporations will occupy it.

He was a founder of the Electronic Frontier Foundation, and on 7th April 2018 the EFF held a “John Perry Barlow Symposium” hosted by the Internet Archive. Do watch the recording to see how critical John was to lots of what we take for granted today.

Read his writings at the EFF which is hosting the John Perry Barlow Library.

Thank you John.

* John’s photo by Mohamed Nanabhay from Qatar,  CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=66227217

**https://www.quotes.net/quote/4718

Seeking a board seat at OpenSource.org


I’ve stepped up to be considered for a seat on the Board of the Open Source Initiative.

Why would I want to do this? Simple: most of my technology-based career has been made possible because of the existence of FOSS technologies. It goes all the way back to graduate school (Oregon State University, 1988), where I was able to work on a technology called TCP/IP, which I built for the OS/2 operating system as part of my MSEE thesis. The existence of newsgroups such as comp.os.unix, comp.os.tcpip and many others on Usenet gave me a chance to learn, craft and make happen networking code that was globally usable. If I had not had access to the code available on those newsgroups, I would have been hard-pressed to complete my thesis work. The licensing of the code then was uncertain and arbitrary and, thinking back, there was not much evidence that one could actually repurpose the code for anything one wanted.

My subsequent involvement in many things back in Singapore – the formation of the Linux Users’ Group (Singapore) in 1993 and many others since then – was only doable because source code was available for anyone to do with as they pleased and to contribute back to.

Suffice it to say, when the Open Source Initiative was set up twenty years ago in 1998, it was a watershed event: it meant that the Free Software movement now had an accompanying, marketing-grade brand. This branding has helped spread the value and benefits of Free/Libre/Open Source Software for one and all.

Twenty years of the OSI have helped spread the virtue of what it means to license code in a manner that benefits recipients, participants and developers alike – a win-win-win. This idea of openly licensing software was the inspiration for the formation of the Creative Commons movement, which serves to provide Free Software-like rights, obligations and responsibilities for non-software creations.

I feel that we are now at a very critical time to make sure that there is increased awareness of open source, and we need to build and partner with people and groups within Asia and Africa around FOSS licensing issues. Collectively, we need to ensure that up-and-coming societies and economies stand to gain from the benefits of collaborative creation/adoption/use of FOSS technologies for the betterment of all.

As an individual living in Singapore (and Asia by extension), being in the technology industry, and given the extensive engagement I have with various entities:

I feel that contributing to OSI would be the next logical step for me. I want to push for a wider adoption and use of critical technology for all to benefit from regardless of their economic standing. We have much more compelling things to consider: open algorithms, artificial intelligence, machine learning etc. These are going to be crucial for societies around the world and open source has to be the foundation that helps build them from an ethical, open and non-discriminatory angle.

With that, I seek your vote for this important role.  Voting ends 16th March 2018.

I’ll be happy to take questions and considerations via twitter or here.

Wireless@SGx for Fedora and Linux users


Eight years ago, I wrote about the use of Wireless@SGx being less than optimal.

I must acknowledge that there have been efforts to improve access (and speeds), to the extent that earlier this week I was able to use a Wireless@SGx hotspot for two conference calls using bluejeans.com and zoom.us. It worked so well that for the two hours I was on, there was hardly an issue.

I tweeted about this and kudos must be sent to those who have laboured to make this work well.

The one thing I would want the Wireless@SG people to do is to provide a full(er) set of instructions for access including Linux environments (Android is Linux after all).

I am including a part of my 2010 post here for the configuration aspects (on a Fedora desktop):

The information is trivial. This is all you need to do:

	- Network SSID: Wireless@SGx
	- Security: WPA Enterprise
	- EAP Type: PEAP
	- Sub Type: PEAPv0/MSCHAPv2

and then put in your Wireless@SG username@domain and password. I could not remember my iCell id (I have not used it for a long time) so I created a new one – sgatwireless@icellwireless.net. They needed me to provide my cellphone number to SMS the password. Why do they not provide a web site to retrieve the password?

Now from the info above, you can set this up on a Fedora machine (would be the same for Red Hat Enterprise Linux, Ubuntu, SuSE etc) as well as any other modern operating system.
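
For anyone on a recent Fedora or RHEL release, the same settings can also be applied non-interactively with nmcli. This is only a sketch based on the parameters above – the connection name is arbitrary, and username@domain/password are placeholders for your own Wireless@SG account:

```shell
# Create a Wireless@SGx profile: WPA2 Enterprise, EAP type PEAP,
# inner authentication MSCHAPv2 (matching the settings listed above).
nmcli connection add \
    type wifi ifname "*" con-name "Wireless@SGx" ssid "Wireless@SGx" \
    wifi-sec.key-mgmt wpa-eap \
    802-1x.eap peap \
    802-1x.phase2-auth mschapv2 \
    802-1x.identity "username@domain" \
    802-1x.password "your-password"

# Bring the connection up when in range of a hotspot.
nmcli connection up "Wireless@SGx"
```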

I had to recreate a new ID (it appears that iCell is no longer a provider) and apart from that, everything else is the same.

Thank you for using our tax dollars well, IMDA.

Three must haves in Fedora 26


I’ve been using Fedora ever since it came out back in 2003. The developers of Fedora and the greater community of contributors have been doing an amazing job of incorporating features and functionality that have subsequently found their way into the downstream Red Hat Enterprise Linux distributions.

There is lots to cheer Fedora for: GNOME, NetworkManager, systemd and SELinux, just to name a few.

Of all the cool stuff, I would particularly like to call out three must-haves.

a) Pomodoro – A GNOME extension that I use to ensure that I get the right amount of time breaks from the keyboard. I think it is a simple enough application that it has to be a must-have for all. Yes, it can be annoying that Pomodoro might prompt you to stop when you are in the middle of something, but you have the option to delay it until you are done. I think this type of help goes a long way in managing the well-being of all of us who are at our keyboards for hours.

b) Show IP: I really like this GNOME extension, for it gives me at a glance any of the long list of IPs that my system might have. This screenshot shows ten different network endpoints, and the IP number at the top is the public IP of the laptop. While I can certainly use the command “ifconfig”, it is nice to have the needed info right on the screen while I am on the desktop.

c) usbguard: My current laptop has three USB ports and one SD card reader. When it is docked, the docking station has a bunch more USB ports. The challenge with USB ports is that they are generally completely open: one can insert essentially any USB device and expect the system to act on it. While that is a convenience, the possibility of abuse is increasing, given rogue USB devices such as the USB Killer, so it is probably a better idea to deny, by default, all USB devices that are plugged into the machine. Fortunately, since 2007 the Linux kernel has had the ability to authorise USB devices on a device-by-device basis, and the tool usbguard allows you to do this via the command line or via a GUI – usbguard-applet-qt. All in, I think this is another must-have for all users. It should be set up with default deny, and the UI should be installed by default as well. I hope Fedora 27 onwards will do that.
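
For those who want to try it, the basic command-line flow is roughly the following sketch (the device ID at the end is an example; check usbguard’s documentation for the details):

```shell
# Generate a policy that allows everything currently plugged in,
# so the machine stays usable, then make it the active rule set.
sudo usbguard generate-policy > rules.conf
sudo install -m 0600 rules.conf /etc/usbguard/rules.conf
sudo systemctl enable --now usbguard

# List devices and their current allow/block state.
sudo usbguard list-devices

# Allow a newly plugged-in device by the ID shown in the listing.
sudo usbguard allow-device 7
```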

So, thank you Fedora developers and contributors.

Quarter Century of Innovation – aka Happy Birthday Linux!


[Screenshot: Linus’ 25 August 1991 announcement of Linux on comp.os.minix]

Happy Birthday, Linux! Thank you Linus for that post (and code) from a quarter of a century ago.

I distinctly remember coming across the post above on comp.os.minix while I was trying to figure out something called 386BSD. I had been following the 386BSD development by Lynne Jolitz and William Jolitz back when I was in graduate school at OSU. I am not sure where I first heard about 386BSD, but it could have been in some newsgroup or BYTE magazine (unfortunately I can’t find any references). Suffice it to say, the work on 386BSD was subsequently documented in Dr. Dobb’s Journal from around 1992. Fortunately, the good people at Dr. Dobb’s Journal have placed their entire contents on the Internet, and the first post of the port of 386BSD is now online.

I was back in Singapore by then and was working at CSA Research, building networking functionality for a software engineering project. The development team had access to a SCO Unix machine, but because we did not buy “client access licenses” (I think that was what they were called), we could only have exactly 2 users – one on the console via X-Windows and the other via telnet. I was not going to suggest to the management that they get the additional access rights (I was told it would cost S$1,500!!) and instead tried to find out why the 3rd and subsequent login requests were being rejected.

That’s when I discovered that SCO Unix was doing some form of access locking as part of the login process used by the built-in telnet daemon. I figured that if I could replace the telnet daemon with one that did not do the check, I could get as many people as we liked telnetting into the system and using it.

To create a new telnet daemon, I needed the source code and then to compile it. SCO Unix never provided any source code. I managed, however, to get the source code to a telnet daemon (from I think ftp.stanford.edu although I could be wrong).

Remember that in those days, there was no Internet access in Singapore – no TCP/IP access anyway. The only way to the Internet was via UUCP (and Bitnet at the universities). I used ftpmail@decwrl.com (an ftp-via-email service run by Digital Equipment Corporation) to go out and pull in the code and have it sent to me via email in 64K uuencoded chunks. Slow, but hey, it worked and it worked well.
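
That workflow is easy to re-create today. The sketch below substitutes base64 for uuencode (which is often not installed on modern systems) and a locally generated stand-in file for the real tarball, but the encode–split–mail–reassemble idea is the same:

```shell
# Stand-in for the tarball that would have been fetched via ftpmail.
head -c 200000 /dev/urandom > telnetd.tar.gz

# Encode the binary into mailable ASCII and split it into 64K chunks,
# much as the ftpmail service did.
base64 telnetd.tar.gz > telnetd.b64
split -b 64k -d telnetd.b64 chunk.

# ...each chunk.NN would have gone out as a separate email...

# On the receiving end: concatenate the chunks in order and decode.
cat chunk.* | base64 -d > telnetd-rebuilt.tar.gz
cmp telnetd.tar.gz telnetd-rebuilt.tar.gz && echo "identical"
```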

Once I got the code, the next challenge was to compile it. We did have the C compiler but for some reason, we did not have the needed crypto library to compile against. That was when I came across the incredible stupidity of labeling cryptography as a munition by the US Department of Commerce. Because of that, we, in Singapore, could not get to the crypto library.

After some checking around, I found someone who happened to have a full-blown SCO Unix system with the crypto library on it. I requested that they compile a telnet daemon without the crypto library enabled and then send me the compiled binary.

After some to and fro via email, I finally received the compiled telnet daemon without the crypto linked in and replaced the telnetd on my SCO Unix machine. Voila – everyone else on the office LAN could telnet in. The multi-user SCO machine was now really multi-user.

That experience was what pushed me to explore what I would need to do to make sure that both crypto code and the needed libraries are available to anyone, anywhere. The fact that 386BSD was a US-originated project meant that tying my kite to it would eventually discriminate against me, in not being able to get to the best of cryptography and, in turn, security and privacy. That was when Linus’ work on Linux became interesting for me.

The fact that this was done outside the US meant that it was not crippled by politics and other shortsighted rules and that if it worked well enough, it could be an interesting operating system.

I am glad that I did make that choice.

The very first Linux distribution I got was from Soft Landing Systems (SLS in short), which I had to fetch via the amazingly trusty ftpmail@decwrl.com service – it happily replied with dozens of 64K uuencoded emails.

What a thrill it was when I started getting the serialized uuencoded emails with the goodies in them. I don’t think I have any of the 5.25″ diskettes onto which I had to put the uudecoded contents. I do remember selling complete sets of SLS diskettes (all 5.25″ ones) for $10 per box (in addition to the cost of the diskettes). I must have sold them to 10-15 people. Yes, I made money from free software, but it was for the labour and “expertise”.

Fast forward twenty five years to 2016, I have so many systems running Linux (TV, wireless access points, handphones, laptops, set-top boxes etc etc etc) that if I were asked to point to ONE thing that made and is still making a huge difference to all of us, I will point to Linux.

The impact of Linux on society cannot be accurately quantified – it is hard. Linux is like water. It is everywhere, and that is the beauty of it. In choosing the GPLv2 license for Linux, Linus released a huge amount of value for all of humanity. He paid it forward.

It is hard to predict what the next 25 years will mean and how Linux will impact us all, but if the first 25 years is a hint, it cannot but be spectacular. What an amazing time to be alive.

Happy birthday Linux. You’ve defined how we should be using and adopting technology. You’ve disrupted, and continue to disrupt, industries all over the place. You’ve helped define what it means to share ideas openly and freely. You’ve shown what happens when we collaborate and work together. Free and Open Source is a win-win for all, and Linux is the Gold Standard of that.

Linux (and Linus), you’ve done well – thank you!

This is quite a nice tool – magic-wormhole


I was catching up on the various talks at PyCon 2016 held in the wonderful city of Portland, Oregon last month.

There is lots of good content available from PyCon 2016 on YouTube. What I was particularly struck by was what one could call a mundane tool: one for file transfer.

This tool, called magic-wormhole, allows any two systems, anywhere, to send files to each other (via an intermediary), fully encrypted and secured.

This beats doing a scp from system to system, especially if the receiving system is behind a NAT and/or firewall.

I manage lots of systems, for myself as well as part of my work at Red Hat. Over the years I’ve managed a good workflow for when I need to send files around, but all of it involved techniques like using http, or scp, or even miredo.

But to me, magic-wormhole is easy enough to set up, and encrypts everything end to end, so I think it deserves a much higher profile and wider use.

On the Fedora 24 systems I have, I had to ensure that the following were all set up and installed (assuming you already have gcc installed):

a) dnf install libffi-devel python-devel redhat-rpm-config

b) pip install --upgrade pip

c) pip install magic-wormhole

That’s it.
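
Once installed, using it is a sketch like this – two commands on two machines, with a short human-pronounceable code exchanged out of band (the file name here is illustrative, and the code shown is the example from the magic-wormhole documentation; each transfer generates its own):

```shell
# On the sending machine: wormhole prints a one-time code
# such as "7-crossover-clockwork".
wormhole send ./report.pdf

# On the receiving machine, anywhere on the Internet:
wormhole receive 7-crossover-clockwork
```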

Now I would want to run a server myself to provide the intermediary function, instead of depending on the goodwill of Brian Warner.

UEFI and Fedora/RHEL – trivially working.


My older son just enrolled in my alma mater, Singapore Polytechnic, to do Electrical Engineering. It is really nice to see that he has an interest in that field and, yes, it makes me smile as well.

So, as part of the preparations for the new program, the school needs certain software for the curriculum. Fortunately, getting a computer was not an issue per se, but what bothered me was that the school “is only familiar with windows”, and so the applications needed are also meant to run on Windows.

One issue led to another and eventually we decided to get a new laptop for his work in school. Sadly, the computer comes with only Windows 8.1 installed and nothing else. The machine has ample disk space (1TB), and the system was set up with two partitions – one for the Windows stuff (about 250G) and the 2nd partition as the “D: drive”. Have not seen that in years.

I wanted to make the machine dual bootable and went about planning to repartition the 2nd partition into two and have about 350G allocated to running Fedora.

Then I hit an issue. The machine had Windows installed using UEFI. While UEFI has some good traits, it unfortunately throws off those who want to install another OS alongside – i.e., to dual-boot.

Fortunately, Fedora (and RHEL) can be installed on a UEFI-enabled system. This was taken care of by work done by Matthew Garrett as part of the Fedora project. Matthew also received the FSF Award for the Advancement of Free Software earlier this year. It could be argued that perhaps UEFI is not something that should be supported, but then again, as long as systems continue to be shipped with it, the free software world has to find a way to continue to work.

The details around UEFI and Fedora (and RHEL) are all documented in the Fedora Secure Boot pages.

Now on to describing how to install Fedora/RHEL into a UEFI-enabled system:

a) If you have not already done so, download the Fedora (and RHEL) ISOs from their respective pages. Fedora is available at https://fedoraproject.org/en/get-fedora and RHEL 7 Release Candidate is at ftp://ftp.redhat.com/pub/redhat/rhel/rc/7/.

b) With the ISOs downloaded, if you are running a Linux system, you can use the following command to create a bootable live USB drive with the ISO:

dd if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb

assuming that /dev/sdb is where the USB drive is plugged into. The most interesting thing about the ISOs from Fedora and RHEL is that they are already set up to boot into a UEFI enabled system, i.e., no need to disable in BIOS the secure boot mode.
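
One caution here: dd will overwrite whatever device it is pointed at, so confirm the device node before writing. A sketch of a more careful sequence (the device name is illustrative – substitute your own):

```shell
# Identify the USB drive by its size/model/transport before writing.
lsblk -o NAME,SIZE,MODEL,TRAN

# Write the ISO to the whole device (not a partition), then flush
# buffers so it is safe to unplug.
sudo dd if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb bs=4M
sync
```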

c) Boot up the target computer via the USB drive.

d) In the case of my son’s laptop, I had to repartition the “D: drive” and so after boot up from the USB device, I did the following:

i) (in Fedora live session): download and install gparted (sudo yum install gparted) within the live boot session.

ii) start gparted and resize the “D: drive” partition. In my case, it was broken into 2 partitions with about 300G for the new “D: drive” and the rest for Fedora.

e) Once the repartitioning is done, go ahead and choose the “Install to drive” option and follow the screen prompts.

Once the installation is done, you can safely reboot the machine.

You will be presented with a boot menu to choose the OS to start.

QED.

Getting a good grip on the haze conditions


I feel that DPM Tharman Shanmugaratnam’s speech this past Monday, June 17 2013, at the eGov Global Exchange event – about the Singapore government going whole hog with 100% machine-readable data on data.gov.sg – was excellent. Finally, there is some sanity in government with regard to data (that has already been paid for by tax dollars) being open and freely available. No more discussion about “monetizing” the tax-payer-paid data. Let the public do as they please with the data.

So, it is with that as background that I want to see how best we can get the following done to address the haze conditions (as seen in the NASA satellite image) for the population that is at risk.

This is what we have in terms of data:

a) Data from the National Environment Agency regarding the Pollutant Standard Index and the PM2.5 values.

b) The US government’s AirNow.gov site, which gives a correlation between the various data measurements.

The NEA PSI data is shown on the site only for the current 24-hour period, and nothing is shown of the previous days. I don’t see any link on their site to look at earlier data. As such, I’ve set up a public document on Google Docs.

Now what I’d like to see is the mashing up of the data with maps and other relevant information, such as construction sites where there are workers outdoors, so we can see how quickly we can pull in the right resources to assist. There is already an effort underway (also geek.sg) to make sure that those populations at risk because of lack of information and/or safety equipment like N95 masks are reached and provided for.

[Update at 7:25 pm June 22, 2013]

Looks like the NEA site is transforming in a good way.  You can get historical data now.

This is too cool!


[harish@phoenix ~]$ traceroute 216.81.59.173
traceroute to 216.81.59.173 (216.81.59.173), 30 hops max, 60 byte packets
 1 registerlafonera.fon.com (192.168.10.1) 2.473 ms 2.937 ms 3.902 ms
 2 cm1.zeta224.maxonline.com.sg (116.87.224.1) 15.342 ms 15.664 ms 16.515 ms
 3 172.20.53.17 (172.20.53.17) 17.175 ms 17.540 ms 18.104 ms
 4 172.26.53.1 (172.26.53.1) 18.865 ms 20.381 ms 20.813 ms
 5 172.20.7.30 (172.20.7.30) 24.398 ms 24.337 ms 24.227 ms
 6 203.117.35.45 (203.117.35.45) 28.237 ms 17.013 ms 16.335 ms
 7 203.117.34.37 (203.117.34.37) 15.227 ms 21.645 ms 21.858 ms
 8 203.117.34.198 (203.117.34.198) 20.962 ms 21.042 ms 20.766 ms
 9 203.117.36.38 (203.117.36.38) 21.584 ms 22.500 ms 22.639 ms
10 paix.he.net (198.32.176.20) 213.814 ms 214.532 ms 216.222 ms
11 10gigabitethernet9-3.core1.sjc2.he.net (72.52.92.70) 209.283 ms 209.811 ms 206.368 ms
12 10gigabitethernet5-3.core1.lax2.he.net (184.105.213.5) 197.110 ms 199.926 ms 203.206 ms
13 10gigabitethernet2-3.core1.phx2.he.net (184.105.222.85) 231.479 ms 234.769 ms 234.712 ms
14 10gigabitethernet5-3.core1.dal1.he.net (184.105.222.78) 246.268 ms 246.252 ms 246.026 ms
15 10gigabitethernet5-4.core1.atl1.he.net (184.105.213.114) 273.176 ms 273.562 ms 273.933 ms
16 216.66.0.26 (216.66.0.26) 257.073 ms 257.860 ms 258.197 ms
17 * * *
18 Episode.IV (206.214.251.1) 279.888 ms 277.874 ms 280.236 ms
19 A.NEW.HOPE (206.214.251.6) 285.736 ms 284.384 ms 285.730 ms
20 It.is.a.period.of.civil.war (206.214.251.9) 291.342 ms 293.745 ms 293.975 ms
21 Rebel.spaceships (206.214.251.14) 295.027 ms 300.389 ms 300.605 ms
22 striking.from.a.hidden.base (206.214.251.17) 300.050 ms 300.106 ms 299.865 ms
23 have.won.their.first.victory (206.214.251.22) 284.885 ms 291.515 ms 293.083 ms
24 against.the.evil.Galactic.Empire (206.214.251.25) 282.759 ms 280.749 ms 280.269 ms
25 During.the.battle (206.214.251.30) 301.951 ms 300.714 ms 297.183 ms
26 Rebel.spies.managed (206.214.251.33) 306.370 ms * *
27 * to.steal.secret.plans (206.214.251.38) 304.887 ms 301.879 ms
28 to.the.Empires.ultimate.weapon (206.214.251.41) 292.549 ms 290.469 ms 291.832 ms
29 the.DEATH.STAR (206.214.251.46) 290.021 ms 281.892 ms 280.153 ms
30 an.armored.space.station (206.214.251.49) 283.677 ms 295.996 ms 285.008 ms
[harish@phoenix ~]$

Good stuff, Episode IV.
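
The trick, as far as I can tell, is plain reverse DNS: traceroute looks up a PTR record for each hop’s address, and whoever runs that netblock has published PTR records whose names spell out the opening crawl. You can query a hop yourself – assuming the service is still up – with either of these:

```shell
# Reverse-lookup one of the hops; the PTR record carries the joke.
dig +short -x 206.214.251.9

# Or, with the older tool:
host 206.214.251.9
```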

In a word, wow!


I am not sure if this is a report that contains stuff taken out of context, but if it’s true that PM Lee thinks that his government did not have clarity of vision, what does that mean for all the justifications of paying sky-high wages to the “ministers” who were, after all, the ones who should have looked out for us?

I await the back-pedalling and clarifications before commenting on what appears to me to be an admission of failure.

While we are talking about failures, let me point out a huge failure playing out in the Singapore civil service, in the form of the fiasco called the “Standard Operating Environment”. There is no one in the civil service who has anything positive to say about this fiasco that they will have to live with for the next umpteen years. The amount of tax dollars wasted, and continuing to be wasted, because of the closed, proprietary software chosen is appalling. We need to stop it. Now!

Mr Prime Minister, I made an offer to help set up a Singapore Open Institute that will propel the Singapore public sector rapidly forward with the adoption and use of open source software and make it innovative and forward thinking. The offer still has not been taken up by you or your office.

Let’s make rapid changes and changes for the better. I look forward to hearing from you at h dot pillay at ieee dot org.

Doing the right thing and a proposal


I am glad to read that the Prime Minister has decided to probe the sale of the applications (built and paid for using tax dollars) by the PAP Town Councils to the PAP-owned company AIM.

<stand up> <applause> <applause> <applause> <applause> <applause> <sit down>

I would like to know the following:

  1. Who will head this?
  2. What kind of time frame will this have to be done by – one month, one year, by next general election?
  3. In the meantime, what happens to the monies that have been spent (and are to be spent) in the transaction?
  4. Will there be a public disclosure of companies that are PAP-owned, and a list of transactions done by them with public sector agencies? I would expect the same from the WP and other political parties as well.

I would also like to hear from the Prime Minister on how we can ensure that all technology used/developed/deployed in any public sector entity in Singapore will FIRST consider open source solutions and, failing to find something suitable, then – with a request for exemption (RFE) filed, published and approved – look at non-open-source options.

The time is NOW to make the bold and exciting change, Mr Prime Minister. I am sure this is of no concern to you, but rest assured your legacy will include being acknowledged as the Open Source Prime Minister.

While it might be premature to say “well done”, any progress is good progress. Doing the right thing is what this is all about.

As citizens, we need to keep a watchful eye on this probe to ensure that nothing is left unturned and keep the pressure on.

Again, my offer to help build an open source solution for managing the Town Council systems remains.

Let me take this opportunity to flesh out a proposal of how this can be accomplished.

Proposal

  • We establish a Singapore Open Institute, funded by government and/or corporate sponsors.
  • SOI’s role will primarily be assessing all the open source solutions being developed around the world, especially for government (and education), and finding local uses for them. Likewise, local public sector agencies can seek SOI’s help in creating open source solutions.
  • SOI will be the trusted agency from which public sector entities seek advice and clearance for projects they want to undertake.
  • SOI will also create a Public Sector Software Exchange (PSX). The PSX will be open to anyone, anywhere, to contribute to as well as to consume code from. All code in the PSX could be under the GPLv3 or Apache License v2 or something Singapore-branded, like the EU open license. The PSX will also host SMEs, start-ups and individuals who can provide solutions. Parts of the Instruction Manual will have to be amended as needed to accommodate this.
  • SOI will also be the entity to which requests for exemption (RFEs) must be submitted by public sector agencies before going for closed source products. RFEs will have an expiry period and will be specific to a project.
  • SOI will also be the catalyst in creating and running programming contests, hack-a-thons etc (both with open source software and hardware). This is principally to encourage as many people to learn coding and build solutions.
  • Mindef, Police, SCDF and security related agencies are exempted from SOI but are strongly encouraged to create an equivalent of forge.mil.
  • SOI will also be the thought leader for Open Data, Open Source, Open Hardware and Open Standards.

It is an idea whose time has come for Singapore to act on, Mr Prime Minister.

Let’s do the right thing.

It’s not a contest per se, it’s a Sahana-moment!


So, my 3 am post from January 3rd 2013 is now on http://www.tremeritus.com – probably not a good thing, but then this is the way things move.

For what it’s worth, going by the comments on that post, this is not about scoring points against the Coordinating Chairman or the PAP or the WP. It is about highlighting the facts in a way that is clearer and not wrapped up in words and, more importantly, about offering a better way to do things for the betterment of this country.

At the expense of being ridiculed for stating the obvious, all the information and analysis done at 3 am on January 3rd 2013 that is in my original post is from that one media release put out by the Coordinating Chairman on January 2, 2013.

There is confusion about the various issues, which sadly are related, albeit tangentially.

Let me try to give a map of the issues that are being looked at.

a) The Ministry of National Development put out a Town Council Management Report for 2012 on December 14, 2012. Of the 15 town councils, all except the Aljunied Hougang Town Council scored green in “S&CC Arrears Management – examines the extent of Town Councils’ S&CC arrears that residents have to bear”. AHTC is the only non-PAP Town Council.

b) Because of that red score, the question arose as to what happened? To that extent, the AHTC released their comments.

c) It then was known to all of us that there was a company, Action Information Management Pte Ltd, that was providing the IT solutions to the town councils.

d) That was when the issue blew up with regard to who AIM is, why this company got to do this business, how they came to own the IT system, etc.

So, there are two chunks of issues:

1) The poor performance from the Town Council Management Report 2012 perspective of Aljunied Hougang Town Council

2) Who is this AIM and what is their role in all of this?

Both are important issues. I am in no position to comment on the first point.  That is for the AHTC to address to the satisfaction of the residents of AHTC as well as us Singaporeans.

My interest centres on the second point. As a computing professional who has been in this industry since 1982, this interests me personally. I am also an advocate of using and growing the use of open source technologies, especially in the public sector. The Town Councils are public sector organizations. It pains me to see good money being thrown at IT solutions only for the vendor(s) to obsolete them in a relatively short time and get the customer to pay up again and again. This becomes even more acute with public sector IT spending. It is your and my tax dollars that get spent wastefully.

Sure, there was a time when open source solutions and frameworks did not quite provide good alternatives to address varied IT needs. But that was a long, long time ago. Today open source is so prevalent in every nook and corner that there is no longer any justifiable reason not to consider open source first for any IT need, especially in government and the public sector.

People who know me will have heard the repeating groove that I have become: we need, at least in Singapore, an official government policy of open source FIRST for all public sector IT procurement, with government agencies filing justifications for exemptions if they want to go with a proprietary solution, and these exemptions have to be public knowledge.

Why is that needed? It is because monies spent by publicly funded agencies especially in reusable technologies like software, should not be wasted and locked away in some proprietary solution.

I am not suggesting that open source solutions don’t come at a price. They do. They will need to be supported (as any software does, open or otherwise). But the huge upside when used in the public sector is that the solution can be worked on, enhanced and re-factored by entities that the public sector organizations could engage. This grows the local, domestic IT sector. It grows it in a way that benefits the local economy and the SMEs, who then get opportunities to become conversant in domains that would otherwise be hard to get into. With the code being open, anyone can contribute, but, and this is the part most people miss, you STILL NEED commercially contracted support.

This opens up opportunities to SMEs in Singapore to take up the various solutions to manage and maintain and gain expertise and in the process begin expanding outside Singapore as well.

Eight years ago, as a reservist SCDF officer, I was mobilized to support SCDF’s Ops Lion Heart to help with Search and Rescue after the 2004 Boxing Day Indian Ocean tsunami. My country called me to serve at a time of need and I put on my uniform and was on the ground in Banda Aceh for about two weeks.

The lesson I learned then was that in a disaster situation, the various international agencies and military/civil defense forces on the ground had very little common technology (other than walkie talkies) that could be used to coordinate the work. We, the SCDF, had our comms equipment (an Inmarsat VSAT terminal, satellite phones and GPS devices) but, other than that, nothing else to interface with the other forces on the ground. Why? Because each of those entities had their own proprietary software tools to work with. At a time of massive natural disaster, we as rescuers were not assisted by the technologies because of vendor lock-in.

Out of that disaster came Project Sahana, put together by Sri Lankan open source developers. Sahana is now a UN-sanctioned tool for disaster management.

Why do I bring this up? Because it seems that we are heading to a Sahana-moment in Singapore. Public sector IT services should be decoupled from political parties.  Public sector IT solutions must open up the source code so that there are no opportunities for being taken for a ride.

So, to draw back to the beginning. Mr Coordinating Chairman, this is not a contest per se. This is a genuine offer to help us, the collective us, to do the Right Thing.

Why Open Standards and Open Source Matter in Government


I have offered the powers that be (TPTB) running the various Town Councils in Singapore an opportunity for the open source community to help build an application to manage their respective towns, following the unfolding fiasco around their current software solution, which is nearing end of life.

I am not surprised to hear comments and even SMS texts from friends who say that I am silly to want to offer to create a solution using open source tools. I can only attribute that to their relative lack of understanding of how this whole thing works and how we can collectively build fantastic solutions for the common good of society not only in Singapore but around the world.

I work for a company called Red Hat. Red Hat is a publicly traded company (RHT on NYSE) and is a 100% pure play open source company. What Red Hat does is to bring together open source software and make it consumable for enterprises. Doing that is not an easy thing. A lot of additional engineering and qualifications have to go into it before corporates and enterprises feel confident to deploy it. Red Hat has been successful in doing all of that because of the ethos of the company in engaging with open source developers (and hiring them as full time employees where appropriate) so that we can help the world gain and use better and higher quality software for everything.

That means that in taking open source software, Red Hat has to ensure that improvements and enhancements done are put back out as well to benefit everyone else and at the same time, at a price, provide a service to enterprises that want to use these tools but also want accountability, support, continued innovation etc. That is the Red Hat business model. We are the corporate entity that enterprises deploying open source tools look to for sanity.

Naturally, everything we create is available to anyone else, including our competition, and, yes, we can be beaten at our own game. That’s the best part. The fact that we can be challenged by others with what we helped create is a fantastic situation to be in as it forces us to constantly innovate (and in the open) and show how we are a responsible open source community member while giving tremendous value to enterprises.

It is in that spirit that I made the offer to help form a team of open source developers in Singapore to create the management system software for the town councils.  Certainly, when the software is built and deployed, the town councils would need to have competent support and there is nothing stopping any of the IT SMEs in Singapore picking up that opportunity. This gives the Town Councils significant advantage in choosing vendors to support their needs while keeping the innovation forthcoming because the code is open.

Here’s an article in an IT publication in which I was interviewed about open source and CIOs – yeah, self promotion :-). But here’s a better article about how prevalent open source is in the US government as well (yes, Gunnar is a colleague of mine).

So, the offer to build an open source solution is genuine and sincere. It is not for me to make money out of it per se, but to foster a situation that will create even more opportunities for others to actively participate in creating fantastic open source solutions, not only for the Singapore public sector, but for the world.

I hope this offer is taken up seriously by TPTB including parts of IDA and MND. And for the record, this offer has nothing to do with Red Hat.

My Conscience Is Bugging Me


I cannot let the media statement put out by the “Coordinating Chairman of the PAP Town Councils” regarding the sale of the town council management software system to an ex-PAP-MP-owned company be left alone without shredding it apart. The media statement appeared on January 2, 2013 on the PAP website.

I have italicised and indented the paragraphs from the media statement and my response follows each italicised segment.

Statement by Teo Ho Pin on AIM Transaction

On 28 December 2012, I issued a press release in response to Ms Sylvia Lim’s statement on the website of the Aljunied-Hougang Town Council. Ms Lim had made various assertions in her statement. However, her statement was made without citing the relevant facts. I now make this further statement to set out fully the relevant facts.

I am the co-ordinating Chairman of all the PAP-run Town Councils (“the TCs”). The PAP TCs meet regularly and work closely with one another. This allows the TCs to derive economies of scale and to share best practices among themselves. This improves the overall efficiency of the TCs, and ensures that all the PAP TCs can serve their residents better.

In 2003, the TCs wanted to harmonise their computer systems. Hence, in 2003, all the TCs jointly called an open tender for a vendor to provide a computer system based on a common platform. NCS was chosen to provide this system. The term of the NCS contract (“NCS contract”) was from 1 August 2003 to 31 October 2010. There was an option to further extend the contract for one year, until 31 October 2011.

In 2010, the NCS contract was going to expire. The TCs got together and jointly appointed Deloitte and Touche Enterprise Risk Services Pte Ltd (“D&T”) to advise on the review of the computer system for all the TCs. Several meetings were held with D&T.

After a comprehensive review, D&T identified various deficiencies and gaps in the system. The main issue, however, was that the system was becoming obsolete and unmaintainable. It had been built in 2003, on Microsoft Windows XP and Oracle Financial 11 platforms. By 2010, Windows XP had been superseded by Windows Vista as well as Windows 7, and Oracle would soon phase out and discontinue support to its Financial 11 platform.

From what is mentioned above, D&T noted deficiencies and gaps in the system, which it seems was only about parts of the application infrastructure becoming obsolete and unmaintainable. It would be good to know what other gaps and deficiencies were reported.

It is now clear that the application that was developed ran on the system from Oracle Corporation, called “Oracle Financials 11”. It also is clear that, possibly both the server and client OS was Microsoft Windows XP. I do wonder how that original application was spec’ed out?

We have here a classic case of all of the component systems needed to run an application reaching end of life or becoming unsupported even as the application could still be used.

That, in itself, is not a big deal. Forced obsolescence is the norm in the IT industry. It is not the best state of affairs, but it is what it is.

The TCs were aware of and concerned about the serious risks of system obsolescence identified by D&T, and wanted to pre-empt the problem. In addition, as the NCS Contract was about to expire, they sought a solution which would provide the best redevelopment option to the TCs, and in the interim would allow them to continue enjoying the prevailing maintenance and other services.

Fair enough.

As Coordinating Chairman of the TCs, I had to oversee the redevelopment of the existing computer system for all TCs. It was clear to me that the existing computer software was already dated. The NCS contract would end by 31 October 2011 (if the one year extension option was exercised). However, assessing new software and actually developing a replacement system that would meet our new requirements would take time, maybe 18-24 months or even longer. We thus needed to ensure that we could get a further extension (beyond October 2011) from NCS, while working on redevelopment options.

Not sure why the preceding was needed, for it is a restatement of the first discussion.

D&T also raised with the TCs the option of having a third party own the computer system, including the software, instead, with the TCs paying a service fee for regular maintenance. This structure was not uncommon.

D&T may well say that having a “third party own the computer system” is a common structure, but it is not clear how that would help with a rapidly aging computer system. It sounds incredible for D&T to suggest it.

We decided to seriously consider this option. Having each of the 14 individual TCs hold the Intellectual Property (IP) rights to the software was cumbersome and inefficient. The vendor would have to deal with all 14 TCs when reviewing or revising the system. It would be better for the 14 TCs to consolidate their software rights in a single party which would manage them on behalf of all the TCs, and also source vendors to improve the system and address the deficiencies.

This paragraph contains the biggest amount of doublespeak and warped sense of value if there ever was one. What does it mean that each of the TCs holds the “Intellectual Property”?

It was stated that the reason for creating the application was (from above) “(t)his allows the TCs to derive economies of scale and to share best practices among themselves. This improves the overall efficiency of the TCs, and ensures that all the PAP TCs can serve their residents better”, which gives the lie to “(t)he vendor would have to deal with all 14 TCs when reviewing or revising the system”.

It would seem that whatever was built ended up being 14 versions of the application and not one. How does reviewing and revising the system become any more efficient by “consolidat(ing) their software rights in a single party”? Humbug.

If that indeed was a valid reason, all the TCs needed to do was to agree to trust one TC to be the custodian and decision maker. How is each giving up their ownership to an external party any better?

I suspect the Coordinating Chairman is pulling a fast one here.

The TCs thus decided to call a tender to meet the following requirements:

1. To purchase the software developed in 2003, and lease it back to the TCs for a monthly fee, until the software was changed;

2. To undertake to secure extensions of the NCS contract at no extra cost i.e. take on the obligation to get an extension on the existing rates, until the TCs obtained new or enhanced software. This was put in to protect the financial position of the TCs; and

3. To work with the TCs to understand their enhancement and redevelopment needs and look for a suitable vendor to provide these upgrades.

If you look at the actual tender notice, all it states is that they are selling a “developed application software” and that the tenderer should be an “experienced and reputable company with relevant track record”.

The devil is in the details, which are only available if you fork out $214.

So, the PAP TCs wanted to sell out to someone who fits their criteria of an experienced and reputable company with a RELEVANT track record. The tender advertisement sounds very thin and vague.

Under the tender, the TCs sold only the IP in the old software. The ownership of the physical computer systems remained with the individual TCs. We wanted to sell the IP rights in the old software because it had limited value and was depreciating quickly. Had we waited until the new system was in place, the IP to the superseded old software would have become completely valueless.

Ah huh! They wanted to monetize their “IP” as it were. Time was running out. Not sure who else on the planet would want their “IP”, but they must monetize it.

The TCs advertised the tender in the Straits Times on 30 June 2010. Five companies collected the tender documents. These were CSC Technologies Services Pte Ltd, Hutcabb Consulting Pte Ltd, NCS, NEC Asia Pte Ltd and Action Information Management Pte Ltd (“AIM”).

I am sure four of the companies listed above, after wasting the $214, are run by level-headed management who realized that this tender was a huge scam and wanted no part in it and so decided not to respond.

I am aware that NCS considered bidding but in the end, decided not to do so as it was of the view that the IP rights to software developed in 2003 on soon to be replaced platforms were not valuable at all.

Another company withdrew after it checked and confirmed that it was required to ensure renewal of the NCS contract without an increase in rates. The company did not want to take on that obligation. The others may also have decided not to bid for similar reasons.

In the end, only AIM submitted a bid on 20 July 2010.

Does the Coordinating Chairman really think that NCS would have fallen into the scam as well? They would have known that there really is nothing in the application that they could “salvage”, having built it in the first place, let alone helping their customer monetize it.

We evaluated AIM’s bid in detail. First, AIM’s proposal to buy over the software IP would achieve our objective of centralising the ownership of the software, consistent with the model suggested by D&T.

This is circular logic which needs no further response.

AIM was willing to purchase our existing software IP for S$140,000, and lease it back at S$785 per month from November 2010 to October 2011. The lease payments to AIM would end by October 2011, with the expiration of the original NCS contract. Thus after October 2011, the TCs would be allowed to use the existing software without any additional lease payments to AIM, until the new software was developed.

Let’s do the math:

                              14 PAP Town Councils                     AIM
Contract Award                $140,000 (perhaps each TC got $10K)      ($140,000)
Lease (Nov 2010 – Oct 2011)   ($785 * 14 * 12 => $131,880)             $131,880
Nett                          $8,120

This meant that the TCs expected to gain a modest amount (about S$8,000) from the disposal of IP in the existing software.

So, the so called “Intellectual Property” is really only worth $8,120.

Second, AIM was willing to undertake the risks of getting an extension of the NCS contract with no increase in rates. This was the most important consideration for us, as it protected the TCs from an increase in fees.

And AIM would have the needed clout to negotiate with NCS – because they own the software – clout that the 14 PAP Town Councils, being the original customer of NCS, could not muster? Is that really true, Mr Coordinating Chairman? Are you saying that you cannot do better than AIM against NCS? Say it ain’t so, Mr Coordinating Chairman.

Third, we were confident that AIM, backed by the PAP, would honour its commitments.

Wow, the PAP link. That’s the magic bullet. Cronyism at its best. “Backed by the PAP” because the three directors are former PAP MPs, or because the company is funded by the PAP? Perhaps the other companies who picked up the tender document realized that they are not a PAP-{owned, funded} entity and would therefore not win.

That statement alone reeks of contempt for the free market, the principles of transparency, meritocracy and everything we hold dear in this country.

Are you, Mr Coordinating Chairman, also saying that AIM has such deep pockets that they can withstand the possibility of NCS not agreeing? The directors of AIM have been reported as not taking director fees. That’s noble of them. It does look like the PAP Town Councils found their shining white knight in AIM.

Given the above considerations, AIM had met the requirements of the tender on its own merits. We assessed that the proposal by AIM was in the best interests of the TCs, and thus awarded the tender to AIM.

Of course! AIM has to be trustworthy and reputable given their PAP pedigree. Of course! D’oh!

Under the contract with AIM, the TCs could terminate the arrangements by giving one month’s notice if the TCs were not satisfied with AIM’s performance. Similarly, AIM could terminate by giving one month’s notice in the event of material changes to the membership of a TC, or to the scope and duties of a TC, like changes to its boundaries. This is reasonable as the contractor has agreed to provide services on the basis of the existing TC- and town-boundaries, and priced this assumption into the tender. Should this change materially, the contractor could end up providing services to a TC which comprises a much larger area and more residents, but at the same price.

What a lot of nonsense this is. It is unbelievable that the Coordinating Chairman can include a poison pill clause in the contract for when the “boundaries of the Town Councils change”. I believe the boundaries of the West Coast Town Council changed after the May 2011 elections. I don’t see AIM doing anything about terminating that contract (correct me if I am wrong, Mr Coordinating Chairman).

How do changes to the “larger area and more residents” materially change the way the software works? Is Mr Coordinating Chairman taking the taxpayers and constituents of the PAP Town Councils to be daft? Wait a minute, a former PAP prime minister says we are (search for daft in that link)!

Since winning the tender, AIM has negotiated two extensions of the NCS contract until April 2013, at no increase in rates. The first extension was from November 2011 to October 2012, and the second from November 2012 to April 2013. The TCs received a substantial benefit in terms of getting the extensions from NCS beyond the original contract period, without any increase in prices.

Now, this is confusing. But I shall hold back for more juicy parts following.

What is not known now is what maintenance charges NCS levied as part of their original contract with the PAP Town Councils.

AIM has also been actively working with several vendors to explore new software options and enhancements for the TCs. AIM has identified software from a number of possible vendors, and has invited them to make presentations to the TCs in order for a suitable option to be chosen.

Are any of these open source solutions? Or is this going to be another closed, proprietary system that will face the same issues as the older one? Why are the Town Councils (via AIM) not looking at maximizing the tax dollars that go into this by using open source solutions?

My offer to help build a fully open source solution remains.

Following the expiry of the initial lease arrangement for the software from AIM on 31 October 2011, no further lease payments for the software were made to AIM. During the period of its contract extension from November 2011 to April 2013, the management fee payable to AIM for the whole suite of services it provided was S$33,150, apart from what was payable to NCS for maintenance. In the end, inclusive of GST, each TC paid slightly more than $140 per month for AIM to ensure continuity of the existing system, secure the maintenance of this system at no increased costs, and identify options for a new system to which the TCs could migrate.

We entered into the transaction with AIM with the objective of benefitting the TCs. Over the last two years, the intended benefits have been realised. There is thus no basis to suggest that the AIM transaction did not serve the public interest, or was disadvantageous to residents in the TCs.

Bingo! The smoking gun perhaps?

So, AIM is not charitable and is asking the TCs to pay from November 2011 till April 2013. This is what the math looks like:

                              14 PAP Town Councils                     AIM
Contract Award                $140,000 (perhaps each TC got $10K)      ($140,000)
Lease (Nov 2010 – Oct 2011)   ($785 * 14 * 12 => $131,880)             $131,880
Nett                          $8,120
Nov 2011 – Apr 2013           ($33,150)                                $33,150
Nett                          ($25,030)

So, contrary to the rationale of “monetizing the IP” (a load of crap), the 14 PAP Town Councils will incur a loss of $25,030 in this deal.
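The arithmetic above can be checked in a few lines of Python, using the figures quoted in the media statement and assuming, as my tables do, that the S$785 monthly lease was per TC:

```python
# Sanity-checking the figures quoted in the media statement.
# Assumption (as in the tables above): the S$785/month lease is per TC.
sale_price = 140000                     # S$: what AIM paid the 14 TCs for the software IP
lease = 785 * 14 * 12                   # S$785/month x 14 TCs x 12 months (Nov 2010 - Oct 2011)
nett_phase_1 = sale_price - lease       # what the TCs were ahead by after the lease-back

mgmt_fee = 33150                        # S$ payable to AIM for Nov 2011 - Apr 2013
nett_overall = nett_phase_1 - mgmt_fee  # negative means a nett loss to the TCs

print(lease, nett_phase_1, nett_overall)  # 131880 8120 -25030
```

Whichever way the lease is interpreted, the management fee alone dwarfs the supposed gain from “monetizing the IP”.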

This amount is on top of the cost of the D&T report and the “apart from what was payable to NCS for maintenance.”

It does seem that the PAP, having been in power for over 50 years, has found many creative means to “misdirect” tax monies.

I am saddened to have done this analysis.


Please, Mr Coordinating Chairman, please, come clean. You made a mistake. You thought you got a good deal. But that was not what it was. You have been drinking from the PAP water fountain for so long that you cannot see what is right and what is wrong. Your “media statement” is so full of holes that we can drive an Airbus A380 through it with room to spare.

Again, my offer to form a team of open source developers to build a solution that can benefit not only the town councils but anyone else remains.

Software for Public Sector Applications


The ongoing egg-in-the-face of the PAP over the “tender” (thanks to Alex for posting it via an anonymous source) awarded to AIM over the acquisition of a piece of software created for the use of the Town Councils is really disappointing.

Looking at the Today Online story, it would seem that Mr Teo and Mr Das have a lot of explaining to do.

Here’s an example of how proprietary software companies abuse their customers. If you happen to have acquired a new laptop that came with Windows 7 Starter installed, when you set it up you will be presented with a set of terms and conditions. Most people will just click OK and accept the terms and conditions without reading a word. But in this case, if you did not read anything, you’d have missed a juicy bit of restriction.

Section 8 on Page 7 of the Software License Terms says:

8. SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights to use the features included in the software edition you licensed. The manufacturer or installer and Microsoft reserve all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement. In doing so, you must
comply with any technical limitations in the software that only allow you to use it in certain ways. You may not
· work around any technical limitations in the software;
· customize the desktop background;
· …;

Isn’t it amazing that even though you thought you bought that piece of software (according to their rules, it is not sold, only licensed), you are NOT allowed to change the desktop background? Changing it would break the terms and conditions of Windows 7 Starter. Wow.

It sure sounds like our friends at the PAP-run Town Councils and AIM took a chunk out of the proprietary software “let’s screw and milk the customer” book. Only this time, the customer is the tax-paying Singapore public.

My offer to the Town Councils, especially Aljunied Hougang Town Council, to help them build a fully open source solution remains.

A helper note for family and friends about your connectivity to the Internet from July 9 2012


This is a note targeted at family and friends who might find that they are not able to connect to the Internet from July 9, 2012 onwards.

This only affects those whose machines are running Windows or Mac OSX and have a piece of malware called DNSChanger installed. DNSChanger modifies a key part of the way a computer discovers other machines on the internet (the Domain Name System, or DNS).

Quick introduction to DNS:

For example, you want to visit the website http://www.cnn.com. You type this into your browser and, magically, the CNN website appears in a few seconds. The way your browser figured out how to reach the http://www.cnn.com server was by doing the following:

a) The browser took the http://www.cnn.com domain name and did what is called a DNS lookup.

b) What it would have received in the DNS lookup is a mapping of www.cnn.com to a bunch of numbers. In this case, it would have received something like:

www.cnn.com.        60    IN    A    157.166.255.18
www.cnn.com.        60    IN    A    157.166.255.19
www.cnn.com.        60    IN    A    157.166.226.25
www.cnn.com.        60    IN    A    157.166.226.26

c) The numbers you see in the lines above (157.166.255.18, for example) are the Internet Protocol (IP) numbers of the servers on which http://www.cnn.com resides. You notice that there is more than one IP number. That is for managing requests from millions of systems and not having to depend on only one machine to reply. This is good network architecture. For fun, let’s look at http://www.google.com:

www.google.com.      59    IN    CNAME    www.l.google.com.
www.l.google.com.    59    IN    A    173.194.38.147
www.l.google.com.    59    IN    A    173.194.38.148
www.l.google.com.    59    IN    A    173.194.38.144
www.l.google.com.    59    IN    A    173.194.38.145
www.l.google.com.    59    IN    A    173.194.38.146

http://www.google.com has 5 IP #s associated with it, but you notice something that says CNAME (short for Canonical Name) in the first line. What that means is that www.google.com is an alias for www.l.google.com, which in turn has the 5 IP #s associated with it.

d) The beauty of this is that in a few seconds, you got to the website that you wanted to without remembering the IP # that is needed.
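The CNAME-then-A chain in steps (b) and (c) can be sketched as a toy lookup in Python. This is an illustration only: the record data is hard-coded from the output above, whereas a real resolver queries DNS servers over the network.

```python
# Toy illustration of following a CNAME chain until A records are found.
# Record data hard-coded from the dig output above; not a real DNS client.
RECORDS = {
    "www.google.com.": ("CNAME", ["www.l.google.com."]),
    "www.l.google.com.": ("A", ["173.194.38.147", "173.194.38.148",
                                "173.194.38.144", "173.194.38.145",
                                "173.194.38.146"]),
}

def resolve(name, records):
    """Follow CNAME records until a set of A records (IP #s) is found."""
    seen = set()
    while True:
        if name in seen:                 # guard against CNAME loops
            raise ValueError("CNAME loop at " + name)
        seen.add(name)
        rtype, rdata = records[name]
        if rtype == "A":
            return rdata                 # the IP #s the browser can connect to
        name = rdata[0]                  # CNAME: chase the canonical name

print(resolve("www.google.com.", RECORDS)[0])  # 173.194.38.147
```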

Why is this important? If you have a cell phone, how do you dial the numbers of your family and friends? Do you remember their phone numbers by heart? Not really, or at least not anymore. You probably know your own number and those of a small close group (your home, your work, your children, spouse, siblings). Even then, their names are in your contact book, and when you want to call (or text) them, you just punch in their names and your phone will look up the number and dial out.

The difference between your cell phone directory and the DNS is that you control what is in your phone directory. So, a name like “Wife” in your phone could point to a phone number that is very different from a similar name in your friend’s phone directory. That is all well and good.

But on the global Internet, we cannot have name clashes and that is why domain names are such hot things and people have snapped up pretty much a very large chunk of names during the dot.com rush in the late 1990s.

Now on to the issue at hand

So, what’s that got to do with this alarmist issue of connecting to the Internet from July 9, 2012?

Well, it has to do with the fact that there was a piece of software – malware in this case – that got added to machines running Windows and Mac OSX. In all computers, the magic to do the DNS lookup is maintained by a file which contains information about which Domain Name Server to query when presented with a domain name like www.cnn.com.

For example, on my laptop (which runs Fedora), the file that directs DNS lookups is called /etc/resolv.conf. This is the same on Mac OSX, and I believe there is something similar in the Windows world as well. Fedora and Mac OSX share a common Unix heritage and so many files are in common.

The contents of my /etc/resolv.conf file are:

# Generated by NetworkManager
domain temasek.net
search temasek.net lan
nameserver 192.168.10.1

The file is automatically generated when I connect to the network and the crucial line is the one that reads “nameserver”. In this case, it points to 192.168.10.1, which happens to be my FonSpot wireless access point. But what is interesting is that my FonSpot access point is not a DNS server per se. In the setup of the FonSpot, I’ve configured it to forward domain name lookups to Google’s public DNS servers, whose IP #s are 8.8.8.8 and 8.8.4.4.
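As an illustration, here is a minimal Python sketch that pulls the nameserver entries out of an /etc/resolv.conf-style file (simplified; real resolver libraries handle many more options):

```python
# Minimal parse of an /etc/resolv.conf-style file to extract the
# nameserver entries. Simplified sketch; not a full resolv.conf parser.
def nameservers(resolv_conf_text):
    servers = []
    for line in resolv_conf_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers

sample = """# Generated by NetworkManager
domain temasek.net
search temasek.net lan
nameserver 192.168.10.1
"""
print(nameservers(sample))  # ['192.168.10.1']
```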

Huh? What does this mean? Simply put, when I type http://www.cnn.com into my browser, the browser asks the nameserver 192.168.10.1 (the FonSpot) to look up that name’s IP #; the FonSpot in turn asks 8.8.8.8 for the answer and passes it back. Eventually, when an IP # is found, my browser uses that IP # and sends a connection request to that site. All of this happens in milliseconds, and when it all works, it looks like magic.

What if you don’t get to the site?  What if the entry in the /etc/resolv.conf file pointed to some IP # belonging to a malicious entity that wanted to “hijack” your web surfing?  There are legitimate uses for this kind of redirection. For example, when you connect to a public wifi access point (like Wireless@SG), you will initially get a DNS nameserver entry that belongs to the wifi access provider. Once you have successfully logged in to that access point, your DNS lookups will be properly directed. This technique is called a “captive portal”. My FonSpot is a captive portal, btw.

The issue here is that machines infected with the DNSChanger malware have their DNS lookups hijacked and directed elsewhere.  See this note by the US Federal Bureau of Investigation about it.

It appears that the DNSChanger malware had set up a bunch of IP #s to maliciously redirect all access to the Internet. If your /etc/resolv.conf file has nameserver entries that fall in the following ranges:

85.255.112.0 to 85.255.127.255

67.210.0.0 to 67.210.15.255

93.188.160.0 to 93.188.167.255

77.67.83.0 to 77.67.83.255

213.109.64.0 to 213.109.79.255

67.28.176.0 to 67.28.191.255

your machine is affected.
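If you want to script the check, here is a sketch (bash) that flags nameserver entries falling in the ranges above. It runs against a sample file so it is self-contained; point it at your real /etc/resolv.conf instead.

```shell
# Convert a dotted-quad IP # to an integer so ranges can be compared.
ip2int() {
  local IFS=. ; set -- $1
  echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 ))
}

# Return success if the IP # falls inside any of the rogue DNSChanger ranges.
in_rogue_range() {
  local ip=$(ip2int "$1") lo hi
  while read lo hi; do
    [ "$ip" -ge "$(ip2int $lo)" ] && [ "$ip" -le "$(ip2int $hi)" ] && return 0
  done <<EOF
85.255.112.0 85.255.127.255
67.210.0.0 67.210.15.255
93.188.160.0 93.188.167.255
77.67.83.0 77.67.83.255
213.109.64.0 213.109.79.255
67.28.176.0 67.28.191.255
EOF
  return 1
}

# Check each nameserver line (sample file here; use /etc/resolv.conf for real).
printf 'nameserver 85.255.112.23\nnameserver 8.8.8.8\n' > /tmp/sample-resolv.conf
awk '/^nameserver/ {print $2}' /tmp/sample-resolv.conf | while read ns; do
  if in_rogue_range "$ns"; then echo "$ns AFFECTED"; else echo "$ns ok"; fi
done
```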

Here’s a test I did with the first of those IP #s on my Fedora machine:

[harish@vostro ~]$ dig @85.255.112.0 www.google.com

; <<>> DiG 9.9.1-P1-RedHat-9.9.1-2.P1.fc17 <<>> @85.255.112.0 www.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34883
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 4, ADDITIONAL: 5

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.google.com.            IN    A

;; ANSWER SECTION:
www.google.com.        464951    IN    CNAME    www.l.google.com.
www.l.google.com.    241    IN    CNAME    www-infected.l.google.com.
www-infected.l.google.com. 252    IN    A    216.239.32.6

;; AUTHORITY SECTION:
google.com.        32951    IN    NS    ns2.google.com.
google.com.        32951    IN    NS    ns4.google.com.
google.com.        32951    IN    NS    ns3.google.com.
google.com.        32951    IN    NS    ns1.google.com.

;; ADDITIONAL SECTION:
ns1.google.com.        33061    IN    A    216.239.32.10
ns2.google.com.        33061    IN    A    216.239.34.10
ns3.google.com.        317943    IN    A    216.239.36.10
ns4.google.com.        33297    IN    A    216.239.38.10

;; Query time: 305 msec
;; SERVER: 85.255.112.0#53(85.255.112.0)
;; WHEN: Sun Jul  8 21:40:07 2012
;; MSG SIZE  rcvd: 242

Some explanation of what is shown above. “dig” (domain information groper) is a command that allows me, from the command line, to see what a domain’s IP address is. With the extra bit “@85.255.112.0”, I am telling the dig command to use 85.255.112.0 as my domain name server when getting the IP for the domain www.google.com. Currently 85.255.112.0 is being run as a “clean” DNS server by parties appointed at the FBI’s request.

Hence, what will happen on July 9th 2012 is that the arrangement, made at the FBI’s request, to serve replies when 85.255.112.0 is queried, will expire. Therefore the command I executed above on July 8th 2012 will not return a valid IP # from July 9th 2012 onwards. While the Internet itself will keep working, people whose systems have been compromised to point to the bad-but-made-to-work-OK DNS servers will find that they cannot seem to get to any site by using domain names. If they instead use IP #s, they can get to the site with no issue.

A quick way to check if your system needs fixing is to go to http://www.dns-ok.us/ NOW. If it reports OK, your system’s /etc/resolv.conf (or the equivalent for those still running Windows) is not affected.

See the announcement from Singapore’s CERT on this issue.

And it’s live now – SCO Open Server 5.0.5 running in a RHEL 6 KVM


As promised earlier, the final bits of getting an application that runs on the old hardware on to the VM are now all done.  I considered installing the app from scratch, but I really did not want to spend too much time trying to figure out all its nuances.  Since this is really an effort that would eventually see the app being replaced at some future date, I wanted to get it done easily.

So, over the last long weekend, I did the following:

a) Created a brand new VM running SCO Open Server 5.0.5 on the RHEL 6.2 machine. The specs of the VM are: 2GB RAM, 8GB disk, qemu (not kvm), i686, with the network card set to PC-Net and the video to VGA. These are the settings that let the installation of SCO in the VM complete.

b) Meanwhile on the old machine, I did a tar of the whole system – “tar cvf wholesystem.tar /”. This is probably not the best way to do it, but hey, I did not want to spend time just picking what I wanted and what I did not need from the old machine. The resulting “wholesystem.tar” file was about 2G in size.

c) Ftp’ed the wholesystem.tar file to the VM and did an untar of it on to the VM – “cd /; tar xvf /tmp/wholesystem.tar “. This resulted in a VM that could boot, but needed some tweaks.

d) The tweaks were:

  1. Changing the network card to reflect the VM’s settings
  2. Changing the IP#
  3. Disabling the mouse on the VM

e) SCO is msft-ish (or maybe msft learned it from SCO) in that the tool used to make the changes, “scoadmin”, will, after the changes are done, need the kernel to be rebuilt, which then necessitates rebooting the VM to pick up the new values.

f) Edited the /etc/hosts file to reflect the new IPs and added in the /etc/rc.d/8/userdef file a line to set the default route on the VM: route add default 192.1.2.5

The VM’s IP is 192.1.2.100 and in the /etc/resolv.conf file, the nameserver was set to 8.8.8.8 and 8.8.4.4 (Google’s public DNS)
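The clone-by-tar approach in steps b) and c) can be sketched as follows, using scratch directories in place of “/” so the sketch is safe to run. The real migration tarred “/” on the old machine and untarred the archive at “/” on the VM.

```shell
# Stand-in for the old SCO system's filesystem (illustrative content).
mkdir -p /tmp/oldsys/etc
echo "hostname=oldbox" > /tmp/oldsys/etc/config

# On the old machine: archive the whole tree.
tar -C /tmp/oldsys -cf /tmp/wholesystem.tar .

# Transfer wholesystem.tar to the VM (the post used ftp), then on the VM:
mkdir -p /tmp/newsys
tar -C /tmp/newsys -xf /tmp/wholesystem.tar

cat /tmp/newsys/etc/config    # the old system's files now live on the VM
```

After the untar, the VM boots the old system's userland, which is why only small tweaks (NIC, IP, mouse) were needed.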

Printing:

a) The old machine had two printers – an 80-column and a 132-column dot matrix printer – connected to its serial and parallel ports.  I did not want to deal with this for the VM, so I got hold of two TP Link PS110P print servers. What’s nice about these is that they are trivial to work with (they run Linux anyway); by plugging them into the printers (even the serial printer had a parallel port), both printers were on the network and printing from the SCO VM became trivial.

b) Configuring the SCO VM to print to the network printers was done using the rlpconf command. The TP Link print server has an amazing array of options; I picked the LPR option and the LPT0 and LPT1 device queues on the two TP Link print servers. While scoadmin has a printer settings section, for some reason the remote printers set up by it never quite worked.  In any case, rlpconf edits the /etc/printcap file to reflect the remote printers and that is all that is needed.  Here’s what /etc/printcap looked like after the rlpconf command was run:

cat /etc/printcap
# Remote Line Printer (BSD format)
#rhel6-pdf:\
#       :lp=:rm=rhel6:rp=rhel6-pdf:sd=/usr/spool/lpd/rhel6-pdf:
LPT0:\
        :lp=:rm=192.1.2.51:rp=LPT0:sd=/usr/spool/lpd/LPT0:
LPT1:\
        :lp=:rm=192.1.2.52:rp=LPT1:sd=/usr/spool/lpd/LPT1:

The IP #s and their respective print queues were set in the TP Link print servers.

c) So, once that was done, running lpstat -o all on the VM shows the remote printer status:

#lpstat -o all
LPT0:
lp1 is available ! (06,05,02,000000|01|448044|443364|04,02,02|8.2,8.3)
LPT1:
lp1 is available ! (03,02,03,000000|01|450384|445932|04,02,01|8.2,8.3)

Networking issues:

Initially, I had set up the VMs using the default networking setting for KVM.  The standard networking in KVM assumes that the VM is going to go out to the network and not run as a server per se. But this VM was going to be accessed by other machines (not just the RHEL 6 host) on the office LAN, so the right thing to do was to set up a bridged network instead of a NATed network. RHEL 6.2 does not, by default, have bridging set up and I think that needs to change. NATing is fine, but for the VM to be accessed from systems other than the host, additional firewall rules have to be set up if it is NATed, whereas on a bridge a one-liner iptables rule suffices: “iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT”.
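For reference, a bridged setup on RHEL 6 boils down to two ifcfg files. This is a sketch – the device names and IP # are illustrative, not the actual ones used here:

```ini
# /etc/sysconfig/network-scripts/ifcfg-br0  (the bridge itself)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.1.2.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0  (the physical NIC, enslaved to the bridge)
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
```

After restarting the network service, the VM's NIC is attached to br0 via virt-manager, and the VM appears on the office LAN like any other machine.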

I think the dialog box that sets up the VM via virt-manager should add an option asking if you need a bridged network. The option is there, but not obvious. So follow these instructions carefully – they work.

Well, that was it. SCO Open Server 5.0.5, with the application that was needed, is now running happily in a VM on a RHEL 6.2 machine, and printing goes over the network to a couple of print servers.

I must, once again, take my hat off to the awesome open source developers of KVM, QEMU, BOCHS etc for the wonderful way all the technologies have come together in a Linux kernel, fully supported by Red Hat in Red Hat Enterprise Linux. There is an enormous amount of value in all of this; even a premium subscription for this RHEL installation is a fraction of the true value derived. The mere fact that a 20th century SCO Open Server can now be made to run in perpetuity on a KVM instance is mind-boggling (even if Red Hat does not officially support this particular setup).

QED.

Fedora 17 before it is released


I decided to take the plunge and run Fedora 17 before it’s officially launched in May.  My system has been running Fedora 16 x86-64 since the launch last November and I must say that it has been solid – including the GNOME 3.x stuff.

What I did was the following:

a) Updated the system fully – “yum update -y”

b) Ensure that “preupgrade” is installed – “yum install preupgrade -y”

c) Run the “preupgrade” command and let it set the system up.  This last step could take a few hours depending on your Internet speed. This was exactly what I did in November as well when I went from Fedora 15 to Fedora 16.

When the preupgrade finally completed, I rebooted the machine, it went through the final install and, voila, all was good. The key apps I use daily – mutt, msmtp, Firefox, Chromium, x-chat, Thunderbird, vlc, twinkle, calibre, virt-manager – all worked as before. Or so I thought.

For what it’s worth, all of them work with the exception of vlc, which will play ogg and mp3 but fails to play flv and mp4 (complaining that it needs H.264 codecs). I thought the codecs should be there, but I guess something might not have been properly updated.  Oh well. Not the end of the world really. Everything else works.

The version of the kernel right now is:

[harish@vostro ~]$ uname -a
Linux vostro.sin.redhat.com 3.3.4-1.fc17.x86_64 #1 SMP Fri Apr 27 18:39:03 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

I did, however, encounter an interesting problem when I rebooted the machine into the newest kernel – my wifi did not come on. For a moment I thought something broke. I rebooted the machine from a LiveUSB running Fedora 16 and the wifi worked, so it was not a hardware issue.  What I had to do was use the “Fn + F7” key combination (which toggles the wireless on the machine) and, bingo, the wifi came back on.  My machine is a Dell Vostro v13.

[harish@vostro ~]$ lspci
00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03)
00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 03)
00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
00:1f.0 ISA bridge: Intel Corporation ICH9M-E LPC Interface Controller (rev 03)
00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
07:00.0 Network controller: Intel Corporation WiFi Link 5100

and

[harish@vostro ~]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 003: ID 10f1:1a1e Importek Laptop Integrated Webcam 1.3M
Bus 003 Device 006: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)
Bus 003 Device 007: ID 413c:8161 Dell Computer Corp. Integrated Keyboard
Bus 003 Device 008: ID 413c:8162 Dell Computer Corp. Integrated Touchpad [Synaptics]
Bus 003 Device 009: ID 413c:8160 Dell Computer Corp. Wireless 365 Bluetooth

Let’s hope that by the time Fedora 17 is Generally Available, this little toggle is long gone.

Microsoft’s “open technology” spinoff


While I would like to stand up and cheer Microsoft for setting up “Microsoft Open Technologies, Inc”, I am not convinced that they are doing this in good faith.

Microsoft’s founder, Bill Gates, said in 1991 – 21 years ago – that

“If people had understood how patents would be granted when most of today’s ideas were invented, and had taken out patents, the industry would be at a complete standstill today.”

only to have all of that conveniently forgotten years later when they themselves started patenting software and suing people all over. These are the kinds of actions taken by a company that cannot innovate or create anything new and valuable.  It is also the same company whose CEO goes around saying things like:

“Linux violates over 228 patents, and somebody will come and look for money owing to the rights for that intellectual property,”

Too many of these statements and blatant lies come from a company that has lost its ethical compass. This is the same company that is now pro-CISPA even after backing down from being pro-SOPA. Do read this statement from the EFF about what’s wrong with CISPA.

Never mind all that. Clearly, Microsoft sees money in FOSS. It is business as usual for them in creating their new subsidiary.

If they are really serious about FOSS being part of their long-term future, I am sure they will be reaching out to many people in the FOSS world to join them. Thus far, all I have seen is a redeployment of their internal, dyed-in-the-wool MSFTies.

I think Simon’s commentary on the plausible reasons for Microsoft setting this new entity up is a good set of conspiracy theories, but I think Simon gives Microsoft too much credit.

Exposing localhost via a tunnel


I came across this tool, localtunnel, that offers a way to expose a localhost-based webserver (for example) to the internet. It is a reverse proxy that reaches your machine way behind a firewall by bouncing off an externally reachable host running localtunnel.

I tested it out on my Fedora 16 laptop (all I had to do was to run “gem install localtunnel” as I had ruby already installed).

I like the idea, but am not entirely convinced about the security exposure.

What does it take?


I am an organizer of a programming contest that will be using some really cool technologies (HTML5, Python, OpenShift, just to name three). This will be a contest open to anyone but we would need whoever takes part to be in Singapore for the duration of the contest.

This contest will also involve children 12-years and below (in their own category using Scratch as the tool) as well as an open category covering everyone else.

This contest covers the entire gamut – children (the next generation of coders), cool technologies, innovation, and solving society’s problems.

What I would like to do is find a way to have the President of the Republic of Singapore be the guest of honour and present prizes to the winners when the contest is over. President Tony Tan, in his earlier career, was a champion of education (as Minister for Education), headed up the National Research Foundation (as a champion of innovation and entrepreneurship) and is the current patron of the Singapore Computer Society.

My challenge is that everyone I talk to says that “inviting the president is hard; too much protocol; too many security related issues etc”. Really? Is it so hard to invite the head of state to be the chief guest of an event focusing on things that he had championed earlier in his career?

Please tell me how I can cut to the chase and get him as the Chief Guest. Anyone? I will send an email to him directly, but I shall put this request out in public now.

The Value of being Heard and Consulted


Some of you would know that I have been employed by a company called Red Hat since September 2003; this year it will be nine years with the organization. That’s longer than my time at my startups (Inquisitive Mind and Maringo Tree Technologies) combined. In many ways it is not about Red Hat per se, but about Free Software (and Open Source for that matter) and how the culture of Red Hat very much reflects the ethics and ethos of the Free Software movement.

Yes, Red Hat has to earn its keep by generating revenues (now trending past US$1 billion), and the magic of subscriptions – which peg revenue to the transfer of significant value to customers by way of high-quality, reliable software and services – ensures that Free Software will continue to drive user- and customer-driven innovation.

All of this is not easy to do. When I joined Red Hat from Maringo Tree Technologies, I went from being my own boss to working for a corporation. But the transition was made relatively easy because the cultural values within Red Hat resonated with me: Red Hat places a very high premium on hearing and engaging with its associates. I was employee #1 in Singapore for Red Hat and my lifelines to the corporation were two things: memo-list and the internal IRC channels. Later, as the Singapore office took on the role of Asia Pacific headquarters, we hired more people, and it is really nice to see the operation here employing over 90 people.

But in spite of the growth in headcount, the culture of being heard and consulted is still alive and thriving. It is a radically different organization that will challenge those joining us from traditionally run corporations, where little consultation happens and all decisions are top-down.  I am not saying that every Red Hat decision is 100% consulted on, but at least it gets aired and debated. Sometimes your argument is heard, sometimes it is accepted and morphed, sometimes it is rejected.  I think this interview with Jim Whitehurst that ran in the New York Times is a good summary.

Red Hat Enterprise Linux 6 comes to the rescue of SCO Open Server in a VM!


About two years to the day (plus or minus) after my first attempt, I’ve finally gotten around to moving a friend’s ancient SCO OpenServer 5.0.5 to run on a modern operating system within a virtual machine.

My friend acquired a brand new Dell Xeon server with 8GB of memory and tonnes of disk space.  It came pre-installed with Red Hat Enterprise Linux 6. I got him to register with Red Hat Network and then set up the system and got it fully updated.  All’s well on that count.

Next was to take the experience from two years ago where I managed to install the SCO OpenServer 5.0.5 on a RHEL 5.4 system and make that happen in the latest and greatest of systems.

First was to create ISOs of the CDs needed (dd if=/dev/cdrom of=NameOfCD.iso) and keep them in a directory for ISOs which I created in the /opt directory.

Second was to fire up virt-manager (from the GUI, so that my friend could see what was happening) and go about creating a new VM. virt-manager had problems starting up, which puzzled me.  This is 2012 and this is a server class machine. It could not be that Dell shipped the machine with support for virtualization turned off in the BIOS, could it? How wrong I was. For reasons I cannot explain, Dell chose to DISABLE support for virtualization in the BIOS even on this server class machine. I had to reboot the machine, go into the BIOS settings, enable the virtualization option and restart RHEL.

This time, firing up virt-manager worked like a charm and I proceeded to create a new VM.

The following screenshots are self-explanatory including the installation screens from SCO:

The key choices in the dialog boxes were as follows:

a) Check on the “Customize configuration before install”

b) Set Virt Type as qemu and Architecture to be i686

c) Change the NIC type to pcnet

d) Change the Video to vga

With those settings, the installation of the VM started.

The SCO installation is so archaic and ancient that it amazes me that I could still install it into a 21st century virtual machine! And kudos to the KVM and virt engineers!

As the SCO installation proceeds, there are a few things that need to be chosen:

a) The installation device is an IDE CDROM on the secondary master.

b) When choosing the “Hard Disk Setup”, change the “Tracking” to “Bad Tracking Off”. This enormously speeds up the “formatting” of the drive by SCO.

c) Change the “Network Card” to manual select and then choose “AMD PCNet-PCI Adapter”.

d) Continue to the last screen and go ahead with the installation.

So, a few minutes later, it is all installed and the system will shut down.  You can then safely restart the VM and you should land in the default text console. Like any Linux machine, you have alternate screens available: use the VM window’s “Send Key” menu option to send “Ctrl-Alt-F1” etc to the VM and it will switch between the various virtual consoles.

Once you are logged into the system, you can go ahead and use it.

I will follow up with the installation of a product called “Thoroughbred 8.4.1” in a subsequent post.

In the meantime, if you have additional SCO CDs such as:

a) SCO-Optional-Services.iso, or

b) SCO-RS-505A.iso, or

c) SCO-SkunkWare.iso, or

d) SCO-Vision-2K.iso, and

e) SCO-Voyager-5-JDK.iso,

You can use virt-manager’s “Details” menu option for the VM in question and choose the CDROM option to connect the ISO that is needed. Once it is linked up, switch over to the VM’s console and, assuming you are logged in as root, type “mount /dev/cd0 /mnt” to mount it. For some reason, the first time I typed the command it threw an error; running it a second time succeeded. You then have access to the ISO as a local CD.

Cool tech tip


Saw this on identi.ca feed:

“@climagic youtube-dl -q -o- http://www.youtube.com/watch?v=zscrs94_pFc | mplayer -cache 1000 - # Watch youtube streaming directly to mplayer”

So, do yum install youtube-dl mplayer on your Fedora machine; then you can pull in youtube videos with the youtube-dl command and pipe the stream (the “|”) to the video player, mplayer, and watch it immediately. No need for a browser – this is really cool.

Naturally, if all you wanted was to download the youtube video and keep a copy, just use youtube-dl [URL].

You can replace mplayer with vlc as well (the trailing “-” tells vlc to read from standard input), so the one above would look like this:

youtube-dl -q -o- http://www.youtube.com/watch?v=zscrs94_pFc | vlc -

OSCON 2011 – Tuesday July 26


Finally, I’ve found time and motivation to attend the O’Reilly organized Open Source Convention 2011 in Portland, Oregon.

It has been many years since I was in Portland – in fact, the time I spent in Portland was when I was in school at OSU in the latter half of the 1980s. Most times, I would drive up from Corvallis on a Saturday morning, go to Powell’s and spend the whole day there. I did do some hiking around the area, but it was Powell’s for me.

So, it is somewhat déjà vu and yet new.

I have signed up for the sessions on Tuesday/Wednesday/Thursday and will also be supporting the Fedora team, which has a booth, as well as Open Source for America.

Tuesday’s sessions show a few HTML5 talks.  Looks like HTML5 is indeed the next new shiny thing. Maybe not. But it is nice to be in a techie session and actually do some coding – it is always a good adrenalin rush for me. Coding and hacking always have been.

Here are the sites the speaker Remy Sharp is using for his talk “Is HTML5 Ready for Production” – jsbin.com, html5demos.com, pusher.com and responsivepx.com. Cool stuff – he is now showing WebSockets as well. Awesome. The WebSocket servers are node.js machines for superfast connections.

Change and Opportunity


Change and evolution are hallmarks of any open source project. Ideas form, code gets cut, repurposed, refined and released (and sometimes thrashed).

Much the same thing happens with teams of people.  In the True Spirit of The Open Source Way, people in teams will see individuals come in, contribute, leave. Sometimes, they return. Sometimes, they contribute from afar.

Change has come to Red Hat’s Community Architecture and Leadership (CommArch) team.  Max has written about his decision to move on from Red Hat, and Red Hat has asked me to take on the leadership of the group.  We have all (Max, myself, Jared, Robyn, and the entire CommArch team) been working hard over the past few weeks to make sure that transition is smooth, in particular as it relates to the Fedora Project.

I have been with Red Hat, working out of the Asia Pacific headquarters based in Singapore, for the last 8 years or so. I have had the good fortune to be able to work in very different areas of the business and it continues to be exciting, thrilling and fulfilling.

The business ethics and model of Red Hat resonate very much with me. Red Hat harvests from the open source commons and makes it available as enterprise-quality software that organizations and businesses big and small can run confidently and reliably. That entire value chain is two-way, in that the work Red Hat does to make open source enterprise-deployable gets funnelled back to the open source commons to benefit everyone. This process ensures that the Tragedy of the Commons is avoided.

This need to Do The Right Thing was one of the tenets behind the establishment of the Community Architecture and Leadership team within Red Hat. Since its inception, I had been an honorary member of the team, complementing its core group.  About a year ago, I moved from honorary member to being a full-timer in the group.

The team’s charter is to ensure that the practices and learnings that have helped Red Hat harness open source for the enterprise continue to be refined and reinforced within Red Hat.  The team has always focused on Fedora in this regard, and will continue to do so. We’ve been lucky to have team members who have held leadership positions within different parts of the Fedora Project over the years, and this has given us an opportunity to sharpen and hone what it means to run, maintain, manage, and nurture a community.

The group also drives educational activities through the Teaching Open Source (TOS) community, such as the amazingly useful and strategic “Professors Open Source Summer Experience” (POSSE) events.  If the ideas of open source collaboration and the creation of open source software are to continue and flourish, we have to reach out to the next generation of developers who are in schools around the world. If faculty members can be shown the tools of open source collaboration, the knock-on effect of students picking them up and adopting them is much higher. That can only be a good thing for the global open source movement.

This opportunity for me to lead CommArch does mean that, with the team, I can help drive a wider and more embracing scope of work that also includes the JBoss.org community and the newly forming Cloud-related communities.

The work ahead is exciting and has enormous knock-on effects within Red Hat as well as the wider IT industry.  Red Hat’s mission statement states: “To be the catalyst in communities of customers, contributors, and partners creating better technology the open source way.”

In many ways, CommArch is one of the catalysts. I intend to keep it that way.

Now all machines at home are on Fedora 15!


I spent 30 minutes this morning upgrading my sons’ laptops to Fedora 15. I used a Fedora 15 LiveDVD (installed on a USB) that I had created that included stuff that the standard Fedora 15 LiveCD does not because of space. Tools like LibreOffice, Scribus, Xournal, Inkscape, Thunderbird, mutt, msmtp, wget, arduino, R, lyx, dia, and filezilla. I’ve thrown in blender and some games into the mix as well.

The updates of the systems went super quick (20 minutes to first boot) and then on to Spot’s Chromium repo:

  1. su -
  2. cd /etc/yum.repos.d/
  3. wget http://repos.fedorapeople.org/repos/spot/chromium/fedora-chromium.repo
  4. yum install chromium

Following that, on to rpmfusion.org to get the free and non-free setup RPMs to get to the tools that are patent encumbered and otherwise forbidden to be included in a standard Fedora distribution.

  1. yum install http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
  2. yum install http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm
  3. yum install vlc
  4. yum install thunderbird-enigmail

[Update, June 19, 2011 0050 SGT: Based on the comment from Jeremy on this post, I’m updating the instructions.]

The last bit is flash from Adobe – the 64-bit version:

  1. wget http://download.macromedia.com/pub/labs/flashplayer10/flashplayer10_2_p3_64bit_linux_111710.tar.gz
  2. tar xvfz flashplayer10_2_p3_64bit_linux_111710.tar.gz
  3. cp libflashplayer.so /usr/lib64/mozilla/plugins/
  4. chmod +x /usr/lib64/mozilla/plugins/libflashplayer.so

Installing a 32-bit version of Adobe Flash for a 64-bit Fedora installation:

  1. Go to http://fedoraproject.org/wiki/Flash#Enabling_Flash_plugin
  2. Install the 32-bit plugin wrapped into a 64-bit version, as described there
  3. ln -s /usr/lib64/mozilla/plugins-wrapped/nswrapper_32_64.libflashplayer.so /usr/lib64/chromium-browser/plugins
  4. These steps should be sufficient for flash to be enabled for both Firefox and Chromium

Once done, restart your browser and you will have flash enabled.

Yes, I am aware that I’ve had to compromise and load up non-free software. It is less than ideal, and I look forward to GNU Flash maturing as well as MP3 and related codecs coming off patent.

Printer/cups tip


Every time I update the OS on my laptops, I have to add the CUPS printer settings for the in-office systems. It used to be that there was an internally usable RPM to do this, but I always thought that was not really a clean enough solution.

So, this post is more of a reminder to myself that all I need to do is the following:

echo "BrowsePoll cups.server.domain.com" >> /etc/cups/cupsd.conf

service cups restart

And, voilà, like magic, the printers get discovered and all is well. Nice.
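Since I rerun this after every OS update, the append can be made idempotent so repeated runs don’t stack duplicate lines. A sketch (the function name is mine; the server name is the placeholder from above):

```shell
# Idempotently add a BrowsePoll line to a cupsd.conf-style file.
# $1 = config file, $2 = CUPS server to poll.
# Only appends if an identical line is not already present.
add_browsepoll() {
    grep -q "^BrowsePoll $2\$" "$1" 2>/dev/null || echo "BrowsePoll $2" >> "$1"
}

# Real usage (as root), followed by the restart from above:
#   add_browsepoll /etc/cups/cupsd.conf cups.server.domain.com
#   service cups restart
```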

Early thoughts on GNOME 3


I must admit, the first time I installed Fedora 15 alpha, I did it only to test out what GNOME 3 was all about. It looked like an interesting interface that would work on a tablet-like device, having used the Android-based Archos 10.1 for a while now.

When Fedora 15 was officially launched on May 24th, I decided to move my work machine (a Dell Vostro v13) from Fedora 14 to 15.

For the tl;dr, I like GNOME 3.

Now the rest of the story:

The default background looked like a curtain from another era. I clicked the right mouse button to see what was available, but nothing came up. I knew I had stuff on the Desktop. How do I get to that now? By moving the mouse to the top left-hand corner, the desktop “collapses” to show a whole host of other things, among them the “search” box on the right side of the screen. I typed in “Desktop” and, among other things, it came up with “Places and Devices”. Hmm. Interesting way to navigate.

One of the best uses of Fedora has been the fact that I could share my network connection with anyone. I am often in situations where I have my 3G USB dongle connected up and turning my laptop into a wifi hotspot. Alas, as I write this blog, it is not working in GNOME 3. It is one of the minor things I have to put up with now. I am hopeful that it will be reinstated RSN.

In general, I think there has been a lot of rethinking that has gone into the design of GNOME 3. I like the fact that the desktop is kept really clean. I am one of those guilty of a crowded and busy desktop. Now all of that is hidden away in a FOLDER (which it was anyway) called Desktop. Maybe it is time to retire that Desktop folder meme as well.

Now that I’ve been using GNOME 3 for about two days, it has begun to grow on me. All of my other machines at home (which my family uses) are running the older GNOME, and it does seem clunky and ancient.

Overall, I am pleased thus far. Just give me the means to share out my network, I’ll be productive.

My must-haves on any new Fedora installation


So, I’ve taken the plunge and gone ahead and updated my Fedora 14 to the next rev, Fedora 15. F15 comes by default with GNOME 3. I am still finding my way around it, but it seems to be less clunky than GNOME 2.x. There is some minor stuff missing. I am hoping that the network-sharing part gets included in a hurry.

The purpose of this post is to document for myself, the extra apps that I include in a standard installation.

Firstly, I started the installation from a Fedora 15 x86_64 live CD. I turned on the encryption of my /home directory for obvious security reasons. I think it should be made mandatory for everyone.

Once the system was all set up, I added the following:

a) go to http://www.rpmfusion.org – set up the free and non-free stuff

b) go to spot’s repo for the open sourced version of Chrome – chromium.

c) install xournal, mutt, msmtp, wget, arduino, scribus, inkscape, audacity, libreoffice, thunderbird-lightning, thunderbird-enigmail, etherape, nmap, lyx, vlc, dia, R-project, gimp, twinkle, virt-* and x-chat

d) adding my sshtunnel alias command into the ~/.bashrc:

#setting up ssh tunnel
alias sshtunnel="ssh -C2qTnN -D 8080 username@somedomain.com &"

e) updating the network proxy to “socks, localhost, port 8080”.
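Before flipping the proxy setting over to localhost:8080, it helps to confirm the tunnel from the sshtunnel alias is actually up, i.e. that something is listening on the SOCKS port. A small helper sketch (the function name is mine, not from the setup above):

```shell
# Print "yes" if a local TCP port has a listener, "no" otherwise.
# Uses ss(8); degrades to "no" if ss is unavailable.
port_listening() {
    if ss -ltn 2>/dev/null | grep -q ":$1[[:space:]]"; then
        echo yes
    else
        echo no
    fi
}

# After running the sshtunnel alias, this should print "yes":
#   port_listening 8080
```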

Is Vietnam blocking Facebook?


I am sitting at a lounge in Ho Chi Minh City’s international airport and connected to the wifi. Interestingly, I cannot reach facebook.com.  Here’s the dig and traceroute info:

$ dig www.facebook.com

; <<>> DiG 9.7.3-RedHat-9.7.3-1.fc14 <<>> www.facebook.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15351
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;www.facebook.com.		IN	A

;; AUTHORITY SECTION:
www.facebook.com.	86400	IN	SOA	vdc-hn01.vnn.vn. postmaster.vnn.vn. 2005010501 10800 3600 604800 86400

;; Query time: 17 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Thu May 12 19:11:39 2011
;; MSG SIZE  rcvd: 96

$ dig facebook.com

; <<>> DiG 9.7.3-RedHat-9.7.3-1.fc14 <<>> facebook.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22473
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;facebook.com.			IN	A

;; AUTHORITY SECTION:
facebook.com.		86400	IN	SOA	vdc-hn01.vnn.vn. postmaster.vnn.vn. 2005010501 10800 3600 604800 86400

;; Query time: 15 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Thu May 12 19:12:16 2011
;; MSG SIZE  rcvd: 92
# traceroute facebook.com
facebook.com: No address associated with hostname
Cannot handle "host" cmdline arg `facebook.com' on position 1 (argc 1)

# traceroute www.facebook.com
www.facebook.com: No address associated with hostname
Cannot handle "host" cmdline arg `www.facebook.com' on position 1 (argc 1)
# dig @8.8.4.4 www.facebook.com

; <<>> DiG 9.7.3-RedHat-9.7.3-1.fc14 <<>> @8.8.4.4 www.facebook.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22333
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.facebook.com.		IN	A

;; ANSWER SECTION:
www.facebook.com.	1	IN	A	69.63.189.26

;; Query time: 128 msec
;; SERVER: 8.8.4.4#53(8.8.4.4)
;; WHEN: Thu May 12 19:18:37 2011
;; MSG SIZE  rcvd: 50

Once I turned on my sshtunnel, I could get to Facebook; not otherwise. Interesting.
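The pattern in the dig output above – an authoritative-looking NOERROR with zero answers from the local resolver, but a real A record from Google’s 8.8.4.4 – is the classic signature of resolver-level blocking. The comparison can be scripted; a sketch (the function name is mine), fed with the first answer from `dig +short` against each resolver:

```shell
# Classify a hostname given the first A record returned by the local
# resolver ($1) and by a public resolver such as 8.8.4.4 ($2).
classify_dns() {
    if [ -z "$1" ] && [ -n "$2" ]; then
        echo "possibly blocked by local resolver"
    elif [ -z "$1" ] && [ -z "$2" ]; then
        echo "does not resolve anywhere"
    else
        echo "resolves locally to $1"
    fi
}

# In-airport example, using the answers from the dig output above:
classify_dns "" "69.63.189.26"   # -> possibly blocked by local resolver
```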

Managing open source skepticism


I had an opportunity to speak to a few people from a government tender drafting committee on Wednesday.  They are looking at solutions that will be essentially a cloud for a large number of users and have spoken to many vendors.

I was given an opportunity to pitch the use of open source technologies to build their cloud and I think I gave it my best shot. I had to use many keywords – automatic technology transfer (you have the source code), helps to maintain national sovereignty, learning to engage the right way with the FOSS community, enabling the next generation of innovators and entrepreneurs and preventing vendor lock-in.

By and large, I think the audience agreed, except for one person who said “yeah, now it is open source, but it will become proprietary like the others”. Obviously this person had been fed FUD from the usual suspects, and I had to take extra pains to explain that everything that we, Red Hat, ship is under either the GNU General Public License or the GNU Lesser/Library General Public License. The GPL means no one can ever close up the code, for whatever reason. I am not entirely sure I managed to convince that member of the audience. In a lot of ways, this is the burden we carry as Red Hatters in explaining our business model, how we engage with the FOSS community and so on.

Glad to have participated in the Cloud Workshop in Penang


I am pleased to have spent two days at the National Cloud Computing Workshop 2011, held in Penang, Malaysia, April 11-12 2011. Targeted at the Malaysian academic community, it offered some insights into the initiatives that the various universities in Malaysia are undertaking in rolling out an academic cloud that is being set up with a fully accountable Malaysian identity and access framework. I think this bodes well for their plans to push for a Malaysian Research Network (MyREN) Cloud, which it is hoped will be a way to encourage collaboration between faculty and students in sharing knowledge and learning.

I was particularly pleased to have been invited to speak about cloud technologies from a Red Hat perspective, as well as to introduce the audience to the various open source collaboration and empowerment work Red Hat is doing through the Community Architecture team. When I mentioned POSSE and Red Hat Academy during my talk, as well as “The Open Source Way” and “Teaching Open Source“, I could sense a level of interest from the audience in wanting to know more. And true enough, the post-talk q&a focused a lot on “how can we take part in POSSE”. Looks like there are going to be a few POSSEs in Malaysia this year! Let the POSSE bidding process begin!

On day two, I was invited to take part as a panelist with some of the other speakers to discuss the future of cloud in Malaysia and to throw up suggestions and ideas about what they could be targeting. One of my two suggestions was to first create a “researchpedia.my” as a definitive wiki-based resource that brings together the various research activities in Malaysia in the private and public universities as well as publicly funded research institutions. The key is a site that is wiki-based, so that there are no unneeded bottlenecks in updates, which helps keep the information current. The second suggestion to the audience was to consider the various Grand Challenges and see if any of them are interesting enough to be picked up. What is needed is to aim for the stars, so that if you miss you will at least land on the moon. Aiming only for the moon may result in you landing in the ocean!

Overall, I think the organization was good. I am looking forward to the speakers’ presentation materials being put online, and to the next event!

Cloud for Academics


I am pleased to have had an opportunity to speak, from both a Red Hat and an open source perspective, about cloud technologies to the academic community in Malaysia.

Clearly there is a lot to convey, and I am hopeful that they have an appreciation that they can, and are welcome to, participate in cloud-related projects. I hope that they’ve understood that projects such as Delta Cloud and related projects are ones to which they could direct their students (undergrad or grad) to contribute.
For the benefit of all, here are some links that would be good to explore:
I was also asked about what Red Hat does for academics, which was a perfect opening to introduce both POSSE and Red Hat Academy. Hopefully I will run a POSSE in Malaysia really soon.

Believing in your own BS


I spend pretty much 100% of my waking hours in the IT world. A world that changes and evolves rapidly. This results in a massive amount of churn in ideas, methods and processes. Some of these gain traction, see the light of day and become adopted. The adoption happens with the help of a) “messaging” and b) “positioning”. The people who do this first have to believe that it is worth their while to push the message, and second have to repeat it enough times to make sure that it sinks in.

But, sometimes, this brainwashing reaches the level of “believing your own bullshit”. The bullshit (BS for short) that your product, service or offering is so damn good and wonderful that there is no way anyone can question or challenge it.

Years ago, I was the professional services director and later CTO of a company selling technology to banks. I liked only some portions of the technology – the crypto stuff – but the rest of it was so-so. More so-so because even though it was developed on Linux, it was never sold to run on Linux. I was placed in many speaking situations to promote the tech and talk about how wonderful it was. I had to do both a) and b) above, and eventually began believing in my own BS. I truly disliked it. I felt unclean every time I spoke highly of the products, knowing full well that quite a bit of it was high-grade BS. The reason it was BS was its development model – proprietary and all in-house. No one could inspect, change or improve it, for it was all done internally. There was no external developer community, nothing.

I contrast that with what I have done since leaving that entity over a decade ago. It was wonderful that I got back to my roots – the Free and Open Source world. This is a world in which I do not have to spin a story or promise a capability or functionality to anyone. If something works, it works. If it doesn’t, let’s work on making it happen. No hidden agendas. Nothing to BS about. I sleep well knowing that I did not hoodwink anyone.

Now, let’s look at what we have seen and will continue to see in the Singapore political scene in the last few months. With parliamentary elections looming, the ruling elite have (re)started to spin their BS. Almost every one of the ruling incumbents believe it to be true – lock, stock and barrel. They are all completely whitewashed and project an image that they are credible through and through and “You, Mr Elector, do not forget that. You, Mr Elector, owe it to us to re-elect us to office. Or else, Mr Elector, watch out.”

We, as Singapore citizens, can help snap the ruling elite out of their stupor and hypnosis. By voting non-PAP candidates into parliament, we will finally have the best Singapore we can have. A Singapore in which we do not have to “believe our own BS”. A Singapore that truly represents every one of us. A Singapore that Singaporeans can truly call their own.

True Leadership and The Open Source Way


I live in the Free and Open Source World. The FOSS movement’s ethos and principles are quite core to me. I think this webinar featuring Charlene Li is required viewing. Remember, this is not about technology. It is about how you should do things, how you should be authentic and how you should consider the notion of leadership.

This is a model that applies very well in daily life, including politics. Yes, politics. If you want to gain trust of the population, openness, authenticity and honesty are very important.  Lessons from The Open Source Way are very useful and appropriate as my country prepares for the upcoming parliamentary elections (likely to be on April 30, 2011).

NASA’s inaugural Open Source Summit


I missed the live streaming of the NASA OSS Summit but it is mostly all captured and available on ustream.tv.  These are the links to the recordings:

Day One:

Day Two:

And a great post on OSDC.

Enjoy.

Taking the higher ground


I am disappointed with the kinds of ad hominem attacks being made at the person from the PAP who is being labelled as the PAP’s youngest candidate to be introduced this time around.

It is one thing to comment on how the MSM covered her introduction with a “Ring”-like photo on the front page – that criticism is about the MSM making the classic editorial mistake of running a bad photo – and it is another to engage in character assassination, which seems to be what is being done. Give the lady a chance. Everyone deserves a chance. Yes, even though I will never vote PAP, I will still want to hear them out. I am sure she has some sincerity and clearly wants to serve. She says that she has been working on the ground in the Ulu Pandan area for 4 years. Kudos to her then.

The vitriol being directed at her concerns her husband being the principal private secretary to Lee Hsien Loong (the Prime Minister). That there is nepotism and/or cronyism in play could be a fair comment; but that is a field that is well oiled within the ruling party, so one should not be surprised.

The scenario that will disappoint my fellow citizens is if she is grouped into the GerrymandeRed-Constituency scheme and that GRC does not get contested. In that case, she walks into parliament without actually being voted in.

Remember – in 2006,  only 34.27% of all voters VOTED for the PAP who went on to get 97.6% of seats in parliament! An unaccountable parliament could again be in place in 2011.

So, let’s take the higher ground. Let’s show the world that Singaporeans are fair and passionate people.  See Cherian’s post on this topic.

Interesting post from a non-techie moving to Fedora


A good friend of mine sent me a note about his friend’s experience in moving to a Fedora and Red Hat desktop environment.  That person is a non-techie and this is his report – all unsolicited – but posted with permission and anonymized.

=====
I’ve installed both Fedora and Red Hat, here’s my first impressions:

1. Both Fedora and Red Hat are well designed. Because they use GNOME, both have a similar look and feel to Ubuntu. This is great as it makes for an easier transition! 🙂

2. Just like Ubuntu, after you first install Fedora and Red Hat, the system jumps onto the Internet and looks for software updates and security fixes that need to be installed.

3. With my high speed Internet connection Fedora took several hours to download and install its initial updates.

I’m guessing with your connection that the initial update (and the annual update) will take a full day. Fortunately, during the update there were only two events that required me to click a button. Otherwise I was able to walk away from the computer and just let it do its business.

4. Red Hat took a bit less time in its initial update. I’m guessing this is because it has less software.

5. Fedora and Red Hat are identical in their look and feel. They have different applications pre-installed and, most importantly, Fedora has access to more software than Red Hat does.

Red Hat is very conservative in the software it includes. I’m guessing this is because it is typically used as a secure server for business. Hence, it doesn’t offer as much end-user software.

Note the difference in pre-installed software available as seen in the attached screen shots.

6. Finding and updating software is very similar to Ubuntu. I found the package lists easier to navigate in Ubuntu, but Fedora and Red Hat are still easy.

That’s what I have for you thus far!

=====

RMS in Singapore


I was invited by the Singapore Management University to be a participant at a talk by Richard M Stallman of the Free Software Foundation. It was held on November 1st at the SMU Auditorium.

RMS spoke about Free Software and Patents, though he did say that it is a very wide topic that would take a good two hours to cover.

I have to agree that software patents are inherently harmful and that we must guard against their being instituted in Singapore. Alas, the Intellectual Property Office of Singapore does not have anything specific on this, and I guess it is just as well.

Something to note about this talk – the session following RMS was to be a response to what he had to say, and there were to be two persons: myself, and someone from the IDA.

What is very telling about the ambivalence of the Singapore government towards Free Software (and by extension Open Source) is that the IDA representative “chose not to participate” in the discussion, even though his name was advertised everywhere.

Ah, the beauty of a closed mind.