Quarter Century of Innovation – aka Happy Birthday Linux!


[Screenshot: Linus Torvalds' 25 August 1991 announcement post on comp.os.minix]

Happy Birthday, Linux! Thank you, Linus, for that post (and code) from a quarter of a century ago.

I distinctly remember coming across the post above on comp.os.minix while I was trying to figure out something called 386BSD. I had been following the 386BSD work of Lynne Jolitz and William Jolitz back when I was in graduate school at OSU. I am not sure where I first heard about 386BSD, but it could have been in some newsgroup or in BYTE magazine (unfortunately I can’t find any references). Suffice to say, the 386BSD work was subsequently documented in Dr. Dobb’s Journal from around 1992. Fortunately, the good people at Dr. Dobb’s Journal have placed their entire contents on the Internet, and the first article on the port of 386BSD is now online.

I was back in Singapore by then and was working at CSA Research, building networking functionality for a software engineering project. The development team had access to a SCO Unix machine, but because we did not buy “client access licenses” (I think that is what they were called), we could have exactly two users – one on the console via X-Windows and the other via telnet. I was not going to suggest to management that we buy the additional access rights (I was told it would cost S$1,500!!) and instead tried to find out why the third and subsequent login requests were being rejected.

That’s when I discovered that SCO Unix was doing a form of access locking as part of the login process used by the built-in telnet daemon. I figured that if I could replace the telnet daemon with one that did not do the check, I could get as many people as I wanted telnetting into the system and using it.

To create a new telnet daemon, I needed the source code and a way to compile it. SCO Unix never provided any source code. I managed, however, to get the source code for a telnet daemon (from, I think, ftp.stanford.edu, although I could be wrong).

Remember that in those days there was no Internet access in Singapore – no TCP/IP access, anyway. The only ways out to the Internet were UUCP (and Bitnet at the universities). I used ftpmail@decwrl.com (an ftp-via-email service run by Digital Equipment Corporation) to go out, pull in the code and send it to me via email in 64K uuencoded chunks. Slow, but hey, it worked and it worked well.
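From memory, a request to the ftpmail gateway looked roughly like the sketch below; the exact command set varied between gateways, and the host and file path here are purely illustrative:

To: ftpmail@decwrl.com

connect ftp.stanford.edu
binary
uuencode
chunksize 64000
get pub/telnetd.tar.Z
quit

The service would then ftp the file on your behalf and mail it back as a series of uuencoded messages, to be reassembled and uudecoded at the receiving end.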

Once I got the code, the next challenge was to compile it. We did have the C compiler, but for some reason we did not have the crypto library needed to compile against. That was when I came across the incredible stupidity of the US government labeling cryptography as a munition. Because of that, we in Singapore could not get the crypto library.

After some checking around, I found someone who happened to have a full-blown SCO Unix system with the crypto library installed. I asked them to compile a telnet daemon without the crypto library enabled and send me the resulting binary.

After some to-ing and fro-ing via email, I finally received the compiled telnet daemon without the crypto linked in and replaced the telnetd on my SCO Unix machine. Voila, everyone else on the office LAN could telnet in. The multi-user SCO machine was now really multi-user.

That experience was what pushed me to explore what I would need to do to make sure that both crypto code and the needed libraries were available to anyone, anywhere. The fact that 386BSD was a US-originated project meant that tying my kite to it would eventually discriminate against me by keeping me from the best of cryptography and, in turn, security and privacy. That was when Linus’ work on Linux became interesting to me.

The fact that Linux was being done outside the US meant that it was not crippled by politics and other shortsighted rules, and that if it worked well enough, it could be an interesting operating system.

I am glad that I did make that choice.

The very first Linux distribution I got was from Soft Landing Systems (SLS for short), which I had to fetch via the amazingly trusty ftpmail@decwrl.com service; it happily replied with dozens of 64K uuencoded emails.

What a thrill it was when I started getting serialized uuencoded emails with the goodies in them. I don’t think I have any of the 5.25″ diskettes onto which I put the uudecoded contents. I do remember selling complete sets of SLS diskettes (all 5.25″ ones) for $10 per box (in addition to the cost of the diskettes). I must have sold them to 10-15 people. Yes, I made money from free software, but it was for the labour and “expertise”.

Fast forward twenty-five years to 2016: I have so many systems running Linux (TV, wireless access points, handphones, laptops, set-top boxes, etc.) that if I were asked to point to ONE thing that has made, and is still making, a huge difference to all of us, I would point to Linux.

The impact of Linux on society is hard to quantify accurately. Linux is like water: it is everywhere, and that is the beauty of it. In choosing the GPLv2 license for Linux, Linus released a huge amount of value for all of humanity. He paid it forward.

It is hard to predict what the next 25 years will bring and how Linux will impact us all, but if the first 25 years are any hint, it cannot but be spectacular. What an amazing time to be alive.

Happy birthday, Linux. You’ve defined how we should be using and adopting technology. You’ve disrupted, and continue to disrupt, industries all over the place. You’ve helped define what it means to share ideas openly and freely. You’ve shown what happens when we collaborate and work together. Free and Open Source is a win-win for all, and Linux is the Gold Standard of that.

Linux (and Linus), you’ve done well. Thank you!

This is quite a nice tool – magic-wormhole


I was catching up on the various talks at PyCon 2016 held in the wonderful city of Portland, Oregon last month.

There is lots of good content available from PyCon 2016 on YouTube. What particularly struck me was what one could call a mundane tool for file transfer.

This tool, called magic-wormhole, allows any two systems, anywhere, to send files to each other (via an intermediary), fully encrypted and secured.

This beats doing an scp from system to system, especially if the receiving system is behind a NAT and/or firewall.

I manage lots of systems, both for myself and as part of my work at Red Hat. Over the years I’ve settled into a good workflow for when I need to send files around, but all of it involved techniques like using http, or scp, or even miredo.

But to me, magic-wormhole is easy enough to set up, and the transfers are encrypted, that I think it deserves a much higher profile and wider use.

On the Fedora 24 systems I have, I had to ensure that the following were all set up and installed (assuming you already have gcc installed):

a) dnf install libffi-devel python-devel redhat-rpm-config

b) pip install --upgrade pip

c) pip install magic-wormhole

That’s it.
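Once installed, usage is delightfully simple. Roughly, a transfer looks like the sketch below (the file name and code are just examples – every transfer generates its own one-time code):

$ wormhole send holiday-photos.tar.gz
On the other computer, please run: wormhole receive
Wormhole code is: 7-crossover-clockwork

$ wormhole receive
Enter receive wormhole code: 7-crossover-clockwork

The short human-friendly code is all the two ends need to find each other and derive a strong encryption key; no IP addresses, no accounts.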

Next, I would want to run my own server to provide the intermediary function, instead of depending on the goodwill of Brian Warner.


UEFI and Fedora/RHEL – trivially working.


My older son has just enrolled in my alma mater, Singapore Polytechnic, to do Electrical Engineering. It is really nice to see that he has an interest in the field and, yes, it makes me smile as well.

As part of the preparations for the new program, the school requires the use of certain software in the curriculum. Getting a computer was not an issue per se, but what bothered me was that the school “is only familiar with Windows”, and so the applications needed are also meant to run on Windows.

One thing led to another and eventually we decided to get a new laptop for his work in school. Sadly, the computer comes with only Windows 8.1 installed and nothing else. The machine has ample disk space (1TB), set up as two partitions – one for the Windows stuff (about 250G) and the second as the “D: drive”. Have not seen that in years.

I wanted to make the machine dual-bootable and planned to split the second partition in two, allocating about 350G to running Fedora.

Then I hit an issue. The machine had Windows installed using UEFI. While UEFI has some good traits, it unfortunately throws off those who want to install another OS alongside, i.e., to dual-boot.

Fortunately, Fedora (and RHEL) can be installed on a UEFI-enabled system. This was taken care of by work done by Matthew Garrett as part of the Fedora project. Matthew also received the FSF Award for the Advancement of Free Software earlier this year. It could be argued that perhaps UEFI is not something that should be supported, but as long as systems continue to ship with it, the free software world has to find a way to continue to work.

The details around UEFI and Fedora (and RHEL) are all documented in the Fedora Secure Boot pages.

Now on to describing how to install Fedora/RHEL into a UEFI-enabled system:

a) If you have not already done so, download the Fedora (and RHEL) ISOs from their respective pages. Fedora is available at https://fedoraproject.org/en/get-fedora and RHEL 7 Release Candidate is at ftp://ftp.redhat.com/pub/redhat/rhel/rc/7/.

b) With the ISOs downloaded, if you are running a Linux system, you can use the following command to create a bootable live USB drive with the ISO:

dd  if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb

assuming that /dev/sdb is where the USB drive shows up. The most interesting thing about the ISOs from Fedora and RHEL is that they are already set up to boot on a UEFI-enabled system, i.e., there is no need to disable secure boot mode in the BIOS.
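Since dd will happily overwrite whatever device you point it at, it is worth double-checking the device name first. A small sketch of the safer routine (the bs flag just speeds up the copy; /dev/sdb is an example):

lsblk
sudo dd if=Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/sdb bs=4M
sync

lsblk lists the block devices so you can confirm which one is the USB drive, and sync flushes all writes before you unplug it.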

c) Boot up the target computer via the USB drive.

d) In the case of my son’s laptop, I had to repartition the “D: drive”, and so after booting up from the USB device, I did the following:

i) (in the Fedora live session): download and install gparted (sudo yum install gparted) within the live boot session.

ii) start gparted and resize the “D: drive” partition. In my case, I broke it into two partitions, with about 300G for the new “D: drive” and the rest for Fedora.

e) Once the repartitioning is done, go ahead and choose the “Install to drive” option and follow the screen prompts.

Once the installation is done, you can safely reboot the machine.

You will be presented with a boot menu to choose the OS to start.

QED.


A helper note for family and friends about your connectivity to the Internet from July 9 2012


This is a note targeted at family and friends who might find that they are not able to connect to the Internet from July 9, 2012 onwards.

This only affects those whose machines are running Windows or Mac OSX and have a piece of software called DNSChanger installed. DNSChanger modifies a key part of the way a computer discovers other machines on the Internet (called the Domain Name System, or DNS).

Quick introduction to DNS:

For example, say you want to visit the website http://www.cnn.com. You type this into your browser and, magically, the CNN website appears in a few seconds. The way your browser figured out how to reach the www.cnn.com server was the following:

a) The browser took the domain name www.cnn.com and did what is called a DNS lookup.

b) What it would have received from the DNS lookup is a mapping of www.cnn.com to a bunch of numbers. In this case, it would have received something like:

www.cnn.com.        60    IN    A    157.166.255.18
www.cnn.com.        60    IN    A    157.166.255.19
www.cnn.com.        60    IN    A    157.166.226.25
www.cnn.com.        60    IN    A    157.166.226.26

c) The numbers in the lines above (157.166.255.18, for example) are the Internet Protocol (IP) numbers of the servers on which www.cnn.com resides. You will notice that there is more than one IP number. That is for managing requests from millions of systems without having to depend on just one machine to reply. This is good network architecture. For fun, let’s look at www.google.com:

www.google.com.      59    IN    CNAME    www.l.google.com.
www.l.google.com.    59    IN    A    173.194.38.147
www.l.google.com.    59    IN    A    173.194.38.148
www.l.google.com.    59    IN    A    173.194.38.144
www.l.google.com.    59    IN    A    173.194.38.145
www.l.google.com.    59    IN    A    173.194.38.146

www.google.com has 5 IP #s associated with it, but you will notice something that says CNAME (which stands for Canonical Name) in the first line. What that means is that www.google.com is the same as www.l.google.com, which in turn has the 5 IP #s associated with it.

d) The beauty of this is that in a few seconds you got to the website you wanted without having to remember the IP # needed to reach it.
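If you want to try such a lookup yourself, the dig tool (explained in more detail later in this post) does exactly this from the command line:

dig www.cnn.com A +short

It prints just the IP #s that www.cnn.com currently maps to.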

Why is this important? If you have a cell phone, how do you dial the numbers of your family and friends? Do you remember their phone numbers by heart? Not really, or at least not anymore. You probably know your own number and those of a small, close group (your home, your work, your children, spouse, siblings). Even then, their names are in your contact book, and when you want to call (or text) them, you just punch in their names and your phone looks up the number and dials out.

The difference between your cell phone directory and the DNS is that you control what is in your phone directory. So a name like “Wife” in your phone could point to a phone number that is very different from the same name in your friend’s phone directory. That is all well and good.

But on the global Internet, we cannot have name clashes, and that is why domain names are such hot things; people snapped up a very large chunk of names during the dot-com rush of the late 1990s.

Now on to the issue at hand

So, what has all that got to do with this alarmist issue of connecting to the Internet from July 9, 2012?

Well, it has to do with the fact that there was a piece of software – malware, in this case – that got installed on machines running Windows and Mac OSX. On all computers, the magic to do the DNS lookup is maintained by a file which contains information about which Domain Name Server to query when presented with a domain name like www.cnn.com.

For example, on my laptop (which runs Fedora), the file that directs DNS lookups is called /etc/resolv.conf. It is the same file on Mac OSX, and I think there is something similar in the Windows world as well. Fedora and Mac OSX share a common Unix heritage and so have many files in common.

The contents of my /etc/resolv.conf file are:

# Generated by NetworkManager
domain temasek.net
search temasek.net lan
nameserver 192.168.10.1

The file is automatically generated when I connect to the network, and the crucial line is the one that reads “nameserver”. In this case, it points to 192.168.10.1, which happens to be my FonSpot wireless access point. What is interesting is that my FonSpot access point is not a DNS server per se. In its setup, I’ve configured it to forward domain name lookups to Google’s public DNS servers, whose IP #s are 8.8.8.8 and 8.8.4.4.

Huh? What does this mean? Simply put, when I type www.cnn.com into my browser, my browser asks the nameserver 192.168.10.1 (the FonSpot) for that name’s IP #, and the FonSpot in turn asks 8.8.8.8 for the answer. If 8.8.8.8 does not know, it will hopefully give my browser the IP # of another server to ask next. Eventually, when an IP # is found, my browser uses it to send a connection request to that site. All of this happens in milliseconds, and when it all works, it looks like magic.
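For the curious, dig can also show this whole chain of referrals in one go, starting from the root servers:

dig +trace www.cnn.com

Each block of output is one step of the lookup being handed down – from the root servers, to the .com servers, to CNN’s own nameservers.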

What if you don’t get to the site? What if the entry in the /etc/resolv.conf file pointed to some IP # belonging to a malicious entity that wanted to “hijack” your web surfing? There are legitimate uses of this kind of redirection. For example, when you connect to a public wifi access point (like Wireless@SG), you initially get a DNS nameserver entry that belongs to the wifi access provider. Only once you have successfully logged into that access point is your DNS lookup properly directed. This technique is called a “captive portal”. My FonSpot is a captive portal, btw.

The issue here is that machines with the DNSChanger malware have their DNS lookups hijacked and directed elsewhere. See this note by the US Federal Bureau of Investigation about it.

It appears that the DNSChanger malware set up a bunch of IP #s to maliciously redirect all access to the Internet. If your /etc/resolv.conf file has nameserver entries that contain numbers in the following ranges:

85.255.112.0 to 85.255.127.255

67.210.0.0 to 67.210.15.255

93.188.160.0 to 93.188.167.255

77.67.83.0 to 77.67.83.255

213.109.64.0 to 213.109.79.255

67.28.176.0 to 67.28.191.255

you are vulnerable.
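For those on Linux or Mac OSX, here is a rough one-liner sketch that pattern-matches the nameserver lines of /etc/resolv.conf against the ranges above (GNU grep assumed):

grep '^nameserver' /etc/resolv.conf | egrep '85\.255\.(11[2-9]|12[0-7])\.|67\.210\.([0-9]|1[0-5])\.|93\.188\.16[0-7]\.|77\.67\.83\.|213\.109\.(6[4-9]|7[0-9])\.|67\.28\.(17[6-9]|18[0-9]|19[01])\.'

If it prints nothing, none of your nameserver entries fall within the suspect ranges.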

Here’s a test I did with the first of those IP #s on my Fedora machine:

[harish@vostro ~]$ dig @85.255.112.0 www.google.com

; <<>> DiG 9.9.1-P1-RedHat-9.9.1-2.P1.fc17 <<>> @85.255.112.0 www.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34883
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 4, ADDITIONAL: 5

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.google.com.            IN    A

;; ANSWER SECTION:
www.google.com.        464951    IN    CNAME    www.l.google.com.
www.l.google.com.    241    IN    CNAME    www-infected.l.google.com.
www-infected.l.google.com. 252    IN    A    216.239.32.6

;; AUTHORITY SECTION:
google.com.        32951    IN    NS    ns2.google.com.
google.com.        32951    IN    NS    ns4.google.com.
google.com.        32951    IN    NS    ns3.google.com.
google.com.        32951    IN    NS    ns1.google.com.

;; ADDITIONAL SECTION:
ns1.google.com.        33061    IN    A    216.239.32.10
ns2.google.com.        33061    IN    A    216.239.34.10
ns3.google.com.        317943    IN    A    216.239.36.10
ns4.google.com.        33297    IN    A    216.239.38.10

;; Query time: 305 msec
;; SERVER: 85.255.112.0#53(85.255.112.0)
;; WHEN: Sun Jul  8 21:40:07 2012
;; MSG SIZE  rcvd: 242

Some explanation of what is shown above. “dig” (the “domain information groper”) is a command that allows me to look up a domain’s IP address from the command line. With the extra “@85.255.112.0”, I am telling the dig command to use 85.255.112.0 as my domain name server and get the IP for the domain www.google.com. Currently 85.255.112.0 is being run as a “clean” DNS server by those who have been asked to do so by the FBI.

Hence, what will happen on July 9th 2012 is that the arrangement by the FBI for servers like 85.255.112.0 to keep giving replies will expire. The command I executed above on July 8th 2012 will therefore no longer return a valid IP number from July 9th 2012. The Internet itself will keep working, but people whose systems have been compromised to point to the bad-but-made-to-work-OK DNS servers will find that they can’t seem to get to any site by domain name. If they use IP #s instead, they can get to the sites with no issue.

A quick way to check if your system needs fixing is to go to http://www.dns-ok.us/ NOW. If it reports OK, your system’s /etc/resolv.conf (or the equivalent for those still running Windows) is not affected.

See the announcement from Singapore’s CERT on this issue.

FUDCon Kuala Lumpur 2012


It is wonderful to see the Fedora Users and Developers Conference kick off in Kuala Lumpur today, May 18 2012. The plan was for me to attend, deliver a keynote and also pitch a talk for the barcamp. But Murphy was watching how everything was coming together and pulled the rug from under me on Wednesday: I experienced what I later found out to be “tennis calf”.

The symptoms were 100% spot on; I felt something hit my calf, followed by a pull. I quickly arranged to visit a sports doctor, who advised me on what needed to be done and recommended that I not travel for the next two to three days. Bummer. I was so looking forward to being among the Fedora community flying in from Europe, Australia, Vietnam, India, Sri Lanka, Bangladesh, etc.

Among the things I wanted to talk about at FUDCon KL were the following:

  1. A demo of the Plugable USB 2.0 docking station that turns a Fedora 17 machine (server, desktop, laptop – does not matter) into a multi-seat Linux environment. I bought a pair from Amazon and received them on Wednesday (shipped to Singapore via vpost.com.sg), and they worked exactly as stated – plug the USB cable into the laptop’s USB port, have a VGA monitor, USB keyboard and mouse plugged into the docking station, and voila, a fresh GNOME login screen. Amazing. You can even do an audio chat and watch streaming video via this setup. Really good stuff, and kudos to the developers for mainstreaming the code into the Linux kernel and working with the Fedora devs to make this work out of the box on Fedora 17. What was really amazing from my point of view was that this works even when a machine is booted from a Fedora 17 LiveCD/USB. While this might suggest that the K12LTSP project is no longer needed, I think there are clear areas where the two complement each other.
  2. My journey with OpenShift.redhat.com. I wanted to share my learnings about OpenShift and Git and all the associated stuff. More importantly, the fact that OpenShift is the technology being used for a 24-hour programming contest in Singapore called code::XtremeApps was important to share as well, to encourage international participation in the contest. I am hopeful that this blog post will trigger interest.

I guess all is not lost. The show has to go on, and I am glad to have facilitated a lot of it. But the main kudos has to go to the Malaysian Fedora Ambassadors, who managed to pull this off in the 8 weeks since they were awarded the hosting rights!

And it’s live now – SCO Open Server 5.0.5 running in a RHEL 6 KVM


As promised earlier, the final bits of getting an application that ran on the old hardware onto the VM are now all done. I tried to install the app from scratch, but I really did not want to spend too much time figuring out all its nuances. Since this is really an effort that will eventually see the app replaced at some future date, I wanted to get it done the easy way.

So, over the last long weekend, I did the following:

a) Created a brand new VM running SCO Open Server 5.0.5 on the RHEL 6.2 machine. The specs of the VM are: 2GB RAM, 8GB disk, qemu (not kvm), i686, with the network card set to PC-Net and the video to VGA. These were the settings that best completed the installation of SCO in the VM.
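For reference, a VM along those lines can also be created from the command line with virt-install. The sketch below is an approximation – flag spellings vary a little between virt-install versions, and the image paths are examples:

virt-install --name sco505 --ram 2048 \
  --disk path=/var/lib/libvirt/images/sco505.img,size=8 \
  --virt-type qemu --arch i686 \
  --network network=default,model=pcnet \
  --video vga \
  --cdrom /isos/sco-openserver-5.0.5.iso

I used the virt-manager GUI, which exposes the same choices interactively.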

b) Meanwhile, on the old machine, I did a tar of the whole system – “tar cvf wholesystem.tar /”. This is probably not the best way to do it, but hey, I did not want to spend time picking apart what I wanted from what I did not need on the old machine. The resulting wholesystem.tar file was about 2G in size.

c) Ftp’ed the wholesystem.tar file to the VM and untarred it there – “cd /; tar xvf /tmp/wholesystem.tar”. This resulted in a VM that could boot, but it needed some tweaks.

d) The tweaks were:

  1. Changing the network card to reflect the VM’s settings
  2. Changing the IP#
  3. Disabling the mouse on the VM

e) SCO is msft-ish (or maybe msft learned it from SCO) in that after changes are made with the administration tool, scoadmin, the kernel needs to be rebuilt, which then necessitates rebooting the VM to pick up the new values.

f) Edited the /etc/hosts file to reflect the new IPs and added to the /etc/rc.d/8/userdef file a line to set the default route on the VM: route add default 192.1.2.5

The VM’s IP is 192.1.2.100, and in the /etc/resolv.conf file the nameservers were set to 8.8.8.8 and 8.8.4.4 (Google’s public DNS).

Printing:

a) The old machine had two printers – an 80-column and a 132-column dot matrix printer – connected to its serial and parallel ports. I did not want to deal with that for the VM, so I got hold of two TP-Link PS110P print servers. What’s nice about these is that they are trivial to work with (they run Linux anyway); by plugging them into the printers (even the serial printer had a parallel port), both printers were on the network, and printing from the SCO VM became trivial.

b) Configuring the SCO VM to print to the network printers was done with the rlpconf command. The TP-Link print server has an amazing array of options; I picked the LPR option and the LPT0 and LPT1 device queues on the two TP-Link print servers. While scoadmin has a printer settings section, for some reason the remote printers set up by it never quite worked. In any case, rlpconf edits the /etc/printcap file to reflect the remote printers, and that is all that is needed. Here’s what /etc/printcap looked like after the rlpconf command was run:

cat /etc/printcap
# Remote Line Printer (BSD format)
#rhel6-pdf:\
#       :lp=:rm=rhel6:rp=rhel6-pdf:sd=/usr/spool/lpd/rhel6-pdf:
LPT0:\
        :lp=:rm=192.1.2.51:rp=LPT0:sd=/usr/spool/lpd/LPT0:
LPT1:\
        :lp=:rm=192.1.2.52:rp=LPT1:sd=/usr/spool/lpd/LPT1:

The IP #s were set on the TP-Link print servers, along with their respective print spools.

c) So, once that was done, running “lpstat -o all” on the VM shows the remote printer status:

#lpstat -o all
LPT0:
lp1 is available ! (06,05,02,000000|01|448044|443364|04,02,02|8.2,8.3)
LPT1:
lp1 is available ! (03,02,03,000000|01|450384|445932|04,02,01|8.2,8.3)

Networking issues:

Initially, I had set up the VM using the default networking settings for KVM. The standard networking in KVM assumes that the VM is going to go out to the network, not run as a server per se. But this VM was going to be accessed by other machines (not just the RHEL6 host) on the office LAN, so the right thing to do is to set up a bridged network instead of a NATed network. RHEL 6.2 does not have bridging set up by default, and I think that needs to change. NATing is fine, but for the VM to be accessed from systems other than the host, additional firewall rules have to be set up if it is NATed, whereas on a bridge a one-liner iptables rule suffices: “iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT”.

I think the dialog box that sets up the VM via virt-manager should add an option asking whether you need a bridged network. The option is there, but not obvious. So follow these instructions carefully – they work.
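For completeness, a minimal sketch of what bridged networking looks like on a RHEL 6 host – two files under /etc/sysconfig/network-scripts/ (the device names and IP # are examples; your interface may not be eth0):

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.1.2.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes

After restarting the network service, attach the VM’s virtual NIC to br0 and it will appear on the office LAN like any other machine.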

Well, that was it. SCO Open Server 5.0.5, with the application that was needed, is now running happily in a VM on a RHEL 6.2 machine, with printing going over the network to a couple of print servers.

I must, once again, take my hat off to the awesome open source developers of KVM, QEMU, BOCHS, etc. for the wonderful way all these technologies have come together in the Linux kernel, fully supported by Red Hat in Red Hat Enterprise Linux. There is an enormous amount of value in all of this; even a premium subscription for this RHEL installation is a fraction of the true value derived. The mere fact that a 20th-century SCO Open Server can now be made to run in perpetuity on a KVM instance is mind-boggling (even if Red Hat does not officially support this particular setup).

QED.