Friday, September 21, 2012
As many of you know, I’ve been quite fond of FreeSWITCH for some time now. I’ve been impressed with the functionality, stability, and performance. Did I say impressed? I suppose I meant to say thrilled. Some of my longer-term readers may remember this post from three years ago. I can’t believe it’s been that long. I suppose that makes me (officially) old.
Getting back on track (as I often have to do), I have had one concern with FreeSWITCH over the years - the lack of a stable branch. Traditionally (in software development), once a project or piece of code reaches a certain level of maturity (or use, even) there comes a need to segment the code into an independently maintained entity. In most revision control systems this is typically called a “branch”. Makes sense, because bodies of code are often referred to as “trees”. I get it!
Since the beginning of the project in 2006, all FreeSWITCH development has taken place in one central code repository (trunk - trees again). At any given point in time a user could pull down trunk and receive the latest and “greatest” code - new features, bug fixes, security patches, etc. Unfortunately these new features can introduce new bugs. They can re-expose old bugs. They can change behavior in unexpected ways.
One of the things that has been most impressive about the FreeSWITCH project, actually, has been the relative stability of this often new and untested code. It’s been extremely rare to find a serious issue in trunk. However, the mere knowledge of this code management practice (or relative lack thereof) and potential for new features/bugs is an issue.
I learned a long time ago that people expect their phones to just work. The same user who finds it perfectly acceptable to reboot their computer after a software crash will jump and scream when a telephone doesn’t work. I guess that’s a testament to the level of reliability people have historically expected from their phones. Interestingly this expectation seems to be changing in the face of new technology, which is something I have covered before.
Anyway, for the time being people (including myself) expect these things to “just work”. Of course this expectation places a tremendous burden on network and facility operators such as myself (and rightfully so). Meeting this expectation with FreeSWITCH trunk has required a significant amount of testing, testing, re-testing, regression testing, patching, etc. We simply cannot unleash a new version of software in the field without significant testing. When issues are found we have to address them. This takes time and resources.
Meanwhile, FreeSWITCH is an open source project that can’t be expected to provide for every need and whim of large commercial users such as myself. Why should it? The developers already spend so much of their limited time and resources to provide software that essentially powers my business and (indirectly) provides for my livelihood - for free. With those limited resources alone they can’t be expected to make time for the maintenance of yet another collection of code.
That’s why I’m happy to announce the Star2Star sponsorship of the FreeSWITCH stable branch! For the past several months Star2Star has been providing the financial assistance necessary for the FreeSWITCH project to hire another full-time team member to not only maintain a stable branch (1.2 as of this writing) but improve documentation, packaging, and community interaction. Everyone at Star2Star (including myself) couldn’t be happier to provide this resource for the project and the community. We look forward to working more with the FreeSWITCH team on the stable branch and any other projects that may advance FreeSWITCH and the state of the art in communications!
Thursday, July 26, 2012
Performance Testing (Part 2) - Going Commercial
Some time ago I wrote the first installment of a multi-part series on performance testing. Here we go with part two!
In part one I talked about some of the difficulties surrounding performance testing - functionality, flexibility, high scale, quality metrics, etc.
After looking at a couple commercial products we discovered a few things:
1) Some of these products really do “have it all”.
2) They can be very expensive.
There was some initial sticker shock ($100,000+) but looking back I shouldn’t have been so surprised. My first reaction (of course) was to reach out to a few people in the open source community with a proposal. In a classic “build vs. buy” scenario I wanted to build. This is (roughly) what I needed:
- At least 40,000 RTP streams (20,000 calls)
- At least 100,000 SIP calls and/or registrations
- The ability to emulate multiple user agents (VLAN, MAC address, DHCP client, SIP UA)
- RTP stream analysis on each RTP leg (MOS, R-Factor, PESQ, etc.)
- Flexible device emulation - SIP headers, supported SIP features, etc.
- Multiple codec support (at least G.711, G.729, G.722, and SILK/Opus with FEC, etc.)
- Control of test scenarios - CPS, number of calls, duration of each call, total duration of the test, etc.
- Ability to save/load tests via a web interface (for ease of use, comparison of results, etc.)
- Ability to perform feature testing - generate DTMF during a call to navigate an IVR, for example
- A modular system for monitoring the device under test - Linux load, CPU usage, disk usage, network I/O, etc. It could also monitor Cisco switches in between devices, Windows hosts, etc. Maybe even FreeSWITCH or Asterisk if that’s what was running on the device under test.
- Saving and graphing of all relevant performance data - call setup time, delay, duration, RTP jitter, packet loss, RTP stream stats, etc. Ability to save data and generate reports from said data.
- Scalable design with a master/slave architecture to scale across hosts, with the ability to add hardware as needed
Did I mention this tool needs to be usable by various test engineers, some of whom don’t know the difference between SIP and SDP (and rightfully so - they shouldn’t need to)?
With the open source software already available, I figured this could be built for less than the cost of a commercial testing solution.
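To give a rough sense of what those open source building blocks look like, here’s a minimal sketch using SIPp (a common open source SIP load generator - purely an illustration, not necessarily what we would have built on) that covers just the “control of test scenarios” item from the list above. The target address is a placeholder and the numbers are only examples:
(on a load generator)
# 50 new calls per second, 20-second calls, 100,000 calls total,
# capped at 20,000 simultaneous calls; 192.0.2.10:5060 is a placeholder target
sipp -sn uac -r 50 -rp 1000 -d 20000 -m 100000 -l 20000 192.0.2.10:5060
Even that only covers call generation - the RTP analysis, device monitoring, reporting, and multi-host coordination from the list would all still have to be built around it.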
I gave it away with the title of this post but you can guess what happened next: it was going to cost far more to develop everything I needed. By the way - it would also take six months to build and take 10 years off my life hunting bugs, managing the project, etc.
BUY
For less than what it would cost to build everything above I could buy this. A multi-user chassis with one Xcellon-Ultra NP load module and room for another one. 180,000 emulated SIP endpoints. 96,000 RTP streams. Wire speed 10 gigabit VoIP testing (and 12 gigabit ports).
Of course this isn’t a perfect world... The chassis runs Windows. The client software is only available for Windows and the interface is about as far from what I want as it gets. As a guy who eats, lives, and breathes the CLI (and has for a decade), multi-pane/dropdown/hidden/shadow GUIs are NOT my thing. I don’t even know what to call or how to describe some of the window/GUI elements present in the IxLoad user interface...
Ixia even has a solution to this problem. They offer TCL scripting with clients for various platforms, including Linux! While we’ll eventually get into that, for the time being we went with a much simpler solution: we set up a Windows terminal server. I use CoRD from my Mac, log in to the terminal server, and run a test. As you’ll see in part three - IT JUST WORKS.
Thursday, July 12, 2012
My Linux Story
While digging through some boxes the other day I came across a book. Not just any book - a very important book. A book I both cursed and loved more than any other book in my entire life.
To tell the story of the book I need to give a little background. My father was a professor at the University of Illinois at Chicago. When I was old enough (13) I would spend my summers volunteering in his department (Occupational Therapy) doing random IT jobs - making ethernet cables, cleaning up viruses, fixing printers, etc.
One day they took me on a tour of the other departments in his building. They had some medical visualization apps running in one of the departments. Here we were, sitting in a room, with some PhD students working on some of the coolest computers I’d ever seen. They had bright colors and huge monitors. They ran tons of advanced looking applications. Most of the interaction was through a very interesting command line interface. I was intrigued.
Being a rambunctious 13 year old I approached one of the students and complimented him on his computer. I also asked him how I could get one of them for my house. He laughed. At the time I didn’t understand why it was a funny question. Looking back now I know it’s because those “computers” were actually high powered Silicon Graphics Indigo workstations that cost upwards of $40,000 each (in 1997 dollars).
His response was a muffled “Ummm, you can’t”. Not being dissuaded, my follow-up question was “How can I get something like it?”. One of the students looked at the other student. They both kind of shrugged until one of them said “umm, Linux?”. Still not being quite sure, they both agreed that “Linux” would be my best bet. I asked them how I could get “Linux”. They told me to go down to the university bookstore and buy a book about Linux. A good Linux book, they said, should include a CD-ROM with a copy of Linux I could install on my computer at home.
That day, before we went to Union Station to catch the Metra, my dad and I went to the university bookstore to find a book about Linux. Sure enough we found a Linux book that included not one but TWO CD-ROMs! I read as much of the book as I could before I got home that night.
Once I was at home I booted the family computer (with a DOS bootdisk) and ran a loadlin (wow, remember that?) based install program from the CD-ROM (no El Torito boot support!). During the course of the install I repartitioned and reformatted the (only) family computer - an IBM Aptiva M51. It had a Pentium 100, a 1GB hard drive, and 16MB of RAM (upgraded from 8MB). It also came with one of those HORRIBLE Mwave adapters and some shady Cirrus Logic graphics (IIRC).
Anyway, the install process (which was pretty horrible, actually) left me with what was (at the time) a barely usable computer. How was my sister going to do her homework? How were we going to get on to the internet? Uh-oh, looks like I better learn more about this “Linux” thing...
So that’s how it started. Later that year (for Christmas) my parents realized they weren’t going to get the Aptiva back from me so we bought another computer to run Windows. At that point my little Linux workstation became my computer and the “gateway” for my first home network - a 10BASE2 (coax!) Ethernet network using NE2000 cards from Linksys. Internet access went through my Aptiva using an external modem that, regardless of type, could only negotiate a maximum of 19,200 bps on our crappy phone line. PPP, chat scripts, dial-on-demand, squid for caching, etc. 15 years later I’m still (primarily) using Linux!
After finding “My First Linux Book” I wondered what would happen if I tried to install that version of Linux today. Some people reminisce about their childhood by hearing certain songs, playing a sport, or collecting action figures. I (apparently) do it by installing ancient versions of Linux.
I needed to install this version of Linux but the CD-ROMs in my book were missing. I looked around the internet for a while but could not find an ISO or copy of the distro anywhere. I could barely find references to the book. Where else could I look? The same place I look for everything else - Amazon!
Sure enough, Amazon had a used copy of the book (with CD-ROM) for $5 with shipping. Two days later it was here. To my surprise the book (and the CDs) were in excellent condition. Who the hell is keeping a warehouse full of mid-nineties Linux books (3rd edition with Kernel 2.0!)?
I get on my main development machine at home, download a FreeDOS ISO (to install from, remember), and create a VirtualBox virtual machine. What should it look like? I decide to “go big”:
- 1 CPU
- 256MB of RAM
- 8GB hard drive (Intel PIIX3 IDE controller)
- 1 Am79C973 ethernet port
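For reference, a VM along those lines can be created from the command line with VBoxManage. This is just a sketch under my own naming (old_linux for the VM, old_linux.vdi for the disk, freedos.iso for the boot media) rather than exactly what I clicked through:
(on host)
# create and register the VM
VBoxManage createvm --name old_linux --register
# 1 CPU, 256MB RAM, AMD PCnet-FAST III (Am79C973) NIC
VBoxManage modifyvm old_linux --cpus 1 --memory 256 --nic1 nat --nictype1 Am79C973
# 8GB disk on a PIIX3 IDE controller, plus the FreeDOS ISO to boot from
VBoxManage createhd --filename old_linux.vdi --size 8192
VBoxManage storagectl old_linux --name IDE --add ide --controller PIIX3
VBoxManage storageattach old_linux --storagectl IDE --port 0 --device 0 --type hdd --medium old_linux.vdi
VBoxManage storageattach old_linux --storagectl IDE --port 1 --device 0 --type dvddrive --medium freedos.iso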
Keep in mind I’m going to be running kernel 2.0 here - this hardware needs to be supported by a 15 year old kernel. I get Unifix Linux 2.0 installed. Moments later I’m logged into my “new” Linux system. Not knowing exactly what to do now, I decide to try to get networking to work.
Long story short, I could not get Linux 2.0 to recognize the emulated Am79C973 ethernet controller. I tried changing the device IDs and recompiling the kernel (takes less than one minute, btw) but couldn’t get it to work.
Hmmm, what else could I do for connectivity? Maybe I could go really nostalgic and get something running over the serial port?
I configured VirtualBox to emulate a 16550 serial port as COM1 and set it up to point the other end of the emulated serial port at a local pipe. I figured that if I could somehow run pppd on both sides of this serial port (host and guest) and configure NAT I could get this thing on the internet.
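In VBoxManage terms that serial port setup looks something like this (again a sketch - old_linux is my hypothetical VM name, and [com1] is the host-side socket path used in step 1 below):
(on host)
# COM1 at the standard I/O port and IRQ, backed by a host-side pipe;
# "server" tells VirtualBox to create the socket itself
VBoxManage modifyvm old_linux --uart1 0x3F8 4 --uartmode1 server [com1]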
Here’s how I did it:
1) Launch socat to convert the unix domain socket provided by VirtualBox to a standard Linux tty so pppd can run on it:
(on host)
socat UNIX-CONNECT:[com1] PTY,link=[vmodem0],raw,echo=0,waitslave
Where [com1] is the path to your VirtualBox socket and [vmodem0] is the path to your (new) tty.
2) Launch pppd on the new tty:
(on host)
pppd [vmodem0] 57600 192.168.100.1:192.168.100.2 nodetach local
Once again where [vmodem0] is the path to your new socat tty. Make sure that the IP addresses provided for each end of the PPP link don’t conflict with any local IP addresses.
3) Set up IP forwarding and NAT (iptables) on the host:
(on host)
# enable IP forwarding, then NAT traffic from the PPP link out eth0
# (eth0 is the host's internet-facing interface here - adjust as needed)
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o ppp0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i ppp0 -o eth0 -j ACCEPT
4) Connect the virtual machine to the host:
(on guest)
pppd /dev/ttyS0 57600 defaultroute passive
Sure enough, here’s what I saw on the host:
Using interface ppp0
Connect: ppp0 <--> /home/kris/projects/linux_universe/vmodem0
local IP address 192.168.100.1
remote IP address 192.168.100.2
Boom! 1990s Linux, meet the 21st century!
Once I had networking up and running things really took off. I noticed all of the services running by default on my old Linux host (portmap, yp, apache, telnet, echo, chargen, sendmail, wu-ftpd, etc). Remember the 90s when the internet wasn’t such a hostile place!?!
Here’s some fun command output from my “new” host old_linux:
root@old_linux:~ # uname -a
Linux old_linux 2.0.25 Unifix-2.0 i686
root@old_linux:~ # ping -c 5 192.168.100.1
PING 192.168.100.1 (192.168.100.1): 56 data bytes
64 bytes from 192.168.100.1: icmp_seq=0 ttl=64 time=15.1 ms
64 bytes from 192.168.100.1: icmp_seq=1 ttl=64 time=19.9 ms
64 bytes from 192.168.100.1: icmp_seq=2 ttl=64 time=19.9 ms
64 bytes from 192.168.100.1: icmp_seq=3 ttl=64 time=19.9 ms
64 bytes from 192.168.100.1: icmp_seq=4 ttl=64 time=19.9 ms
--- 192.168.100.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 15.1/18.9/19.9 ms
root@old_linux:~ #
20ms of latency on the same physical machine (pppd -> VirtualBox -> socat -> pppd)!
root@old_linux:~ # ping -c 5 www.google.com
PING www.l.google.com (74.125.139.104): 56 data bytes
64 bytes from 74.125.139.104: icmp_seq=0 ttl=45 time=59.5 ms
64 bytes from 74.125.139.104: icmp_seq=1 ttl=45 time=49.9 ms
64 bytes from 74.125.139.104: icmp_seq=2 ttl=45 time=49.9 ms
64 bytes from 74.125.139.104: icmp_seq=3 ttl=45 time=50.5 ms
64 bytes from 74.125.139.104: icmp_seq=4 ttl=45 time=69.9 ms
--- www.l.google.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 49.9/55.9/69.9 ms
root@old_linux:~ # gcc -v
Reading specs from /usr/lib/gcc-lib/i486-unknown-linux/2.7.2.1/specs
gcc version 2.7.2.1
root@old_linux:~ # httpd -v
Server version Apache/1.1.1.
root@old_linux:~ # ssh -v
SSH Version 1.2.14 [i486-unknown-linux], protocol version 1.4.
Standard version. Does not use RSAREF.
Pre-iptables. Pre-ipchains. IPFWADM!
root@old_linux:~ # ipfwadm -h
ipfwadm 2.3.0, 1996/07/30
root@old_linux:/usr/src/linux-2.0.25 # time make
[trimmed output]
The new kernel is in file arch/i386/boot/bzImage; now install it as vmlinuz
A name list of the kernel is in vmlinux.map; you may install it as vmlinux
9.86user 11.74system 0:22.78elapsed 94%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+0minor)pagefaults 0swaps
root@old_linux:/usr/src/linux-2.0.25 #
23 seconds to compile the kernel!
Ok, but what about the modules?
make[1]: Leaving directory `/usr/src/linux-2.0.25/arch/i386/math-emu'
8.80user 7.42system 0:16.80elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+0minor)pagefaults 0swaps
root@old_linux:/usr/src/linux-2.0.25 #
16 seconds to compile the modules!
I have to admit I’m fascinated with this distro and the state of Linux as I was first introduced to it. Of course some of the memories have faded over time but more than anything it’s amazing to see how far we’ve come...
1997:
- Linux 2.0.25
- Original Pentium at 100MHz
- 16MB RAM
- 1GB hard drive
- 1MB VRAM
2012:
- Linux 3.2.0 (Ubuntu 12.04)
- Quad-core Pentium Xeon at 2.27GHz
- 12GB RAM
- 1TB hard drive x2 (RAID 1)
- 1GB VRAM (dual 1080p displays)
...and this machine is pretty old at this point! Needless to say, with 256MB of RAM (double the maximum possible in my Aptiva) and even one emulated CPU, Unifix 2.0 barely knows what to do with this new hardware (even if it isn’t real)!
While I was never quite sure why I was doing any of this I can tell you that it was very fun to remember how all of this got started. I remember my next distro being RedHat 5.2...
Thursday, June 14, 2012
Everything you wish you didn't need to know about VoIP
A few years back I was talking with my editor at O'Reilly Media about a book I'd like to write. The book would cover details of the SIP protocol, best practices, interop scenarios, and even a few implementation specifics - FreeSWITCH, Asterisk, OpenSIPS, Kamailio, etc. Basically your typical open source software book, only this time it would be written from the SIP protocol inward.
While my editor liked the idea (I think they have to tell you that) he said there wouldn't be much of a market for it. If I remember correctly his exact words were "Kris, that's a great idea but only 100 people in the world would buy it". Clearly you can't print a physical book through a major publisher with editors, technical reviewers, etc if only 100 people are going to buy it. I tabled the idea.
Several years later I find myself still regularly talking about SIP and going into many protocol and implementation specifics. As my editor once told me, it seems there aren't a lot of people in this area with either the interest or the experience. I guess he was right. Still, SIP is confusing enough and widespread enough that something has to be done.
Over the past couple of months (off and on - I rarely work uninterrupted these days) I sat down and wrote. Stream-of-consciousness writing, without references. What I ended up with is a (currently) 21 page document I like to call "Everything you wish you didn't need to know about VoIP".
It's still an early draft but as we say in open source "release early, release often". It has typos. It may not always be factually correct. There are no headings, chapters, or spacing. I may not always use apostrophes correctly. Over time I will correct these mistakes and hopefully (with your help) address other topics of concern or questions my readers may have. I may even divide it into logical chapters at some point! Wow, that would be nice.
However, as the philosophers say, a 100 mile journey begins with a single step. With that said, blog reader, I present to you "Everything you wish you didn't need to know about VoIP".
Let me know what you think (I have a comments section and easy to find e-mail address).
Monday, June 4, 2012
Sprechen sie deutsch?
Do you speak German?
I don't. I'm sure this comes as a shock to the many people who have sent me e-mails in German over the years. I suppose my last name may give that impression...
Anyway, longtime AstLinux community member and contributor Michael Keuter has set up an AstLinux-focused blog in German. Check it out!
Monday, March 12, 2012
AstLinux Custom Build Engine now available!
Ever since releasing the AstLinux Development Environment several years ago, the AstLinux developers have spent a significant amount of time supporting new users who are (in many cases) building their first image with only minor customizations - slightly different hardware support, different Asterisk versions, etc.
The trouble is, cross compiling anything is an extremely complicated task. To be honest I'm surprised it works as often as it does. When you step back and really look at what's going on it all starts to seem like magic. Many people working in this space will openly admit to being practitioners of voodoo or one of the other dark arts.
After a brief e-mail exchange last week Lonnie Abelbeck and I decided to do something about this. What if we could host a system with a web interface to build custom AstLinux images for users on demand? What if this system could cache previous image configurations and save them for future users? What if this system could be easily adapted to meet future configuration needs?
Amazingly, barely a week later, Lonnie has provided all of these features and more. Available immediately, the AstLinux Custom Build Engine is online to build custom AstLinux images that meet your needs.
In an effort to keep bots, crawlers, and robots in general out we've added simple username and password authentication. The secret is out and the username is "admin" with a password of "astlinux". AstLinux users will recognize these credentials from the default administrative web interface provided with AstLinux. These users will also recognize the familiar tabbed interface.
Go ahead and give it a try!
These interfaces look alike because they share the same DNA. Lonnie Abelbeck has done a great job creating a build system to serve our users now and in the future. Thanks again Lonnie!
P.S. - Lonnie just found this post from me, dated 8/25/2005, where I talk about something that looks a lot like build.astlinux.org. If only Lonnie were around back then to help me actually create such a beast!
Monday, February 13, 2012
Hyperspecialization and the shakeup of a 100 year old industry
As someone who often finds themselves "in the trenches" dealing with some extremely nerdy technical nuances it's often easy to miss the larger picture. I guess Mark Spencer was right when he said "Not many people get excited about telephones but the ones who do get REALLY excited about telephones".
As someone whose natural inclination is to get stuck in the details I certainly understand this. Some of you might be right there with me. At this point I've gotten so specialized I'm next to useless on some pretty basic "computer things". Think of the aunt or other relative/friend who lights up when they find out you're a "computer guy". Then they inevitably pull you aside at a wedding to ask for help with their printer or "some box that pops up in Windows". I'm happy to not have to feign ignorance any longer: I truly am ignorant on issues like these.
I primarily use a Mac because it just works - for me. That's not the point I'm trying to make here, though. A Mac works so well for me because I use just two applications: a terminal (iTerm2, to be exact) and Google Chrome. Ok, ok - every once in a while I whip up some crazy Wireshark display filter syntax, but that's another post for another day. For the most part when I need to work I take out my Mac, it comes out of sleep, connects to a network, and I start working. It's a tool.
As far as anything with a GUI goes, that's the extent of my "expertise". If my aunt wanted to ask me about my bash_profile, screenrc settings, IPv4 address exhaustion, or SIP network architecture I may have something to say. Other than that you'll find me speaking in vague generalities that may lead the more paranoid to suspect I'm secretly a double agent for the CIA or a member of some international crime syndicate. I wish I were kidding - this has actually happened before, although "international crime syndicate" usually gets loosely translated to "drug dealer". How else does a supposed "computer guy" not understand what's wrong with my printer?!?!
As usual there's a point to all of this. My hyperspecialization, in this case, allows me to forget what is really going on all around me: a shakeup in the 100 year old industry I find myself in and a change in the way we communicate.
The evolution of the telephone is a strange thing. It is a device and service that has remained largely unchanged for 100 years. I'm not kidding. To this day, in some parts of the United States, the only telephone service available could be installed by Alexander Graham Bell himself. Sure there have been many advances since the 1900s but they've been incremental improvements at best - digital services with the same voice bandwidth (dating to 1972), various capacity and engineering changes, and of course - the cell phone.
In the end, however, we're left with a service that isn't much different than what my grandparents had. You still have to phonetically spell various upper-frequency consonants ("S as in Sam, P as in Paul, T as in Tom") because the upper limit of the voice bandwidth on these services is ridiculously low (3.1 kHz). Straining to hear the party at the remote end of a phone has only gotten worse with various digital compression standards in use today - EVRC, AMR, G.729, etc. I love to compare the "pin drop" Sprint commercials of the 80s and 90s to the Verizon Wireless "CAN YOU HEAR ME NOW?" campaign over 20 years later. We still dial by randomly assigned strings of 10 digit numbers. This is supposedly progress?
One thing that has changed - the network has gotten bigger. Much bigger. My grandparents may not have had much use for their party line because they didn't have anyone of interest to talk to on the other end. In this manner the network has exploded - and it has exploded using the same standards that have been in place for these past 100 years. I can directly dial a cell phone on the other side of the world and be connected in seconds.
Meanwhile, there has been another network explosion - IP networks and the internet. The internet, of course, needs no introduction. While I'd love to spend some time talking about IP that's time I don't have at this point. Let's just look at a couple of ways IP has been extremely disruptive for this 100 year old franchise.
Not many people outside of telecom noticed it at the time but back in 2009 AT&T (THE AT&T) petitioned the FCC to decommission the legacy PSTN (copper pairs and what-not). Just over two years later we're starting to see some results, and AT&T is realizing some ancillary benefits.
As someone who has spent some time (not a lot, thankfully) in these central offices, the maze of patch cables, wiring blocks, DC battery banks, etc. makes you really appreciate the analysis of this report. Normally networks are completely faceless - you go to www.google.com or dial 8005551212 without seeing the equipment that gets you to the other end. The fact that SBC reclaimed as much as 250 MILLION square feet by eliminating this legacy equipment is incredible.
That's all well and good but what has AT&T done for us, the users? The answer is, unfortunately, both good and bad. AT&T, like many physical, trench-digging network providers, has realized they are in the business of providing IP connectivity. They don't have much of a product anymore and the product they do have is becoming more and more of a commodity every day.
Getting out of the way is the smartest thing they could be doing. Speaking of AT&T, remember the Apple iPhone deal? At the time a cell phone was a cell phone - AT&T provided an IP network and got some minutes but Apple built an application platform and changed the way people view the devices they carry with them everywhere they go. Huge.
Watch any sci-fi movie from the past 50 years and one almost ubiquitous "innovation" is the video phone. Did AT&T or some other 100 year old company provide the video phone for baby's first steps to be beamed to Grandma across the country? No - Apple did it with FaceTime and a little company from Estonia (Skype) did it over the internet. Thanks to these companies and IP networks we finally have video conferencing (maybe they'll release a 30th anniversary edition of Blade Runner to celebrate).
Unfortunately, there will always be people who cling to technologies of days past. This new network works well for the applications that were designed for it. Meanwhile, some technologies are being shoehorned in with disastrous results. Has anyone noticed faxing has actually gotten LESS reliable over the past several years? That's what happens when you try to use decades-old modem designs on a completely different network. You might as well try to burn diesel in your gasoline engine.
The future is the network and the network (regardless of physical access medium) is IP.
And now, for good measure, here are some random links for further reading:
Graves on SOHO Technology - An early advocate of HD Voice, etc.
The Voice Communication Exchange - Wants to push the world to HD Voice by 2018.