Thursday, July 26, 2012

Performance Testing (Part 2) - Going Commercial

Some time ago I wrote the first installment of a multi-part series on performance testing. Here we go with part two!

In part one I talked about some of the difficulties surrounding performance testing - functionality, flexibility, high scale, quality metrics, etc.

After looking at a couple of commercial products we discovered a few things:

1)  Some of these products really do “have it all”.
2)  They can be very expensive.

There was some initial sticker shock ($100,000+) but looking back I shouldn’t have been so surprised.  My first reaction (of course) was to reach out to a few people in the open source community with a proposal.  In a classic “build vs. buy” scenario I wanted to build.  This is (roughly) what I needed:

- At least 40,000 RTP streams (20,000 calls)
- At least 100,000 SIP calls and/or registrations
- The ability to emulate multiple user agents (VLAN, MAC address, DHCP client, SIP UA)
- RTP stream analysis on each RTP leg (MOS, R-Factor, PESQ, etc.)
- Flexible device emulation - SIP headers, supported SIP features, etc.
- Multiple codec support (at least G.711, G.729, G.722, SILK/Opus with FEC, etc.)
- Control of test scenarios - CPS, number of calls, duration of each call, total duration of the test, etc.
- Ability to save/load tests via a web interface (for ease of use, comparison of results, etc.)
- Ability to perform feature testing - generating DTMF during a call to navigate an IVR, for example.
- A modular system for monitoring the device under test - Linux load, CPU usage, disk usage, network I/O, etc.  It could also monitor Cisco switches between devices, Windows hosts, and even FreeSWITCH or Asterisk if that's what was running on the device under test.
- Saving and graphing of all relevant performance data - call setup time, delay, duration, RTP jitter, packet loss, RTP stream stats, etc. - plus the ability to generate reports from saved data.
- A scalable master/slave design to distribute load across hosts, with the ability to add hardware as needed.

Did I mention this tool needs to be usable by various test engineers, some of whom don’t know the difference between SIP and SDP (and rightfully so - they shouldn’t need to)?

With the open source software already out there, I figured this could be built for less than the cost of a commercial testing solution.

I gave it away with the title of this post but you can guess what happened next: it was going to cost far more to develop everything I needed.  By the way - it would also take six months to build and take 10 years off my life hunting bugs, managing the project, etc.

BUY

For less than what it would cost to build everything above, I could buy an Ixia solution: a multi-user chassis with one Xcellon-Ultra NP load module and room for another.  180,000 emulated SIP endpoints.  96,000 RTP streams.  Wire-speed 10 gigabit VoIP testing (and 12 gigabit ports).

Of course this isn’t a perfect world...  The chassis runs Windows.  The client software is only available for Windows, and the interface is about the furthest thing from what I want.  As a guy who eats, lives, and breathes the CLI (and has for a decade), multi-pane/dropdown/hidden/shadow GUIs are NOT my thing.  I don’t even know what to call (or how to describe) some of the window/GUI elements present in the IxLoad user interface...


Ixia even has a solution to this problem: they offer TCL scripting, with clients for various platforms - including Linux!  While we’ll eventually get into that, for the time being we went with a much simpler solution: we set up a Windows terminal server.  I use CoRD from my Mac, log in to the terminal server, and run a test.  As you’ll see in part three - IT JUST WORKS.

Thursday, July 12, 2012

My Linux Story

While digging through some boxes the other day I came across a book.  Not just any book - a very important book.  A book I both cursed and loved more than any other book in my entire life.

To tell the story of the book I need to give a little background.  My father was a professor at the University of Illinois at Chicago.  When I was old enough (13) I would spend my summers volunteering in his department (Occupational Therapy) doing random IT jobs - making ethernet cables, cleaning up viruses, fixing printers, etc.

One day they took me on a tour of the other departments in his building.  They had some medical visualization apps running in one of the departments.  Here we were, sitting in a room, with some PhD students working on some of the coolest computers I’d ever seen.  They had bright colors and huge monitors.  They ran tons of advanced looking applications.  Most of the interaction was through a very interesting command line interface.  I was intrigued.

Being a rambunctious 13 year old I approached one of the students and complimented him on his computer.  I also asked him how I could get one of them for my house.  He laughed.  At the time I didn’t understand why it was a funny question.  Looking back now I know it’s because those “computers” were actually high powered Silicon Graphics Indigo workstations that cost upwards of $40,000 each (in 1997 dollars).

His response was a muffled “Ummm, you can’t”.  Not being dissuaded my follow-up question was “How can I get something like it?”.  One of the students looked at the other student.  They both kind of shrugged until one of them said “umm, Linux?”.  Still not being quite sure they both agreed that “Linux” would be my best bet.  I asked them how I could get “Linux”.  They told me to go down to the university book store and buy a book about Linux.  A good Linux book, they said, should include a CD-ROM with a copy of Linux I could install on my computer at home.

That day, before we went to Union Station to catch the Metra, my dad and I went to the university bookstore to find a book about Linux.  Sure enough we found a Linux book that included not one but TWO CD-ROMs!  I read as much of the book as I could before I got home that night.

Once I was at home I booted the family computer (with a DOS boot disk) and ran a loadlin-based (wow, remember that?) install program from the CD-ROM (no El Torito!).  During the course of the install I repartitioned and reformatted the (only) family computer - an IBM Aptiva M51.  It had a Pentium 100, a 1GB hard drive, and 16MB of RAM (upgraded from 8MB).  It also came with one of those HORRIBLE MWAVE adapters and some shady Cirrus Logic graphics (IIRC).

Anyway, the install process (which was pretty horrible, actually) left me with what was (at the time) a barely usable computer.  How was my sister going to do her homework?  How were we going to get on to the internet?  Uh-oh, looks like I better learn more about this “Linux” thing...

So that’s how it started.  Later that year (for Christmas) my parents realized they weren’t going to get the Aptiva back from me so we bought another computer to run Windows.  At that point my little Linux workstation became my computer and the “gateway” for my first home network - a 10BASE2 (coax!) Ethernet network using NE2000 cards from Linksys.  Internet access went through my Aptiva using an external modem that, regardless of type, could only negotiate a maximum of 19,200 bps on our crappy phone line.  PPP, chat scripts, dial-on-demand, squid for caching, etc.  15 years later I’m still (primarily) using Linux!

After finding “My First Linux Book” I wondered what would happen if I tried to install that version of Linux today.  Some people reminisce about their childhood by hearing certain songs, playing a sport, or collecting action figures.  I (apparently) do it by installing ancient versions of Linux.

I needed to install this version of Linux, but the CD-ROMs in my book were missing.  I looked around the internet for a while but could not find an ISO or a copy of the distro anywhere - I could barely find references to the book.  Where else could I look?  The same place I look for everything else - Amazon!

Sure enough, Amazon had a used copy of the book (with CD-ROM) for $5 with shipping.  Two days later it was here.  To my surprise the book (and the CDs) were in excellent condition.  Who the hell is keeping a warehouse full of mid-nineties Linux books (3rd edition with Kernel 2.0!)?

I get on my main development machine at home, download a FreeDOS ISO (to install from, remember), and create a VirtualBox virtual machine.  What should it look like?  I decide to “go big”:

  • 1 CPU
  • 256MB of RAM
  • 8GB hard drive (on the emulated Intel PIIX3 IDE controller)
  • 1 Am79C973 ethernet port

Keep in mind I’m going to be running kernel 2.0 here - this hardware needs to be supported by a 15-year-old kernel.  I get Unifix Linux 2.0 installed, and moments later I’m logged into my “new” Linux system.  Not knowing exactly what to do next, I decide to try to get networking working.

Long story short, I could not get Linux 2.0 to recognize the emulated Am79C973 Ethernet controller.  I tried changing the device IDs and recompiling the kernel (which takes less than a minute, by the way) but couldn’t get it to work.

Hmmm, what else could I do for connectivity?  Maybe I could go really nostalgic and get something running over the serial port?

I configured VirtualBox to emulate a 16550 serial port as COM1 and pointed the other end of the emulated serial port at a local pipe.  I figured that if I could somehow run pppd on both sides of this serial port (host and guest) and configure NAT, I could get this thing on the internet.
Here’s how I did it:

1)  Launch socat to convert the unix domain socket provided by VirtualBox to a standard Linux tty so pppd can run on it:

(on host)
socat UNIX-CONNECT:[com1] PTY,link=[vmodem0],raw,echo=0,waitslave

Where [com1] is the path to your VirtualBox socket and [vmodem0] is the path to your (new) tty.

2)  Launch pppd on the new tty:

(on host)
pppd [vmodem0] 57600 192.168.100.1:192.168.100.2 nodetach local

Once again where [vmodem0] is the path to your new socat tty.  Make sure that the IP addresses provided for each end of the PPP link don’t conflict with any local IP addresses.

3)  Set up IP forwarding and iptables NAT on the host:

(on host)
echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o ppp0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i ppp0 -o eth0 -j ACCEPT

4)  Connect the virtual machine to the host:

(on guest)
pppd /dev/ttyS0 57600 defaultroute passive

Sure enough, here’s what I saw on the host:

Using interface ppp0
Connect: ppp0 <--> /home/kris/projects/linux_universe/vmodem0
local  IP address 192.168.100.1
remote IP address 192.168.100.2

Boom!  1990s Linux, meet the 21st century!
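For reference, the four steps above can be collected into a single host-side script.  This is just a sketch: the run wrapper and the COM1_SOCK/TTY_LINK/WAN_IF placeholders are my own additions, and by default the script only prints the commands.  Set DRY_RUN=0 and run it as root to actually execute them (socat and pppd block, so background them or use separate terminals, as above):

```shell
#!/bin/sh
# Sketch: the four host-side steps collected into one script.
# COM1_SOCK and TTY_LINK are placeholder paths (assumptions) -- point
# them at your VirtualBox serial pipe and wherever you want the socat
# pty.  WAN_IF is assumed to be the host's internet-facing interface.
COM1_SOCK="${COM1_SOCK:-/tmp/com1}"
TTY_LINK="${TTY_LINK:-/tmp/vmodem0}"
HOST_IP="192.168.100.1"
GUEST_IP="192.168.100.2"
WAN_IF="${WAN_IF:-eth0}"

# Print each command by default; execute it only when DRY_RUN=0.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1) Bridge the VirtualBox unix domain socket to a pty
run socat "UNIX-CONNECT:$COM1_SOCK" "PTY,link=$TTY_LINK,raw,echo=0,waitslave"

# 2) Run pppd on the new pty
run pppd "$TTY_LINK" 57600 "$HOST_IP:$GUEST_IP" nodetach local

# 3) Enable IP forwarding and NAT
run sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
run iptables -t nat -A POSTROUTING -o "$WAN_IF" -j MASQUERADE
run iptables -A FORWARD -i "$WAN_IF" -o ppp0 -m state --state RELATED,ESTABLISHED -j ACCEPT
run iptables -A FORWARD -i ppp0 -o "$WAN_IF" -j ACCEPT
```

Step 4 (pppd on the guest’s /dev/ttyS0) still has to be run inside the VM itself.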

Once I had networking up and running things really took off.  I noticed all of the services running by default on my old Linux host (portmap, yp, apache, telnet, echo, chargen, sendmail, wu-ftpd, etc).  Remember the 90s when the internet wasn’t such a hostile place!?!

Here’s some fun command output from my “new” host old_linux:

root@old_linux:~ # uname -a
Linux old_linux 2.0.25 Unifix-2.0 i686

root@old_linux:~ # ping -c 5 192.168.100.1
PING 192.168.100.1 (192.168.100.1): 56 data bytes
64 bytes from 192.168.100.1: icmp_seq=0 ttl=64 time=15.1 ms
64 bytes from 192.168.100.1: icmp_seq=1 ttl=64 time=19.9 ms
64 bytes from 192.168.100.1: icmp_seq=2 ttl=64 time=19.9 ms
64 bytes from 192.168.100.1: icmp_seq=3 ttl=64 time=19.9 ms
64 bytes from 192.168.100.1: icmp_seq=4 ttl=64 time=19.9 ms

--- 192.168.100.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 15.1/18.9/19.9 ms
root@old_linux:~ #

20ms of latency on the same physical machine (pppd -> VirtualBox -> socat -> pppd)!

root@old_linux:~ # ping -c 5 www.google.com
PING www.l.google.com (74.125.139.104): 56 data bytes
64 bytes from 74.125.139.104: icmp_seq=0 ttl=45 time=59.5 ms
64 bytes from 74.125.139.104: icmp_seq=1 ttl=45 time=49.9 ms
64 bytes from 74.125.139.104: icmp_seq=2 ttl=45 time=49.9 ms
64 bytes from 74.125.139.104: icmp_seq=3 ttl=45 time=50.5 ms
64 bytes from 74.125.139.104: icmp_seq=4 ttl=45 time=69.9 ms

--- www.l.google.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 49.9/55.9/69.9 ms

root@old_linux:~ # gcc -v
Reading specs from /usr/lib/gcc-lib/i486-unknown-linux/2.7.2.1/specs
gcc version 2.7.2.1

root@old_linux:~ # httpd -v
Server version Apache/1.1.1.

root@old_linux:~ # ssh -v
SSH Version 1.2.14 [i486-unknown-linux], protocol version 1.4.
Standard version.  Does not use RSAREF.

Pre-iptables.  Pre-ipchains.  IPFWADM!

root@old_linux:~ # ipfwadm -h
ipfwadm 2.3.0, 1996/07/30

root@old_linux:/usr/src/linux-2.0.25 # time make

[trimmed output]

The new kernel is in file arch/i386/boot/bzImage; now install it as vmlinuz
A name list of the kernel is in vmlinux.map; you may install it as vmlinux
9.86user 11.74system 0:22.78elapsed 94%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+0minor)pagefaults 0swaps
root@old_linux:/usr/src/linux-2.0.25 #

23 seconds to compile the kernel!

Ok, but what about the modules?

make[1]: Leaving directory `/usr/src/linux-2.0.25/arch/i386/math-emu'
8.80user 7.42system 0:16.80elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+0minor)pagefaults 0swaps
root@old_linux:/usr/src/linux-2.0.25 #

16 seconds to compile the modules!

I have to admit I’m fascinated with this distro and the state of Linux as I was first introduced to it.  Of course some of the memories have faded over time but more than anything it’s amazing to see how far we’ve come...

1997:

  • Linux 2.0.25
  • Original Pentium at 100MHz
  • 16MB RAM
  • 1GB hard drive
  • 1MB VRAM

2012:

  • Linux 3.2.0 (Ubuntu 12.04)
  • Quad-core Pentium Xeon at 2.27GHz
  • 12GB RAM
  • 1TB hard drive x2 (RAID 1)
  • 1GB VRAM (dual 1080p displays)

...and this machine is pretty old at this point!  Needless to say, with 256MB of RAM (double the maximum possible in my Aptiva) and even one emulated CPU, Unifix 2.0 barely knows what to do with this new hardware (even if it isn’t real)!

While I was never quite sure why I was doing any of this, I can tell you that it was a lot of fun to remember how it all got started. I remember my next distro being Red Hat 5.2...