Thursday, June 14, 2012
A few years back I was talking with my editor at O'Reilly Media about a book I'd like to write. The book would cover details of the SIP protocol, best practices, interop scenarios, and even a few implementation specifics - FreeSWITCH, Asterisk, OpenSIPS, Kamailio, etc. Basically your typical open source software book, only this time written from the SIP protocol inward.
While my editor liked the idea (I think they have to tell you that), he said there wouldn't be much of a market for it. If I remember correctly, his exact words were "Kris, that's a great idea, but only 100 people in the world would buy it". Clearly you can't print a physical book through a major publisher, with editors, technical reviewers, and so on, if only 100 people are going to buy it. I tabled the idea.
Several years later I find myself still regularly talking about SIP and going into many protocol and implementation specifics. As my editor told me back then, there just aren't a lot of people in this area with either the interest or the experience. I guess he was right. Still, SIP is confusing enough and widespread enough that something has to be done.
Over the past couple of months (off and on - I rarely work uninterrupted these days) I sat down and wrote: stream-of-consciousness writing, without references. What I ended up with is a (currently) 21-page document I like to call "Everything you wish you didn't need to know about VoIP".
It's still an early draft but, as we say in open source, "release early, release often". It has typos. It may not always be factually correct. There are no headings, chapters, or spacing. I may not always use apostrophes correctly. Over time I will correct these mistakes and hopefully (with your help) address other topics of concern or questions my readers may have. I may even divide it into logical chapters at some point! Wow, that would be nice.
However, as the philosophers say, a 100-mile journey begins with a single step. With that said, blog reader, I present to you "Everything you wish you didn't need to know about VoIP".
Let me know what you think (I have a comments section and an easy-to-find e-mail address).
Monday, June 4, 2012
Sprechen Sie Deutsch?
Do you speak German?
I don't. I'm sure this comes as a shock to the many people who have sent me e-mails in German over the years. I suppose my last name may give that impression...
Anyway, longtime AstLinux community member and contributor Michael Keuter has set up an AstLinux-focused blog in German. Check it out!
Monday, March 12, 2012
AstLinux Custom Build Engine now available!
Ever since releasing the AstLinux Development Environment several years ago, the AstLinux developers have spent a significant amount of time supporting new users who are (in many cases) building their first image with only minor customizations - slightly different hardware support, different Asterisk versions, etc.
The trouble is, cross-compiling anything is an extremely complicated task. To be honest I'm surprised it works as often as it does. When you step back and really look at what's going on it all starts to seem like magic. Many people working in this space will openly admit to being practitioners of voodoo or one of the other dark arts.
After a brief e-mail exchange last week Lonnie Abelbeck and I decided to do something about this. What if we could host a system with a web interface to build custom AstLinux images for users on demand? What if this system could cache previous image configurations and save them for future users? What if this system could be easily adapted to meet future configuration needs?
Amazingly, barely a week later, Lonnie has provided all of these features and more. Available immediately, the AstLinux Custom Build Engine is online to build custom AstLinux images that meet your needs.
In an effort to keep bots, crawlers, and robots in general out, we've added simple username and password authentication. The secret is out: the username is "admin" with a password of "astlinux". AstLinux users will recognize these credentials from the default administrative web interface provided with AstLinux. These users will also recognize the familiar tabbed interface.
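If you'd like to poke at it from a script first, here's a minimal sketch using Python's requests library. It assumes the engine answers plain HTTP basic auth at build.astlinux.org (the hostname mentioned later in this post), which may not match the real deployment exactly:

```python
import requests

# Assumed URL - see the build.astlinux.org reference later in this post
BUILD_ENGINE = "http://build.astlinux.org/"

# The published credentials from this post
response = requests.get(BUILD_ENGINE, auth=("admin", "astlinux"))
response.raise_for_status()
print(response.status_code, "-", len(response.text), "bytes of build UI")
```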
Go ahead and give it a try!
These interfaces look alike because they share the same DNA. Lonnie Abelbeck has done a great job creating a build system to serve our users now and in the future. Thanks again Lonnie!
P.S. - Lonnie just found this post from me, dated 8/25/2005, where I talk about something that looks a lot like build.astlinux.org. If only Lonnie were around back then to help me actually create such a beast!
Monday, February 13, 2012
Hyperspecialization and the shakeup of a 100 year old industry
As someone who often finds himself "in the trenches" dealing with extremely nerdy technical nuances, it's easy to miss the larger picture. I guess Mark Spencer was right when he said "Not many people get excited about telephones but the ones who do get REALLY excited about telephones".
As someone whose natural inclination is to get stuck in the details I certainly understand this. Some of you might be right there with me. At this point I've gotten so specialized I'm next to useless on some pretty basic "computer things". Think of the aunt or other relative/friend who lights up when they find out you're a "computer guy". Then they inevitably pull you aside at a wedding to ask for help with their printer or "some box that pops up in Windows". I'm happy to not have to feign ignorance any longer: I truly am ignorant on issues like these.
I primarily use a Mac because it just works - for me. That's not the point I'm trying to make here, though. A Mac works so well for me because I use just two applications: a terminal (iTerm2, to be exact) and Google Chrome. Ok, ok - every once in a while I whip up some crazy Wireshark display filter syntax but that's another post for another day. For the most part when I need to work I take out my Mac, it comes out of sleep, connects to a network, and I start working. It's a tool.
As far as anything with a GUI goes that's the extent of my "expertise". If my aunt wanted to ask me about my bash_profile, screenrc settings, IPv4 address exhaustion, or SIP network architecture I may have something to say. Other than that you'll find me speaking in vague generalities that may lead the more paranoid to suspect I'm secretly a CIA double or a member of some international crime syndicate. I wish I were kidding - this has actually happened before, although "international crime syndicate" usually gets loosely translated to "drug dealer". How else does a supposed "computer guy" not understand what's wrong with my printer?!?!
As usual there's a point to all of this. My hyperspecialization, in this case, allows me to forget what is really going on all around me: a shakeup in the 100 year old industry I find myself in and a change in the way we communicate.
The evolution of the telephone is a strange thing. It is a device and service that has remained largely unchanged for 100 years. I'm not kidding. To this day, in some parts of the United States, the only telephone service available could be installed by Alexander Graham Bell himself. Sure, there have been many advances since the 1900s, but they've been incremental improvements at best - digital services with the same voice bandwidth (dating to 1972), various capacity and engineering changes, and of course - the cell phone.
In the end, however, we're left with a service that isn't much different than what my grandparents had. You still have to phonetically spell various upper-frequency consonants ("S as in Sam, P as in Paul, T as in Tom") because the upper limit of the voice bandwidth on these services is ridiculously low (3.1 kHz). Straining to hear the party at the remote end of a call has only gotten worse with the various digital compression standards in use today - EVRC, AMR, G.729, etc. I love to compare the "pin drop" Sprint commercials of the 80s and 90s to the Verizon Wireless "CAN YOU HEAR ME NOW?" campaign over 20 years later. We still dial by randomly assigned 10-digit numbers. This is supposedly progress?
One thing that has changed - the network has gotten bigger. Much bigger. My grandparents may not have had much use for their party line because they didn't have anyone of interest to talk to on the other end. In this manner the network has exploded - and it has exploded using the same standards that have been in place for these past 100 years. I can directly dial a cell phone on the other side of the world and be connected in seconds.
Meanwhile, there has been another network explosion - IP networks and the internet. The internet, of course, needs no introduction. While I'd love to spend some time talking about IP that's time I don't have at this point. Let's just look at a couple of ways IP has been extremely disruptive for this 100 year old franchise.
Not many people outside of telecom noticed it at the time but back in 2009 AT&T (THE AT&T) petitioned the FCC to decommission the legacy PSTN (copper and pairs and what-not). Just over two years later we're starting to see some results, and AT&T is realizing some ancillary benefits.
As someone who has spent some time (not a lot, thankfully) in these central offices, I can tell you the maze of patch cables, wiring blocks, DC battery banks, etc. makes you really appreciate the analysis of this report. Normally networks are completely faceless - you go to www.google.com or dial 8005551212 without seeing the equipment that gets you to the other end. The fact that SBC reclaimed as much as 250 MILLION square feet by eliminating this legacy equipment is incredible.
That's all well and good, but what has AT&T done for us, the users? The answer is, unfortunately, both good and bad. AT&T, like many physical, trench-digging network providers, has realized they are in the business of providing IP connectivity. They don't have much of a product anymore and the product they do have is becoming more and more of a commodity every day.
Getting out of the way is the smartest thing they could be doing. Speaking of AT&T, remember the Apple iPhone deal? At the time a cell phone was a cell phone - AT&T provided an IP network and got some minutes but Apple built an application platform and changed the way people view the devices they carry with them everywhere they go. Huge.
Watch any sci-fi movie from the past 50 years and one almost ubiquitous "innovation" is the video phone. Did AT&T or some other 100 year old company provide the video phone for baby's first steps to be beamed to Grandma across the country? No - Apple did it with FaceTime and a little company from Estonia (Skype) did it over the internet. Thanks to these companies and IP networks we finally have video conferencing (maybe they'll release a 30th anniversary edition of Blade Runner to celebrate).
Unfortunately, there will always be people who cling to technologies of days past. The new network works well for the applications that were designed for it; meanwhile, older technologies are being shoehorned in with disastrous results. Has anyone noticed faxing has actually gotten LESS reliable over the past several years? That's what happens when you try to use decades-old modem designs on a completely different network. You might as well try to burn diesel in your gasoline engine.
The future is the network and the network (regardless of physical access medium) is IP.
And now, for good measure, here are some random links for further reading:
Graves on SOHO Technology - An early advocate of HD Voice, etc.
The Voice Communication Exchange - Wants to push the world to HD Voice by 2018.
Tuesday, December 20, 2011
Performance Testing (Part 1)
Over the past few years (like many other people in this business) I’ve needed to do performance testing. Open source software is great but this is one place where you need to do your own legwork. This conundrum first presented itself in the Asterisk community. There are literally thousands of variables that can affect the performance of Asterisk, FreeSWITCH, or any other software solution. In no particular order:
- Configuration. Which modules do you have loaded? How are they configured? If you’re using Kamailio, do you do hundreds of huge, slow, nasty DB queries for each call setup? How is your logging configured? Maybe you use Asterisk or FreeSWITCH and do several system calls, DB lookups, Lua scripts, etc? Even the slightest misstep in configuration (synchronous syslogging with Kamailio, for example) can reduce your performance by 90%.
- Features in use. Paging groups (unicast) are notorious for destroying performance on standard hardware - every call needs to be set up individually, you need to handle RTP, and some audio mixing is involved. Hardware that can’t do 10 members in a page group using Asterisk or FreeSWITCH may be capable of hundreds of sessions using Kamailio with no media.
- Standard performance metrics. “Thousands of calls” you say? How many calls per second? Are you transcoding? Maybe you’re not handling any media at all? What is the delay in call setup?
- Hardware. This may seem obvious (MORE HERTZ) but even then there are issues... If you’re handling RTP, what are you using for timing? If you have lots of RTP, which network card are you using? Does it and your kernel support MSI or MSI-X for better interrupt handling? Can you load balance IRQs across cores? How efficient (or buggy) is the driver (Realtek I’m looking at you)?!?
- The “guano” effect. As features are added to the underlying toolkit (Asterisk, FreeSWITCH, etc) and to your configuration, how is performance affected over time? Add a feature here, and a feature there - and repeat. Over the months and years (even with faster hardware) you may find that each “little” feature reduced call capacity by 5%. Or maybe your calls per second went down by two each time. Not a big deal in isolation, yet over time it compounds - assuming no other optimizations, ten “minor” 5% hits leave you with only about 60% of your original call capacity (see the sketch after this list). It adds up - it really does.
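To make the compounding concrete, here’s a quick back-of-the-envelope sketch in plain Python (the 5% per change is just the illustrative figure from the list above, not a measured number):

```python
# Each "minor" feature costs 5% of the remaining call capacity.
capacity = 1.0
for change in range(1, 11):
    capacity *= 0.95
    print(f"after change {change:2d}: {capacity:.1%} of original capacity")

# after change 10: 59.9% of original capacity - roughly a 40% loss
# from ten changes that each looked harmless in isolation.
```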
Even after pointing out all of these issues, you’d be surprised how often I’m faced with the question “Well yeah, but how many calls can I handle on my dual core Dell server?”.
In almost every case the best answer is “Get your hardware, develop your solution, run SIPp against it and see what happens”. That’s really about as good as we can do.
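For the simplest cases you don’t even need a custom scenario. Something like the following sketch will drive a basic load test; it assumes sipp is installed and on your PATH, and the target address is a placeholder (the flags shown - -sn for a built-in scenario, -r for call rate, -l for the concurrent call limit, -m for total calls - are standard SIPp options):

```python
import subprocess

# Hypothetical system under test - replace with your own target
TARGET = "192.0.2.10:5060"

# Built-in UAC (caller) scenario: 10 calls per second, at most
# 100 concurrent calls, exit after 1000 calls total.
subprocess.run([
    "sipp",
    "-sn", "uac",   # use the bundled UAC scenario
    "-r", "10",     # call rate (calls per second)
    "-l", "100",    # limit on simultaneous calls
    "-m", "1000",   # total calls before exiting
    TARGET,
], check=True)
```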
SIPp is a great example of a typical, high quality open source tool. In true “Unix philosophy” it does one thing and it does it well: SIP performance testing. SIPp can be configured to initiate (or receive) just about any conceivable SIP scenario - from simple INVITE call handling to full SIMPLE test cases. In these tests SIPp will tell you call setup time, messages received, successful dialogs, etc.
SIPp even goes a step further and includes some support for RTP. SIPp has the ability to echo RTP from the remote end or even replay RTP from a PCAP file you have saved to disk. This is where SIPp starts to show some deficiencies. Again, you can’t blame SIPp because SIPp is a SIP performance testing tool - it does that and it does it well. RTP testing, however, leaves a lot to be desired. First of all, you’re on your own when it comes to manipulating any of the PCAP parameters. Length, content, codec, payload types, etc need to be configured separately. This isn’t a problem, necessarily, as there are various open source tools to help you with some of these tasks. I won’t get into all of them here but they too leave something to be desired.
What about analyzing the quality of the RTP streams? SIPp provides mechanisms to measure various SIP “quality” metrics - SIP response times, SIP retransmits, etc. With RTP you’re on your own. Once again, sure, you could set up tshark on a SPAN port (or something) to do RTP stream analysis on every stream, but this would be tedious and (once again) subject you to some of the harsh realities of processing a tremendous number of small packets in software on commodity hardware.
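For reference, the figure most RTP analyzers report is the RFC 3550 interarrival jitter estimate. Here’s a minimal sketch of that calculation (the input format is an illustrative assumption; real G.711 streams use an 8 kHz RTP clock):

```python
def interarrival_jitter(packets, clock_rate=8000):
    """Estimate RFC 3550 interarrival jitter, in RTP timestamp units.

    packets: list of (arrival_time_in_seconds, rtp_timestamp) tuples,
    in arrival order.
    """
    jitter = 0.0
    prev_arrival, prev_ts = packets[0]
    for arrival, ts in packets[1:]:
        # Difference in transit time between consecutive packets
        d = (arrival - prev_arrival) * clock_rate - (ts - prev_ts)
        jitter += (abs(d) - jitter) / 16.0  # RFC 3550 smoothing factor
        prev_arrival, prev_ts = arrival, ts
    return jitter
```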
Let’s face it - for a typical B2BUA handling RTP, the numbers add up very quickly. Assume 20 ms packetization for the following:
Single RTP stream = 50 packets per second (pps)
Bi-directional RTP stream = 100 pps
A-leg bi-directional RTP stream = 100 pps
B-leg bi-directional RTP stream = 100 pps
A leg + B leg = 200 pps PER CALL
What does this look like with 10,000 channels (using G.711u)?
952 Mbit/s (close to gigabit wire speed) in each direction
1,000,000 (total) packets per second
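Those numbers check out if you count full Ethernet framing. Here’s a quick sketch of the arithmetic (the per-layer overheads are standard header sizes, nothing measured):

```python
# G.711u at 20 ms packetization: 8000 samples/s * 1 byte * 0.020 s
payload = 160                # bytes of audio per packet
rtp, udp, ipv4 = 12, 8, 20   # per-packet header bytes
eth = 14 + 4 + 8 + 12        # Ethernet header + FCS + preamble + inter-frame gap

wire_bytes = payload + rtp + udp + ipv4 + eth   # 238 bytes on the wire
pps_per_direction = 50 * 10_000                 # 50 pps/stream * 10,000 channels

mbps = wire_bytes * 8 * pps_per_direction / 1_000_000
print(f"{pps_per_direction:,} pps and {mbps:.0f} Mbit/s per direction")
# -> 500,000 pps and 952 Mbit/s per direction (1,000,000 pps total)
```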
Open source software is great - it provides us with the tools to (ultimately) build services and businesses. Many of us choose what to focus on (our core competency). At Star2Star we provide business-grade communication services and we spend a lot of time and energy building these services because it’s what we do. We don’t sell, manufacture, or support testing platforms.
At this point some of you may be getting an idea... Why don’t I build/design an open source testing solution? It’s a good question and while I don’t want to crush your dreams there are some harsh realities:
1) This gets insanely complicated, quickly. Anyone who follows this blog knows SIP itself is complicated enough.
2) Scaling becomes a concern (as noted above).
3) Who would use it?
The last question is probably the most serious - who really needs the ability to initiate 10,000 SIP channels at 100 calls per second while monitoring RTP stream quality, etc? SIP carriers? SIP equipment manufacturers? A few SIP software developers? How large is the market? What kind of investment would be required to even get the project off the ground? What does the competition look like? While I don’t have the answers to most of these questions I can answer the last one.
Commercial SIP testing equipment is available from a few vendors:
Spirent
Empirix
Ixia
...and I’m sure others. We evaluated a few of these solutions and I’ll be talking more about them in a follow-up post in the near future.
Stay tuned because this series is going to be good!
Friday, December 2, 2011
Star2Star Gets Noticed
Just a quick one today (and some shameless self-promotion on my part)... Star2Star has been recognized on a few "lists" this year, check it out:
Inc 500
Forbes 100 "Most Promising"
I'm lucky enough to tell people the same story all of the time - when I was a little kid I played with all of this stuff because I thought it was fun and I loved it. Only later did I realize that one day I'd be getting paid for it. I certainly never thought it could come to this!
Ok, enough of that for now. I'll be getting back to some tech stuff soon...
Tuesday, November 15, 2011
Building a Startup (the right way)
(Continued from Building a Startup)
Our way wasn’t working. To put it mildly, our “business grade” solution didn’t perform much better than Vonage. We came to exemplify everything people dread about VoIP - jittery calls, dropped calls, one-way audio, etc, etc, etc. Most of this was because of the lack of quality ITSPs at the time. Either way, our customers didn’t care. It was us. If we went to market with what we had the first time around we were going to lose.
The problem was that the other predominant architecture at the time was “hosted”. Someone hosts a PBX for you and ships you some phones. You plug them in behind your router and magically you have a phone system. They weren’t doing much better. Sure, their sales looked good but even then it was becoming obvious customer churn was quite high. People didn’t like hosted either, and for good reason. Typically they have less control over the call than we do.
As I’ve alluded to before, I thought there was a better way. We needed to host the voice applications where they made the most “sense”. We were primarily using Asterisk, and with a little creative provisioning, a kick-ass SIP proxy, and enough Asterisk machines we could build the perfect business PBX - even if that meant virtually none of it existed at the customer premises. Or maybe all of it did. That flexibility was key. After a lot of discussions, whiteboard sessions, and late nights everyone agreed. We needed a do-over.
So we got to work and slowly our new architecture began to take shape. We added a kick-ass SIP proxy (OpenSER). OpenSER would power the core routing between various Asterisk servers, each meeting different needs - IVR/Auto Attendant, Conferencing, Voicemail, remote phones (for “hosted” phones/softphones), etc. The beauty was that the SIP proxy could route between all of these different systems, including the original AstLinux system at the customer premises. Customer needs to call voicemail? No problem - the AstLinux system at the CPE fires an INVITE off to the proxy and the proxy figures out where their voicemail server is. The call is connected and the media goes directly between the two endpoints. Same thing for calls between any two points on the network - AstLinux CPE to AstLinux CPE, PSTN to voicemail, IVR to conference.
This is a good time to take a break and acknowledge what really made this all possible - OpenSER. While it’s difficult to explain the exact history and family tree with any piece of SER software I can tell you one thing - this company would not be possible without it. There is no question in my mind. It’s now 2011 and whether you select Kamailio or OpenSIPS for your SIP project you will not be sorry. Even after five years you will not find a more capable, flexible, scalable piece of SIP server software. It was one of the best decisions we ever made.
Need to add another server to meet demand for IVR? No problem - bring another server online, add the IP to a table and presto, you’re now taking calls on your new IVR (a sketch of the idea follows below). Eventually a new IVR led to several new IVRs, voicemail servers, conference systems, web portals, mail servers, various monitoring systems, etc.
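The pattern is nothing more exotic than a routing table the proxy consults per call. As an illustration only - this is not our actual OpenSER configuration, just the idea expressed in Python with made-up addresses - it might look like:

```python
from collections import defaultdict

# One pool of servers per application; scaling out = appending an IP.
ROUTES = {
    "ivr":        ["10.0.1.10", "10.0.1.11"],   # hypothetical addresses
    "voicemail":  ["10.0.2.10"],
    "conference": ["10.0.3.10"],
}

_counters = defaultdict(int)

def route(app: str) -> str:
    """Round-robin the next server to receive an INVITE for this app."""
    pool = ROUTES[app]
    server = pool[_counters[app] % len(pool)]
    _counters[app] += 1
    return server

# Bring a new IVR online: add the IP to the table and presto.
ROUTES["ivr"].append("10.0.1.12")
print(route("ivr"), route("ivr"), route("ivr"))  # cycles through the pool
```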
What about infrastructure? Given our small scale, regional footprint, and focus on quality, we began buying our own PRIs and running them on a couple of Cisco AS5350XM gateways. This got us past our initial issues with questionable ITSPs. Bandwidth was becoming another problem... We had an excellent colocation provider that provided blended bandwidth but we still needed more control. Here came BGP, ARIN, AS numbers, a pair of Cisco 7206VXRs w/ G2s, iBGP, multiple upstream providers, etc.
At times I would wonder - whatever happened to spending my time worrying about cross compilers? Looking back I’m not sure which was worse - GNU autoconf cross-compiling hell or SIP interop, BGP, etc. It’s fairly safe to say I’m a sadomasochist either way.
Even with all of the pain, missteps, and work we finally had an architecture to take to market. It would be the architecture that would serve us well for several years. Of course there was more work to be done...