Thursday, June 14, 2012

A few years back I was talking with my editor at O'Reilly Media about a book I'd like to write. The book would cover details of the SIP protocol, best practices, interop scenarios, and even a few implementation specifics - FreeSWITCH, Asterisk, OpenSIPS, Kamailio, etc. Basically your typical open source software book, only this time it would be written from the SIP protocol inward.
While my editor liked the idea (I think they have to tell you that), he said there wouldn't be much of a market for it. If I remember correctly his exact words were "Kris, that's a great idea but only 100 people in the world would buy it". Clearly you can't print a physical book through a major publisher, with editors, technical reviewers, etc., if only 100 people are going to buy it. I tabled the idea.
Several years later I find myself still regularly talking about SIP and going into many protocol and implementation specifics. As my editor once told me, there just aren't a lot of people in this area with either the interest or the experience. I guess he was right. Still, SIP is confusing enough and widespread enough that something has to be done.
Over the past couple of months (off and on - I rarely work uninterrupted these days) I sat down and wrote. Stream-of-consciousness writing, without references. What I ended up with is a (currently) 21 page document I like to call "Everything you wish you didn't need to know about VoIP".
It's still an early draft but as we say in open source "release early, release often". It has typos. It may not always be factually correct. There are no headings, chapters, or spacing. I may not always use apostrophes correctly. Over time I will correct these mistakes and hopefully (with your help) address other topics of concern or questions my readers may have. I may even divide it into logical chapters at some point! Wow, that would be nice.
However, as the philosophers say, a journey of a thousand miles begins with a single step. With that said, blog reader, I present to you "Everything you wish you didn't need to know about VoIP".
Let me know what you think (I have a comments section and an easy-to-find e-mail address).
Monday, June 4, 2012
Sprechen Sie Deutsch?
Do you speak German?
I don't. I'm sure this comes as a shock to the many people who have sent me e-mails in German over the years. I suppose my last name may give that impression...
Anyway, longtime AstLinux community member and contributor Michael Keuter has set up an AstLinux-focused blog in German. Check it out!
Monday, March 12, 2012
AstLinux Custom Build Engine now available!
Ever since releasing the AstLinux Development Environment several years ago, the AstLinux developers have spent a significant amount of time supporting new users who are (in many cases) building their first image with only minor customizations - slightly different hardware support, different Asterisk versions, etc.
The trouble is, cross compiling anything is an extremely complicated task. To be honest I'm surprised it works as often as it does. When you step back and really look at what's going on it all starts to seem like magic. Many people working in this space will openly admit to being practitioners of voodoo or one of the other dark arts.
After a brief e-mail exchange last week Lonnie Abelbeck and I decided to do something about this. What if we could host a system with a web interface to build custom AstLinux images for users on demand? What if this system could cache previous image configurations and save them for future users? What if this system could be easily adapted to meet future configuration needs?
Amazingly, barely a week later, Lonnie has provided all of these features and more. Available immediately, the AstLinux Custom Build Engine is online to build custom AstLinux images that meet your needs.
In an effort to keep bots, crawlers, and other automated visitors out, we've added simple username and password authentication. The secret is out: the username is "admin" with a password of "astlinux". AstLinux users will recognize these credentials from the default administrative web interface provided with AstLinux. These users will also recognize the familiar tabbed interface.
Go ahead and give it a try!
These interfaces look alike because they share the same DNA. Lonnie Abelbeck has done a great job creating a build system to serve our users now and in the future. Thanks again Lonnie!
P.S. - Lonnie just found this post from me, dated 8/25/2005, where I talk about something that looks a lot like build.astlinux.org. If only Lonnie had been around back then to help me actually create such a beast!
Monday, February 13, 2012
Hyperspecialization and the shakeup of a 100-year-old industry
As someone who often finds themselves "in the trenches" dealing with extremely nerdy technical nuances, it's easy to miss the larger picture. I guess Mark Spencer was right when he said "Not many people get excited about telephones but the ones who do get REALLY excited about telephones".
As someone whose natural inclination is to get stuck in the details I certainly understand this. Some of you might be right there with me. At this point I've gotten so specialized I'm next to useless on some pretty basic "computer things". Think of the aunt or other relative/friend who lights up when they find out you're a "computer guy". Then they inevitably pull you aside at a wedding to ask for help with their printer or "some box that pops up in Windows". I'm happy to not have to feign ignorance any longer: I truly am ignorant on issues like these.
I primarily use a Mac because it just works - for me. That's not the point I'm trying to make here, though. A Mac works so well for me because I use just two applications: a terminal (iTerm2, to be exact) and Google Chrome. Ok, ok - every once in a while I whip up some crazy Wireshark display filter syntax, but that's another post for another day. For the most part when I need to work I take out my Mac, it comes out of sleep, connects to a network, and I start working. It's a tool.
As far as anything with a GUI goes, that's the extent of my "expertise". If my aunt wanted to ask me about my bash_profile, screenrc settings, IPv4 address exhaustion, or SIP network architecture I may have something to say. Other than that you'll find me speaking in vague generalities that may lead the more paranoid to suspect I'm secretly working for the CIA or some international crime syndicate. I wish I were kidding - this has actually happened before, although "international crime syndicate" usually gets loosely translated to "drug dealer". How else does a supposed "computer guy" not understand what's wrong with my printer?!?!
As usual there's a point to all of this. My hyperspecialization, in this case, allows me to forget what is really going on all around me: a shakeup in the 100-year-old industry I find myself in and a change in the way we communicate.
The evolution of the telephone is a strange thing. It is a device and service that has remained largely unchanged for 100 years. I'm not kidding. To this day, in some parts of the United States, the only telephone service available could have been installed by Alexander Graham Bell himself. Sure, there have been many advances since the 1900s, but they've been incremental improvements at best - digital services with the same voice bandwidth (dating to 1972), various capacity and engineering changes, and of course the cell phone.
In the end, however, we're left with a service that isn't much different than what my grandparents had. You still have to phonetically spell various upper-frequency consonants ("S as in Sam, P as in Paul, T as in Tom") because the upper limit of the voice bandwidth on these services is ridiculously low (about 3.1 kHz). Straining to hear the party at the remote end of a call has only gotten worse with the various digital compression standards in use today - EVRC, AMR, G.729, etc. I love to compare the "pin drop" Sprint commercials of the 80s and 90s to the Verizon Wireless "CAN YOU HEAR ME NOW?" campaign over 20 years later. We still dial by randomly assigned strings of 10 digits. This is supposedly progress?
One thing that has changed - the network has gotten bigger. Much bigger. My grandparents may not have had much use for their party line because they didn't have anyone of interest to talk to on the other end. In this sense the network has exploded - and it has exploded using the same standards that have been in place for these past 100 years. I can directly dial a cell phone on the other side of the world and be connected in seconds.
Meanwhile, there has been another network explosion - IP networks and the internet. The internet, of course, needs no introduction. While I'd love to spend some time talking about IP, that's time I don't have at this point. Let's just look at a couple of ways IP has been extremely disruptive for this 100-year-old franchise.
Not many people outside of telecom noticed it at the time, but back in 2009 AT&T (THE AT&T) petitioned the FCC to decommission the legacy PSTN (copper pairs and what-not). Just over two years later we're starting to see some results, and AT&T is realizing some ancillary benefits.
As someone who has spent some time (not a lot, thankfully) in these central offices, I can tell you the maze of patch cables, wiring blocks, DC battery banks, etc. makes you really appreciate the analysis of this report. Normally networks are completely faceless - you go to www.google.com or dial 8005551212 without ever seeing the equipment that gets you to the other end. The fact that SBC reclaimed as much as 250 MILLION square feet by eliminating this legacy equipment is incredible.
That's all well and good, but what has AT&T done for us, the users? The answer is, unfortunately, both good and bad. AT&T, like many physical, trench-digging network providers, has realized they are in the business of providing IP connectivity. They don't have much of a product anymore, and the product they do have is becoming more and more of a commodity every day.
Getting out of the way is the smartest thing they could be doing. Speaking of AT&T, remember the Apple iPhone deal? At the time a cell phone was a cell phone - AT&T provided an IP network and got some minutes but Apple built an application platform and changed the way people view the devices they carry with them everywhere they go. Huge.
Watch any sci-fi movie from the past 50 years and one almost ubiquitous "innovation" is the video phone. Did AT&T or some other 100-year-old company provide the video phone that beams baby's first steps to Grandma across the country? No - Apple did it with FaceTime, and a little company from Estonia (Skype) did it over the internet. Thanks to these companies and IP networks we finally have video conferencing (maybe they'll release a 30th anniversary edition of Blade Runner to celebrate).
Unfortunately, there will always be people who cling to the technologies of days past. The new network works well for the applications that were designed for it; meanwhile, some older technologies are being shoehorned in with disastrous results. Has anyone noticed faxing has actually gotten LESS reliable over the past several years? That's what happens when you try to use decades-old modem designs on a completely different network. You might as well try to burn diesel in your gasoline engine.
The future is the network and the network (regardless of physical access medium) is IP.
And now, for good measure, here are some random links for further reading:
Graves on SOHO Technology - An early advocate of HD Voice, etc.
The Voice Communication Exchange - Wants to push the world to HD Voice by 2018.
Tuesday, December 20, 2011
Performance Testing (Part 1)
Over the past few years (like many other people in this business) I’ve needed to do performance testing. Open source software is great, but this is one place where you need to do your own legwork. This conundrum first presented itself in the Asterisk community. There are literally thousands of variables that can affect the performance of Asterisk, FreeSWITCH, or any other software solution. In no particular order:
- Configuration. Which modules do you have loaded? How are they configured? If you’re using Kamailio, do you do hundreds of huge, slow, nasty DB queries for each call setup? How is your logging configured? Maybe you use Asterisk or FreeSWITCH and perform several system calls, DB lookups, Lua scripts, etc. per call? Even the slightest misstep in configuration (synchronous syslogging with Kamailio, for example) can reduce your performance by 90%.
- Features in use. Paging groups (unicast) are notorious for destroying performance on standard hardware - every call needs to be set up individually, you need to handle RTP, and some audio mixing is involved. Hardware that can’t do 10 members in a page group using Asterisk or FreeSWITCH may be capable of hundreds of sessions using Kamailio with no media.
- Standard performance metrics. “Thousands of calls” you say? How many calls per second? Are you transcoding? Maybe you’re not handling any media at all? What is the delay in call setup?
- Hardware. This may seem obvious (MORE HERTZ) but even then there are issues... If you’re handling RTP, what are you using for timing? If you have lots of RTP, which network card are you using? Does it and your kernel support MSI or MSI-X for better interrupt handling? Can you load balance IRQs across cores? How efficient (or buggy) is the driver (Realtek I’m looking at you)?!?
- The “guano” effect. As features are added to the underlying toolkit (Asterisk, FreeSWITCH, etc.) and to your configuration, how is performance affected over time? Add a feature here and a feature there - and repeat. Over the months and years (even with faster hardware) you may find that each “little” feature reduced call capacity by 5%. Or maybe your calls per second went down by two each time. Not a big deal individually, yet over time it adds up - assuming no other optimizations, your call capacity could be down by 40-50% after ten “minor” changes (a quick sketch of the math follows this list). It adds up - it really does.
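To make that last point concrete, here is a toy calculation (mine, purely illustrative - the 1000-call baseline is an arbitrary number) comparing two readings of "5% per change": 5% of whatever capacity is left at that point, versus 5 percentage points of the original capacity. The first reading lands roughly 40% down after ten changes, the second exactly 50% down - either way it adds up.

#include <stdio.h>

/* Toy model of the "guano" effect: ten "minor" changes, each costing 5%.
 * Two interpretations of "5%":
 *   compounding - 5% of whatever capacity is left at that point
 *   linear      - 5 percentage points of the original capacity
 * The 1000-call starting capacity is an arbitrary, hypothetical number. */
int main(void)
{
    const double start = 1000.0;
    const double hit   = 0.05;
    const int    n     = 10;

    double compounding = start;
    double linear      = start;

    for (int i = 1; i <= n; i++) {
        compounding *= (1.0 - hit);   /* 5% of current capacity  */
        linear      -= start * hit;   /* 5% of original capacity */
    }

    printf("after %d changes:\n", n);
    printf("  compounding: %.0f calls (%.0f%% down)\n",
           compounding, 100.0 * (1.0 - compounding / start));
    printf("  linear:      %.0f calls (%.0f%% down)\n",
           linear, 100.0 * (1.0 - linear / start));
    return 0;
}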
Even after pointing out all of these issues you’d still be surprised how often one is faced with the question “Well, yeah, but how many calls can I handle on my dual core Dell server?”.
In almost every case the best answer is “Get your hardware, develop your solution, run SIPp against it and see what happens”. That’s really about as good as we can do.
SIPp is a great example of a typical, high quality open source tool. In true “Unix philosophy” fashion it does one thing and it does it well: SIP performance testing. SIPp can be configured to initiate (or receive) just about any conceivable SIP scenario - from simple INVITE call handling to full SIMPLE test cases. In these tests SIPp will tell you call setup time, messages received, successful dialogs, etc.
SIPp even goes a step further and includes some support for RTP. SIPp has the ability to echo RTP from the remote end or even replay RTP from a PCAP file you have saved to disk. This is where SIPp starts to show some deficiencies. Again, you can’t blame SIPp because SIPp is a SIP performance testing tool - it does that and it does it well. RTP testing, however, leaves a lot to be desired. First of all, you’re on your own when it comes to manipulating any of the PCAP parameters. Length, content, codec, payload types, etc. need to be configured separately. This isn’t a problem, necessarily, as there are various open source tools to help you with some of these tasks. I won’t get into all of them here but they too leave something to be desired.
What about analyzing the quality of the RTP streams? SIPp provides mechanisms to measure various SIP “quality” metrics - SIP response times, SIP retransmits, etc. With RTP you’re on your own. Once again, sure, you could set up tshark on a SPAN port (or something) to do RTP stream analysis on every stream, but this would be tedious and (once again) subject you to some of the harsh realities of processing a tremendous number of small packets in software on commodity hardware.
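For what it’s worth, if you do end up rolling your own RTP quality measurement, the number most analysis tools report is the RFC 3550 interarrival jitter estimate. Here is a minimal sketch of that estimator (my illustration - it is not part of SIPp or tshark, and the 8 kHz clock rate plus the fake packet stream in main() are just assumptions to show it running; compile with -lm):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* RFC 3550 interarrival jitter: for each packet i,
 *   D(i-1,i) = (R_i - R_{i-1}) - (S_i - S_{i-1})
 * where R is the arrival time and S the RTP timestamp, both in RTP
 * timestamp units, and the running estimate is J += (|D| - J) / 16. */
struct jitter_state {
    double   jitter;        /* current estimate, in timestamp units */
    uint32_t prev_rtp_ts;   /* RTP timestamp of the previous packet */
    double   prev_arrival;  /* arrival time of the previous packet  */
    int      have_prev;
};

/* clock_rate is the RTP clock rate in Hz (8000 for G.711), arrival_sec
 * the local receive time in seconds. Returns the jitter in milliseconds. */
static double jitter_update(struct jitter_state *st, uint32_t rtp_ts,
                            double arrival_sec, double clock_rate)
{
    double arrival_ts = arrival_sec * clock_rate;

    if (st->have_prev) {
        double d = (arrival_ts - st->prev_arrival) -
                   (double)(rtp_ts - st->prev_rtp_ts);
        st->jitter += (fabs(d) - st->jitter) / 16.0;
    }
    st->prev_rtp_ts  = rtp_ts;
    st->prev_arrival = arrival_ts;
    st->have_prev    = 1;

    return st->jitter / clock_rate * 1000.0;
}

int main(void)
{
    /* Fake 20 ms / 160-sample G.711 stream where every other packet
     * arrives 5 ms late, just to show the estimator converging. */
    struct jitter_state st = {0};
    double t = 0.0, j_ms = 0.0;
    uint32_t ts = 0;

    for (int i = 0; i < 50; i++) {
        double arrival = t + ((i % 2) ? 0.005 : 0.0);
        j_ms = jitter_update(&st, ts, arrival, 8000.0);
        t  += 0.020;
        ts += 160;
    }
    printf("jitter after 50 packets: %.2f ms\n", j_ms);
    return 0;
}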
Let’s face it - for a typical B2BUA handling RTP the numbers add up very quickly. Let’s assume 20 ms packetization for the following:
Single RTP stream = 50 packets per second (pps)
Bi-directional RTP stream = 100 pps
A-leg bi-directional RTP stream = 100 pps
B-leg bi-directional RTP stream = 100 pps
A leg + B leg = 200 pps PER CALL
What does this look like with 10,000 channels (using G.711u)?
952 Mbit/s (close to gigabit wire speed) in each direction
1,000,000 (total) packets per second
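For anyone who wants to check those figures, here is the back-of-the-envelope arithmetic I believe they come from (my reconstruction, not from the original post): 160 bytes of G.711 payload per 20 ms packet plus RTP/UDP/IPv4 headers and full Ethernet framing overhead (header, FCS, preamble, inter-frame gap), with “10,000 channels” read as 10,000 bidirectional call legs.

#include <stdio.h>

int main(void)
{
    /* G.711u at 20 ms packetization */
    const double ptime_ms    = 20.0;
    const double pps_per_dir = 1000.0 / ptime_ms;   /* 50 pps per direction */

    /* Assumed bytes on the wire per RTP packet:
     * 160 payload + 12 RTP + 8 UDP + 20 IPv4, plus 14 Ethernet header,
     * 4 FCS, 8 preamble, and 12 bytes of inter-frame gap. */
    const double wire_bytes = 160 + 12 + 8 + 20 + 14 + 4 + 8 + 12;  /* 238 */

    const double channels = 10000.0;   /* bidirectional call legs */

    double pps_total    = channels * pps_per_dir * 2.0;
    double mbps_per_dir = channels * pps_per_dir * wire_bytes * 8.0 / 1e6;

    printf("one direction of one stream : %.0f pps\n", pps_per_dir);
    printf("one B2BUA call (A+B legs)   : %.0f pps\n", pps_per_dir * 4.0);
    printf("10,000 channels             : %.0f pps total\n", pps_total);
    printf("10,000 channels             : %.0f Mbit/s each direction\n",
           mbps_per_dir);
    return 0;
}

Drop the Ethernet framing assumption and the same math lands at 800 Mbit/s each direction, which is why “close to gigabit wire speed” is the honest way to phrase it.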
Open source software is great - it provides us with the tools to (ultimately) build services and businesses. Many of us choose what to focus on (our core competency). At Star2Star we provide business grade communication services and we spend a lot of time and energy to build these services because it’s what we do. We don’t sell, manufacture, or support testing platforms.
At this point some of you may be getting an idea... Why don’t I build/design an open source testing solution? It’s a good question and while I don’t want to crush your dreams there are some harsh realities:
1) This gets insanely complicated, quickly. Anyone who follows this blog knows SIP itself is complicated enough.
2) Scaling becomes a concern (as noted above).
3) Who would use it?
The last question is probably the most serious - who really needs the ability to initiate 10,000 SIP channels at 100 calls per second while monitoring RTP stream quality, etc? SIP carriers? SIP equipment manufacturers? A few SIP software developers? How large is the market? What kind of investment would be required to even get the project off the ground? What does the competition look like? While I don’t have the answers to most of these questions I can answer the last one.
Commercial SIP testing equipment is available from a few vendors:
Spirent
Empirix
Ixia
...and I’m sure others. We evaluated a few of these solutions and I’ll be talking more about them in a follow-up post in the near future.
Stay tuned because this series is going to be good!
Friday, December 2, 2011
Star2Star Gets Noticed
Just a quick one today (and some shameless self-promotion on my part)... Star2Star has been recognized on a few "lists" this year - check it out:
Inc 500
Forbes 100 "Most Promising"
I'm lucky enough to tell people the same story all of the time - when I was a little kid I played with all of this stuff because I thought it was fun and I loved it. Only later did I realize that one day I'd be getting paid for it. I certainly never thought it could come to this!
Ok, enough of that for now. I'll be getting back to some tech stuff soon...
Tuesday, November 15, 2011
Building a Startup (the right way)
(Continued from Building a Startup)
Our way wasn’t working. To put it mildly, our “business grade” solution didn’t perform much better than Vonage. We came to exemplify VoIP - jittery calls, dropped calls, one-way calls, etc. Most of this was because of the lack of quality ITSPs at the time. Either way, our customers didn’t care whose fault it was - to them, it was us. If we went to market with what we had the first time around we were going to lose.
The problem was that the other predominant architecture at the time was “hosted”. Someone hosts a PBX for you and ships you some phones. You plug them in behind your router and magically you have a phone system. They weren’t doing much better. Sure, their sales looked good, but even then it was becoming obvious that customer churn was quite high. People didn’t like hosted either, and for good reason. Typically they have less control over the call than we do.
As I’ve alluded to before, I thought there was a better way. We needed to host the voice applications where they made the most “sense”. We were primarily using Asterisk, and with a little creative provisioning, a kick-ass SIP proxy, and enough Asterisk machines we could build the perfect business PBX - even if that meant virtually none of it existed at the customer premise. Or maybe all of it did. That flexibility was key. After a lot of discussions, whiteboard sessions, and late nights everyone agreed. We needed a do-over.
So we got to work and slowly our new architecture began to take shape. We added a kick-ass SIP proxy (OpenSER). OpenSER would power the core routing between various Asterisk servers each meeting different needs - IVR/Auto Attendant, Conferencing, Voicemail, remote phones (for “hosted” phones/softphones), etc. The beauty was the SIP proxy could route between all of these different systems including the original AstLinux system at the customer premise. Customer needs to call voicemail? No problem - the AstLinux system at the CPE fires an INVITE off to the proxy and the proxy figures out where their voicemail server is. The call is connected and the media goes directly between the two endpoints. Same thing for calls between any two points on the network - AstLinux CPE to AstLinux CPE, PSTN to voicemail, IVR to conference.
This is a good time to take a break and acknowledge what really made this all possible - OpenSER. While it’s difficult to explain the exact history and family tree with any piece of SER software I can tell you one thing - this company would not be possible without it. There is no question in my mind. It’s now 2011 and whether you select Kamailio or OpenSIPS for your SIP project you will not be sorry. Even after five years you will not find a more capable, flexible, scalable piece of SIP server software. It was one of the best decisions we ever made.
Need to add another server to meet demand for IVR? No problem - bring another server online, add the IP to a table and presto - you’re now taking calls on your new IVR. Eventually a new IVR led to several new IVRs, voicemail servers, conference systems, web portals, mail servers, various monitoring systems, etc.
What about infrastructure? Given our small scale, regional footprint, and focus on quality, we began buying our own PRIs and running them on a couple of Cisco AS5350XM gateways. This got us past our initial issues with questionable ITSPs. Bandwidth was becoming another problem... We had an excellent colocation provider that offered blended bandwidth, but we still needed more control. Here came BGP, ARIN, AS numbers, a pair of Cisco 7206VXRs w/ G2s, iBGP, multiple upstream providers, etc.
At times I would wonder - whatever happened to spending my time worrying about cross compilers? Looking back I’m not sure which was worse - GNU autoconf cross-compiling hell or SIP interop, BGP, etc. It’s fairly safe to say I’m a sadomasochist either way.
Even with all of the pain, missteps, and work we finally had an architecture to take to market. It would be the architecture that would serve us well for several years. Of course there was more work to be done...
Wednesday, November 2, 2011
Building a Startup
(Continued from Starting a Startup)
After several days of meetings in Sarasota we determined:
1) I was moving to Sarasota to start a company with Norm and Joe.
2) We were going to utilize open source software wherever possible (including AstLinux, obviously).
3) The Internet was the only ubiquitous, high quality network to build a nationwide platform.
4) The Internet was only getting more ubiquitous, more reliable, and faster in coming months/years/decades/etc.
5) We were going to take advantage of as much of this as possible.
These were some pretty lofty goals. Remember, this is early 2006. Gmail was still an invitation-only beta. Google Docs didn’t exist. Amazon EC2 didn’t exist. “Cloud computing” hadn’t come back into fashion yet; the term itself didn’t exist. The Internet was considered (by many) to be “best effort”, “inherently unreliable”, and “unsuitable” for critical communications (such as real time business telephony). There were many naysayers who were confident this would be a miserable failure. As it turns out, they were almost right.
We thought the “secret sauce” to business grade voice over the internet was monitoring and management. If one could monitor and manage the internet connection, business grade voice should be possible. Of course this is very ambiguous, but it led to several great hires. We hired
Joe had already deployed several embedded Asterisk systems to various businesses in the Sarasota area. They used an embedded version of Linux he patched together and a third party (unnamed) “carrier” to connect to the PSTN. The first step was upgrading these machines and getting them on AstLinux. Once this was accomplished we felt confident enough to proceed with our plan. This was Star2Star Communications and in the beginning of 2006 it looked something like this:
1) Soekris net4801 machines running AstLinux on the customer premise.
2) Grandstream GXP-2000 phones at each desk.
3) Connectivity to a third party “ITSP”.
4) Management/monitoring systems (check IP connectivity, phone availability, ITSP reliability, local LAN, etc).
5) Central provisioning of AstLinux systems, phones, etc.
This was Star2Star and there was something I really liked about it - it was simple. Anyone who knows me or knows of my projects (AstLinux, for example) has to know I favor simplicity whenever possible. Keep it simple, keep it simple, keep it simple (stupid).
As time went on we started to learn that maybe this was too simple. We didn’t have enough control. Our monitoring wasn’t as mature as it should have been. We didn’t pick the right IP phones. These could be easily fixed. However, we soon realized our biggest mistake was architecture (or lack thereof). This wasn’t going to be an easy fix.
We couldn’t find an ITSP that offered a level of quality we considered to be acceptable. Very few ITSPs had any more experience with VoIP, SIP, and the internet than we did. More disturbing, however, was an almost complete lack of focus on quality and reliability. No process.
What we (quickly) discovered was the extremely low barrier to entry for ITSPs, especially back then. Virtually anyone could install Asterisk on a $100/mo box in a colo somewhere, buy dialtone from someone (who knows who), and call themselves an ITSP. After going through several of these we discovered we needed to do it ourselves.
Even assuming we could solve the PSTN connectivity problem we discovered yet another issue. All of the monitoring and management in the world cannot make up for a terrible last mile. If the copper in the ground is rotting and the DSL modem can only negotiate 128kbps/128kbps that’s all you’re going to get. To make matters worse in the event of a cut or outage the customer would be down completely. While that may have always happened with the PSTN and an on premise PBX we considered this to be unacceptable.
So then, in the eleventh hour, just before launch I met with the original founders and posed a radical idea - scrap almost everything. There was a better way.
(Continued in Building a Startup (the right way))
Tuesday, October 25, 2011
Starting a Startup
I know I’ve apologized for being quiet in the past. This is not one of those times because (as you’ll soon find out) I’ve been hard at work and only now can I finally talk about it.
Six years ago I was spending most of my time working with Asterisk and AstLinux. I spent a lot of time promoting both - working the conference circuit, blogging, magazines, books, etc. Conferences are a great way to network and meet new people. I did just that. With each conference I attended came new business opportunities. Sure, not all of them were a slam dunk and eventually I started to pick and choose which conferences I considered worthy of the time and investment.
For anyone involved with Asterisk, Astricon is certainly worthy of your time and energy - the mecca of the Asterisk community. Astricon was always a whirlwind and 2005 was no exception. We were in Anaheim, California and embedded Asterisk was starting to really heat up. I announced my port of AstLinux to Gumstix - the “World’s Smallest PBX” - leading to an interview and story in LinuxDevices. I worked a free community booth (thanks Astricon) with Dave Taht and was introduced to Captain Crunch (that’s another post for another day).
It was at Astricon in 2005 that I also met one of my soon to be business partners (although I certainly didn’t know it at the time). While I was promoting embedded Asterisk and AstLinux I met a man from Florida named Joe Rhem. Joe had come up with the idea of using embedded Asterisk systems as the cornerstone of a new way to provide business grade telephone services. Joe and I met for a few minutes and discussed the merits of embedded Asterisk. Unfortunately (and everyone already knows this) I don’t remember meeting with Joe. Like I said Astricon was always a whirlwind and I had these conversations with dozens if not hundreds of people at each show. I made my way through Astricon, made a pit stop in Santa Clara for (the now defunct) ISPCon and then returned home to Lake Geneva, WI with a stack of business cards, a few new stories, and a lot of work to finish (or start, depending on your perspective).
A couple of months later I received an e-mail from Joe Rhem discussing how he’d like to move forward with what we discussed in Anaheim. Joe had recruited another partner to lead the new venture. Norm Worthington was a successful serial entrepreneur, and his offer to lead the company was the equivalent of “having General Patton lead your war effort”. After some catching up I was intrigued by Joe’s idea. A few hours on the phone later everyone was pretty comfortable with how this could work.
Now I just needed to fly to Sarasota, FL (where’s that - sounds nice, I thought) to meet with everyone, discuss terms, plan a relocation, and (most importantly) start putting the company, product, and technology together.
A short time later I found myself arriving in Sarasota. It was early January and, coming from Wisconsin, I couldn’t believe how nice it was. Looking back on it I’m sure Norm and Joe were very confident I’d be joining them in Sarasota. Working with technology I love “in paradise” - how could I resist?
(Continued in Building a Startup)
Tuesday, October 26, 2010
Breaking RFC compliance to improve monitoring
A colleague came to me today with a troubling issue. He's using sipsak and Nagios to monitor some SIP endpoints. Pretty standard so far, right? He noticed that when using UDP and checking an endpoint that was completely offline, sipsak would take over 30 seconds to finally return with an error. Meanwhile Nagios would block and wait for sipsak to return...
Without a simple command line option in sipsak that appeared to change this behavior, we had to enter the semi-complicated world of SIP timers. I feared that to change this behavior we'd have to do some things that might not necessarily be RFC compliant...
What's this? For once I'm actually suggesting you do something against the better advice of an RFC?
That's right, I am.
RFC 3261 defines multiple timers and timeouts for messages and transactions. It says things like:
"If there is no final response for the original request in 64*T1 seconds"
"The UAC core considers the INVITE transaction completed 64*T1 seconds after the reception of the first 2xx response."
"The 2xx response is passed to the transport with an interval that starts at T1 seconds and doubles for each retransmission until it reaches T2 seconds"
Without even knowing what "T1" is you can start to see that it's a pretty important timing parameter and (more or less) serves as the father of all timeouts in SIP. Let's look at section 17 to find out what T1 is:
"The default value for T1 is 500 ms. T1 is an estimate of the RTT between the client and server transactions. Elements MAY (though it is NOT RECOMMENDED) use smaller values of T1 within closed, private networks that do not permit general Internet connection. T1 MAY be chosen larger, and this is RECOMMENDED if it is known in advance (such as on high latency access links) that the RTT is larger. Whatever the value of T1, the exponential backoffs on retransmissions described in this section MUST be used."
T1 is essentially a variable for RTT between two endpoints that serves as a multiplier for other timeouts. Unless we know better T1 should default to 500ms, which is quite high. Some implementations (such as Asterisk with the SIP peer qualify option) automatically send OPTIONS requests to endpoints in an effort to better determine RTT instead of using the RFC default of 500ms.
In reading through the sipsak source code it appeared to be RFC compliant for timing, using a default T1 value of 500ms and a transaction timeout value of 64*T1. This is why it was taking over 30 seconds (32 seconds to be exact) for sipsak to finally timeout and return the status code to nagios. This comes directly from the RFC:
"For any transport, the client transaction MUST start timer B with a value of 64*T1 seconds (Timer B controls transaction timeouts)."
This is all well and good but what happens when you don't have a way to dynamically determine T1 and you can't wait T1*64 (32s) for your results like my sipsak/nagios check earlier? Simple: you go renegade, throw out the RFC, and hack the sipsak source yourself!
So I had three options:
1) Change the default value of T1.
2) Change the overall transaction timeout (Timer B) by changing the 64*T1 multiplier or setting a static timeout.
3) Some combination of both.
I decided to go with option #3 (RFC be damned). Why?
1) 500ms is crazy high for most of our endpoints. At a glance 100ms would be fine for ~90% of them. I'll pick 150ms.
2) I don't need that many retransmits. If the latency and/or packet loss is that bad I'm not going to wait (my RTP certainly isn't) and I just want to know about it that much quicker.
So I ended up with a quick easy patch to sipsak:
diff -urN sipsak-0.9.6.orig/sipsak.h sipsak-0.9.6/sipsak.h
--- sipsak-0.9.6.orig/sipsak.h 2006-01-28 16:11:50.000000000 -0500
+++ sipsak-0.9.6/sipsak.h 2010-10-26 18:38:45.000000000 -0400
@@ -102,11 +102,7 @@
# define FQDN_SIZE 100
#endif
-#ifdef HAVE_CONFIG_H
-# define SIP_T1 DEFAULT_TIMEOUT
-#else
-# define SIP_T1 500
-#endif
+#define SIP_T1 150
#define SIP_T2 8*SIP_T1
diff -urN sipsak-0.9.6.orig/transport.c sipsak-0.9.6/transport.c
--- sipsak-0.9.6.orig/transport.c 2006-01-28 16:11:34.000000000 -0500
+++ sipsak-0.9.6/transport.c 2010-10-26 18:38:51.000000000 -0400
@@ -286,7 +286,7 @@
}
}
senddiff = deltaT(&(srt->starttime), &(srt->recvtime));
- if (senddiff > (float)64 * (float)SIP_T1) {
+ if (senddiff > inv_final) {
if (timing == 0) {
if (verbose>0)
printf("*** giving up, no final response after %.3f ms\n", senddiff);
This changes the value of T1 to 150ms (more reasonable for most networks) and allows you to specify the number of retransmits (and thus the total timeout) using -D on the sipsak command line:
kkmac:sipsak-0.9.6-build kris$ ./sipsak -p 10.16.0.3 -s sip:ext_callqual@asterisk -D1 -v
** timeout after 150 ms**
*** giving up, no final response after 150.334 ms
kkmac:sipsak-0.9.6-build kris$ ./sipsak -p 10.16.0.3 -s sip:ext_callqual@asterisk -D2 -v
** timeout after 150 ms**
** timeout after 300 ms**
*** giving up, no final response after 460.612 ms
kkmac:sipsak-0.9.6-build kris$ ./sipsak -p 10.16.0.3 -s sip:ext_callqual@asterisk -D4 -v
** timeout after 150 ms**
** timeout after 300 ms**
** timeout after 600 ms**
*** giving up, no final response after 1071.137 ms
kkmac:sipsak-0.9.6-build kris$
Needless to say our monitoring situation is much improved.
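For anyone wiring this up themselves, here's a rough sketch of the sort of wrapper Nagios can call around the patched sipsak. The path, host argument, default extension, and the assumption that sipsak exits non-zero when it gives up are all illustrative - adjust for your environment and check sipsak's actual exit codes before trusting it.
#!/bin/sh
# check_sip.sh <proxy-host> [extension] - minimal Nagios plugin sketch
HOST="$1"
EXT="${2:-ext_callqual}"

# -D2 caps us at two timeouts (150ms + 300ms with the patched T1),
# so the check returns long before Nagios gives up on the plugin
if /usr/local/bin/sipsak -p "$HOST" -s "sip:${EXT}@asterisk" -D2 >/dev/null 2>&1; then
    echo "SIP OK - ${HOST} returned a final response"
    exit 0
else
    echo "SIP CRITICAL - no final response from ${HOST}"
    exit 2
fi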
Thursday, August 5, 2010
A ClueCon Update
ClueCon is going very well this year... I spoke the first day and have spent the rest of my time here enjoying the presentations and interacting with the community.
A few highlights:
- Perfect wireless provided by Meraki. I've never been to a tech conference where the wifi has kept up with the crowd. Well done.
- The Trump Tower. Phenomenal.
- FreeSWITCH HA support in Sofia! This is worthy of its own post and it will have one when I get back and play with it. In the meantime my guy Jay Binks has been working to document this exciting new feature.
- Chicago. I just LOVE this town.
Friday, May 21, 2010
A ClueCon Preview...
A while back I saw a preview for the new A-Team movie. While the movie itself looks horrible I was reminded of the original TV series with its many interesting characters and catch phrases. Among my personal favorites?
I love it when a plan comes together.
That's exactly how I feel with one of my "pet projects" from the past couple of months. Much like Hannibal and the A-Team I was up against formidable issues in trying to accomplish my task: implementing a flexible (very flexible), reasonably high performance LCR server that could be added to my existing architecture.
First I needed to select an LCR "engine". Multiple possibilities were considered but I left the final recommendation up to the DB and billing teams I work with. They selected mod_lcr from FreeSWITCH. While I was certain droute from OpenSIPS (or something similar) would have higher performance I accepted their recommendation. After playing with mod_lcr a bit I can also see its potential.
So now the question was: can FreeSWITCH respond with the proper SIP signaling (300 Multiple Choices)? Using the redirect application from mod_dptools it could not. I created a bounty to add multiple Contact/300 Multiple Choices functionality to FreeSWITCH. Tony had it implemented that day.
With the ability to respond properly I now had to get the data. Mod_lcr looked nice but it certainly wasn't designed for this application. All of the default syntax, tables, etc showed it being used with FreeSWITCH for FreeSWITCH. The tables and code used several bridge specific syntax examples. I hacked mod_lcr to return data to mod_dptools/redirect properly. I created a JIRA issue with my patch and a couple of days later Rupa had it committed.
So now FreeSWITCH could be a route server. All I needed to do was make sure OpenSIPS could route from what FreeSWITCH returned. Turns out it could not. RFC 3261 (section 21.3.1) states "...the SIP response MAY contain several Contact fields or a list of addresses in a Contact field." The Sofia stack from FreeSWITCH used multiple Contact headers, each with its own URI. OpenSIPS would only parse the first one returned. Sofia couldn't be changed easily so OpenSIPS would need to be changed (it was non-compliant anyway). Without this change there is no ability to handle multiple contacts and only the first would be used. It could be worse but obviously this wasn't good enough.
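For illustration (hosts and q values invented, unrelated headers omitted), the two forms the RFC allows look like this - Sofia emits the first, and at the time OpenSIPS only looked at the first Contact header it saw:
SIP/2.0 300 Multiple Choices
Contact: <sip:gw1.example.com>;q=1.0
Contact: <sip:gw2.example.com>;q=0.5

versus the single, comma-separated header form:

SIP/2.0 300 Multiple Choices
Contact: <sip:gw1.example.com>;q=1.0, <sip:gw2.example.com>;q=0.5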
I contacted Bogdan from OpenSIPS to see what it would take to update the parser to handle multiple Contact headers. He indicated it would take four hours or so. Once he got back to me I had an OpenSIPS system that would handle multiple contact headers and create new branches from a failure route as desired.
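I won't reproduce our production config, but a minimal sketch of the receiving side looks something like this, assuming the tm and uac_redirect modules are loaded and that t_on_failure() was armed before relaying the INVITE to the FreeSWITCH route server. The route name and contact limits are made up, and the get_redirects() syntax is from memory - check the module docs for your version.
failure_route[FS_REDIRECT] {
    # only act on a 3xx answer from the route server
    if (t_check_status("3[0-9][0-9]")) {
        # turn the Contact URIs from the 3xx into new branches
        # ("4:2" = at most 4 contacts total, 2 per branch set - illustrative)
        get_redirects("4:2");
        t_relay();
    }
}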
So how did it all turn out? Well, you have two ways to hear the end of this story:
1) Attend ClueCon at the Trump Hotel in Chicago, IL in early August.
2) Wait until mid-August for an update here.
I'll make sure to post all of my materials - conference presentation, sipp scenarios for testing, OpenSIPS configuration, FreeSWITCH configuration, DB tweaks, etc.
Too late to make it to ClueCon this year? Just make sure to register next year, I'm sure I'll be there.
Wednesday, May 19, 2010
I've said it before but I'll say it again...
FreeSWITCH rocks!
Earlier today I wanted to play with the possibility of using FreeSWITCH as a route/LCR server for another platform. FreeSWITCH already has mod_lcr and redirect. Using these two features FreeSWITCH could be made to respond with a 302 and a single SIP URI in the Contact field.
I wanted more. I wanted a way to respond with multiple routes.
The standard way to do this (using SIP, of course) is to respond to incoming INVITEs with a 300 Multiple Choices. This response should contain a Contact header (or multiple Contact headers) with a list of SIP URIs (along with optional q values, etc) for the original system to route the call to.
As usual I wrote the FreeSWITCH-Users mailing list to make sure this functionality didn't already exist somewhere. It did not and it was suggested I create a bounty.
Creating a bounty is always tough... I don't deal with the source code of FreeSWITCH all that often. I don't know how much work this is going to take. I don't know how much C programmers make. So I did my best to come up with something that seemed fair: $250.
Less than two hours later the feature was coded, committed to FreeSWITCH, tested by me, and paid for.
Once again, Open Source for the win!
Wednesday, May 12, 2010
Another SIP gotcha: Cisco
Another quick and dirty SIP interop post.
A while back I was tasked to interface a FreeSWITCH server and a Cisco Unified Communications Manager system. Once the SIP trunk was configured on the Call Manager/CUCM side they sent an INVITE over. It didn't have an SDP.
It appeared that we needed to enable 3pcc (third party call control) in FreeSWITCH. No problem. I enabled 3pcc and interop continued.
Problems arose, however, when we needed to send the Cisco ringback. Whether it be a 180 or 183 (with or without SDP for either) this was going to be tough because with 3pcc enabled the dialog looked like so:
Cisco/CUCM                        FreeSWITCH
  INVITE (without SDP)    --->
                          <---    100 Trying
                          <---    200 OK (with SDP)
  ACK (with SDP)          --->
So... There was no opportunity to signal progress as long as we 200 OKd the call almost immediately. Sure I probably could generate some ringback after the 200 but that would just be wrong!
As I like to say, the internet to the rescue. Not having much experience with CUCM I thought I'd ask on VoiceOps. Within a few minutes a very nice gentleman by the name of Mark Holloway mentioned "Media Termination Point Required" as a CUCM configuration option. These were the magic words. After some research it turned out that was the configuration option I needed*. Thanks Mark!
Once "Media Termination Point Required" was enabled on the Cisco side I disabled 3pcc in FreeSWITCH and all was good. Users even get ringback now!
I also brought the issue up on the FreeSWITCH-Users mailing list and found out this has been bothering people for some time. MC from FreeSWITCH was even nice enough to start a wiki page for me to document all of this there.
Sometimes with SIP it's all about the SIMPLE achievements ;).
* That research also brought up another possibility: enabling PRACK/100rel on the CallManager side instead of "MTP Required". Of course the trouble with PRACK is there are a lot of SIP implementations (Asterisk) that don't support it. FreeSWITCH does but can crash. Many SIP implementations don't support the default CUCM configuration (INVITE w/o SDP). I was looking for the most canonical, compatible configuration possible.
Friday, February 5, 2010
(High Quality) VoIP on the iPhone
(Regular readers will note that my excessive use of parentheses has now spilled into my titles and first sentences!)
Ahhh Apple... Ahh the iPhone. Regardless of how you feel about this company or their product you can't doubt the market impact they've made over the last couple of years (decades perhaps?). Multitouch (NOT multitasking). App Store. iTunes. There are countless other blogs that discuss these topics so I don't need to. As usual I'm here to talk about VoIP.
For the last ten months or so I've been involved (part time) in another local venture. Voalte (pronounced volt) is a startup here in Sarasota, FL founded by Trey Lauderdale. When the Apple iPhone was announced Trey was working in sales for Emergin, a healthcare IT middleware provider. Trey noticed how incredibly arcane the mobile devices used in healthcare are when compared to this new device from Apple. Once the iPhone SDK was announced Trey knew he had to develop an iPhone application for healthcare. This application became Voalte One.
Voalte One is an iPhone application that provides voice, alarms, and text for healthcare point of care providers (that's nurses to you and me). The complete Voalte One solution consists of the following parts:
- iPhone
- Voalte Server (XMPP, LDAP, etc)
- Voalte Voice Server (FreeSWITCH using SIP + event socket)
- An overall excellent customer/user experience (also new to healthcare)
Text messaging and alarm integration are cool but as I've already said, I do VoIP. If you'd like to know more about iPhone development, XMPP, LDAP, etc let me know and I can point you in the right direction.
VoIP in our application is interesting. It's a softphone, technically, but unlike any you've seen before. As everyone knows the iPhone cannot run multiple applications. It can't background applications. These are just two of the many challenges introduced when developing a user-friendly, always available, reliable non-GSM phone experience for the iPhone. Simply downloading an off the shelf softphone and installing it on the iPhone is not enough.
We're a startup and we get to do cool things. For example, one of the big things that sets VoIP/voice in Voalte One on the iPhone apart is the voice quality. We use G.722 wideband at 16kHz as our standard voice codec. Why? Because one Saturday (after a long night out) Trey and I were having lunch. I asked him if he thought we should set ourselves apart on something as basic as sample rate. After a little explanation on my part we quickly decided - why not?
As cool as G.722 is it introduces some interesting challenges:
- The iPhone. How are we going to get 16kHz audio from the hardware?
- PJSIP (our SIP stack). Does it support G.722? How does it interface with the audio hardware?
- Hospital PBXs. Voalte One interfaces with the hospital PBX as an ordinary extension. Most of them probably don't support G.722. How/where do we resample to the standard 8kHz used in G.711? (See the sketch after this list for the basic idea.)
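To make the resampling question concrete, here is a deliberately naive sketch of taking 16kHz samples down to 8kHz by averaging pairs. Treat it purely as an illustration of the concept - in practice you lean on the proper band-limited resamplers in PJSIP or the media server rather than anything this crude.
/* Naive 16kHz -> 8kHz downsample: average each pair of samples.
 * Illustration only - a real deployment should use a proper
 * band-limited resampler to avoid aliasing. */
#include <stddef.h>
#include <stdint.h>

size_t downsample_16k_to_8k(const int16_t *in, size_t in_samples, int16_t *out)
{
    size_t out_samples = in_samples / 2;
    for (size_t i = 0; i < out_samples; i++) {
        /* crude low-pass: average two neighboring samples, then keep one */
        out[i] = (int16_t)(((int32_t)in[2 * i] + (int32_t)in[2 * i + 1]) / 2);
    }
    return out_samples;
}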
After looking through PJSIP and the available audio drivers for the iPhone we decided we needed to write our own. There were legal and technical reasons and I'm glad we did it. Especially because I didn't have to do most of the work! ;) We also confirmed PJSIP supports G.722.
Voalte has an amazing iPhone developer - Robbie Hanson. Robbie, Ben (Voalte CTO), and I were able to look over the available audio frameworks on the iPhone and pick the best. Not only is it the best overall (it supports echo cancellation, etc), it would also provide the 16kHz sampling rate we knew we needed.
After working with PJSIP and AudioUnit for a while Robbie was able to write an iPhone audio driver (using AudioUnit, of course) for PJSIP. While working on the audio driver Robbie (along with another contributor) also wrote an Objective C wrapper for PJSIP. These are the raw ingredients of a high quality VoIP experience on the iPhone.
In the months leading up to release we had to deal with a plethora of other issues: push notifications, local ringback, wifi, etc, etc. I won't (and probably can't) describe these issues in detail.
The good news is Voalte has done the right thing and released the core components of this solution as open source.
I'm proud to work with companies that "get it" and are willing to actively participate in the free software ecosystem.
Tuesday, February 2, 2010
Upcoming Review: Building Telephony Systems with OpenSIPS 1.6
Packt Publishing has once again asked me to review their latest work in the OpenSIPS series: Building Telephony Systems with OpenSIPS 1.6.
My review of the previous edition goes all the way back to when OpenSIPS was called OpenSER. I have another post discussing that topic...
Anyways, I should be receiving the book this week and I should have a review up by next week.
Wednesday, January 20, 2010
AstLinux 0.7 released and more!
The AstLinux team (of which I'm an occasional member) has released AstLinux 0.7. Darrick, Philip, Lonnie, and the rest of the community have done a great job getting this release out there. I couldn't be happier with how my little project has grown up!
In addition to getting this release out, they've also taken the time to focus on documentation and a new website.
Well done guys!
Testing with SIPP
A quick one, I promise...
I'd been having some issues testing Asterisk with sipp. It turns out there is a fairly well known issue with sipp when using five digit port numbers for RTP. A quick Google search found a solution pretty quickly.
Just in case that link ever goes dead, here's the diff:
diff -urb sipp.svn_orig/call.cpp sipp.svn_fixed/call.cpp
--- sipp.svn_orig/call.cpp 2008-12-19 13:14:51.000000000 +0300
+++ sipp.svn_fixed/call.cpp 2008-12-19 13:16:34.000000000 +0300
@@ -192,7 +192,7 @@
/* m=audio not found */
return 0;
}
- begin += strlen(pattern) - 1;
+ begin += strlen(pattern);
end = strstr(begin, "\r\n");
if (!end)
ERROR("get_remote_port_media: no CRLF found");
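For anyone new to sipp, a minimal smoke test with the patched build against an Asterisk box looks something like this - the IP address, extension 600, rate, and call count are all made up for the example:
# 10 calls at 2 calls per second to extension 600, using the built-in UAC scenario
./sipp -sn uac -s 600 -r 2 -m 10 10.16.0.10:5060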
More on sipp later!
Friday, October 9, 2009
I don't "do" testimonials
I was (admittedly) getting a little bored at the office today when I realized that one of my favorite activities to pass time is checking the recent changes page of the FreeSWITCH wiki. Think about that for a minute...
If that doesn't make me a FreeSWITCH fanboy, I don't know what does. I guess I just have to finally admit it:
I am a FreeSWITCH fanboy.
How I got here I'm not really quite sure. What I do know is that I despise fanboys of all sorts (in no particular order):
- Linux
- Apple
- Microsoft (how?)
- Asterisk
- FreeSWITCH (self-loathing and denial are very powerful forces)
I've been using FreeSWITCH for over a year and it's the closest thing to a whirlwind romance I've ever had (nerdom solidified, thank you). While I'm sure the honeymoon period will end eventually at the moment I'm still smitten. I sat down and started to write a testimonial on the FreeSWITCH wiki when I realized that this was the perfect opportunity to return to blogging. I'm back and I want to talk about FreeSWITCH. Here's what I have to say (complete with an obligatory car analogy):
Star2Star has over 12,000 (business) endpoints in the field and we currently use FreeSWITCH for:
- SBC (multiple applications - customer and provider facing)
- Conference services
We're very quickly moving to IVR and (ultimately) voicemail.
What is most impressive about FreeSWITCH are the small details that make working with it an absolute joy:
- Sofia profiles (multiple unique SIP identities Just. Like. That.)
- XML flexibility (static, custom module, mod_xml_curl)
- Dialplan (conditions, PCRE, etc)
- Channel variables (access to remote media IP address and anything else in a CHANNEL VARIABLE!)
- proxy_media
- bypass_media (beautiful, just beautiful)
- RICH SIP/RTP support (SIPS, TCP, UDP, SCTP, SRTP, PAI, RPID). Yeah, I can talk to anything any way I need to.
- Codec support. Resampling. Repacketization. It's just getting ridiculous.
- Event socket. Inbound/outbound. Async. API commands. Events.
It is still *amazing* to me that I can make FreeSWITCH into the ultimate signaling-only SBC just by adding bypass_media=true or that I can send PAI to a destination with sip_cid_type=pid. Just about every component of FreeSWITCH I've come across is very well thought out, completely logical, and completely predictable. It's like driving a German car - if you take driving seriously everything just works and everything just works exactly as you expect it to.
It's as if Anthony, Brian, Michael, and everyone else on the FreeSWITCH team are constantly sitting around, agonizing over every detail of this awesome piece of software just as Hans, Fritz, and Karl are in Munich agonizing over every detail of the next BMW 7 Series. While there are several differences in this analogy perhaps the biggest difference is that it doesn't cost $100,000 for one of the most well engineered products I've ever come across (obviously I'm not talking about the BMW).
Did I mention that in addition to this rich feature support it actually scales? All the features in the world are completely useless to me in my environment unless they can scale. We've seen again and again - FreeSWITCH scales. With OpenSER/OpenSIPS at our core and FreeSWITCH serving various edge and endpoint roles I can't identify a single point where we couldn't (ultimately) scale to meet growth or demand.
Star2Star is very unique. We have a unique product, unique approach, and unique architecture. While the architecture has generally evolved and scaled over the years with the number of users, the introduction of FreeSWITCH in a given role has always been accompanied by an excitement for what we'll be able to do for our company and our customers with our new favorite piece of software. It's a very exciting time.
The future at Star2Star is powered by our (quickly) growing customer base, FreeSWITCH, and our efforts to relentlessly deploy it where we can.
Wednesday, March 25, 2009
CANCEL
I don't have a tremendous amount of time so this is going to be a short one.
There is a CANCEL related bug in the Asterisk 1.4.23 releases. To be honest I'm not sure when it was introduced but I know when it was fixed:
http://bugs.digium.com/view.php?id=14431
This caused trouble for me because I often use Asterisk in tandem with OpenSIPS and FreeSWITCH. Both platforms were unable to match the CANCEL sent by Asterisk to the original INVITE. As the bug note says, this is because the CANCEL sent by Asterisk had a different branch parameter than the original INVITE. OpenSER/OpenSIPS would fail the t_check_trans() check whenever the method was CANCEL.
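To picture the mismatch: the top Via branch on a CANCEL has to be identical to the one on the INVITE it is cancelling, and the broken builds were sending something like this (addresses and branch values invented for the example):
INVITE sip:100@10.0.0.2 SIP/2.0
Via: SIP/2.0/UDP 10.0.0.1:5060;branch=z9hG4bK1a2b3c

CANCEL sip:100@10.0.0.2 SIP/2.0
Via: SIP/2.0/UDP 10.0.0.1:5060;branch=z9hG4bK9f8e7d
Since the second branch doesn't match the first, neither OpenSIPS nor FreeSWITCH can tie the CANCEL back to the pending INVITE transaction.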
FreeSWITCH was a *little* easier to diagnose because it would send a 481. I suppose I could have made (and should make) my OpenSIPS configurations do this when using t_check_trans for CANCEL:
if (is_method("CANCEL")) {
if (!t_check_trans()) {
# No matching transaction, error and exit
sl_send_reply("481","Call leg/transaction does not exist");
exit;
}
# Hand it to tm
t_relay();
exit;
Anyways this has certainly been fixed in Asterisk 1.4.24. I'm looking forward to not dealing with any of this for some time...