Multicast data check with Perl IO::Socket::Multicast

I spend a lot of time dealing with multicast, and one of the more tedious tasks is verifying that I’m receiving data across all of the groups I need. Whether it’s setting a baseline prior to a network change, or verifying everything works after the change, it’s an important task that gets more and more tedious as the number of groups increases.

I’m not a programmer, but I play one on TV.

I decided to try to write something in Perl, which is no small task considering I’ve never excelled when it comes to coding/scripting. Fortunately I know enough to be dangerous, and by combining some elements I’ve seen across various forums with a lot of reading of the Perl documentation, I was able to cobble together something that automates my tests.

In my first version I was able to successfully read in group information from a file, and wait for data on each line. The problem was that the program executed serially: with a list of 50 groups and a 60-second timeout per line, the test could take up to 50 minutes to complete.

The second version forks those tests out to individual child processes, dramatically decreasing the time it takes to verify a large list of groups.

I’m posting the code here in case someone out there is struggling with a similar issue, and is also not a programmer by nature. I’m sure there are lots of better ways to accomplish this in code. As I said, I’m not a programmer. But this seems to work. If you have any comments on ways to make it more efficient or handle something better, I’d love to hear about it in the comments.



use strict;
use warnings;
use IO::Socket::Multicast;

my $grouplist = $ARGV[0];

my $GROUP = '';
my $PORT  = '';
my @childproc;
my $data;
print "Parsing File...\n";

## Attempt to open the file provided on the CLI
open(FILE, $grouplist) || die "can't open file: $!";

my @list = <FILE>;
close(FILE);

print "Contents: \n", @list;

print "Processed Data:\n";
foreach (@list) {
        chomp;
        ($GROUP, $PORT) = split(':', $_);
        my $pid = fork();
        if ($pid) {
                push(@childproc, $pid);
        } elsif ($pid == 0) {
                datacheck($GROUP, $PORT);
                exit 0;
        } else {
                die "Couldn't Fork: $!\n";
        }
}

## Wait for every child test to finish before declaring the run complete
foreach (@childproc) {
        my $tmp = waitpid($_, 0);
}

print "End of Testing\n";

sub datacheck {
        my $GROUPSUB = $_[0];
        my $PORTSUB  = $_[1];

        ## LocalAddr and ReuseAddr let multiple children share a group or a port
        my $msock = IO::Socket::Multicast->new(
                Proto     => 'udp',
                LocalAddr => $GROUPSUB,
                LocalPort => $PORTSUB,
                ReuseAddr => 1,
        ) || die "Couldn't create socket: $!\n";
        $msock->mcast_add($GROUPSUB) || die "Couldn't set group: $!\n";

        eval {
                $SIG{ALRM} = sub { die "timeout" };
                ## The alarm value decides how long to wait before giving up on each test
                alarm(60);
                $msock->recv($data, 1024);
                alarm(0);
        };
        if ($@) {
                print "$GROUPSUB:$PORTSUB -- No data received\n";
        } else {
                print "$GROUPSUB:$PORTSUB -- Data received\n";
        }
}

I’ve tested this on CentOS with Perl 5.8.8. The script requires the IO::Socket::Multicast Perl module, so be sure to install it before you attempt to run the script.

The script expects you to provide the name of a text file that contains a group:port mapping per line.
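For reference, the input file is just one group:port pair per line. A hypothetical example (these addresses and ports are placeholders, not from any real deployment):

```
239.1.1.1:5000
239.1.1.2:5000
239.1.1.3:5001
```

If the script were saved as, say, mcastcheck.pl, you would run it as `perl mcastcheck.pl groups.txt`.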

Update: I discovered that I could run into problems if my list ever contained more than one of the same group, or more than one of the same port. After consulting the module documentation, I realized that I needed to add the LocalAddr and the ReuseAddr options to the constructor code.

Multicast Performance on ESX

Multicast data has always been somewhat of a mystery to network engineers unless they have a very specific reason for using it. Since the financial industry is a heavy user of multicast, I have been fortunate to get my hands very dirty in it throughout my career.

One item that has always vexed our group is how we can consolidate our multicast workloads, and extend the efficiency gains of virtualization to this segment of our environment. These boxes represent a significant cost, and they often go underutilized in terms of CPU/memory. But because of the nature of the data, it’s difficult to try anything that could degrade performance.

In ESX 5.0, VMware introduced a new technology that is supposed to help alleviate these performance bottlenecks:

  • splitRxMode

I’ll summarize the feature here as described in the Technical Whitepaper:


In previous versions of ESX, all network receive processing for a queue was performed inside a single context within the VMkernel. splitRxMode allows you to direct each vNIC (configured individually) to use a separate context for receive packet processing.

They make a special note to indicate that even though it improves receive processing for multicast, it does incur a CPU penalty due to the extra overhead per packet, so don’t enable it on every machine.
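For reference, the whitepaper enables the feature per vNIC through a .vmx parameter rather than a global host switch. If I’m reading it correctly, the setting looks like the following, where X is the index of the vNIC you want split out (verify the exact key against the whitepaper for your ESX build):

```
ethernetX.emuRxMode = "1"
```

Setting the value back to "0" returns that vNIC to the default shared-context receive processing.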


In their testing, VMware’s labs reported 10-25% packet loss on a 16 Kpps multicast stream once the number of subscriber VMs went past 24. After they enabled splitRxMode, packet loss was < 0.01% all the way up to 32 VMs on the host.

My Take

Even though VMware seems confident that the recent I/O improvements with splitRxMode will increase multicast performance, there are some key considerations here:

  1. 0.01% is still a lot of packet loss; at 16 Kpps, that is still more than one packet lost every second
  2. The scenario they tested is for a one-to-many situation (one stream to multiple receivers). What if the packet rate is higher or the number of streams is higher, but the receiver count is low?
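The arithmetic behind point 1 (my own back-of-the-envelope sanity check, not a figure from the whitepaper):

```python
# 0.01% loss on a 16,000 pps stream still means packets dropped every second
rate_pps = 16000          # packets per second in VMware's test stream
loss_pct = 0.01           # reported loss ceiling with splitRxMode enabled
lost_pps = rate_pps * loss_pct / 100
print(lost_pps)           # ~1.6 packets lost per second
```

For latency-sensitive market data, even a steady trickle like that can matter.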

Obviously this requires a lot more testing on our part before we’d ever even consider rolling anything out to production. If you have any experience in this regard, please feel free to comment and offer any insights/suggestions you might have.

NOTE: This entry is my first foray into technical blogging. I’ve learned a lot from the blogs I’ve read over the years, and I’ve also found that these types of blogs are the absolute best resource for solving real problems. I hope I can contribute something meaningful and perhaps repay some of what I’ve been given.