Introduction to Cisco Firepower Threat Defense (FTD) on ASA 5500-X

This week I’m working on testing out the new Firepower Threat Defense (FTD) 6.1 image for the ASA 5500-X, and hopefully getting familiar with how things work in the new setup. One of the things I’m most excited about is the onboard management interface — this is an HTML-based interface that no longer requires ASDM, which is a huge step in the right direction, in my opinion.

I’m going to cover the reimage process and see what a box looks like from a fresh start, as well as give an overview of the management interface and the CLI. I’ll try not to dig too deep in this introduction, but I’m hoping to provide a lot of screenshots of various screens and things I notice during the setup.

For your reference, you can find the 6.1.0 release notes here, and the Firepower Threat Defense 6.1.0 Configuration Guide here.


I’ll cover the gist of the reimage process now, but you can find the full instructions here.

The big thing to note about the reimage is that it will wipe out everything you have on your device — configuration, ASA/ASDM images, AnyConnect packages — everything. So be sure to back up anything you want to keep.

To complete the reimage you’ll need console access to your ASA, a TFTP server, and the FTD cdisk file for your platform.

Let’s get started.

  1. Verify your hardware has the correct ROMMON version. FTD requires a minimum version of 1.1.8. You can verify this using the show module command, looking at the Fw Version value.
    5515-X# sh module
    Mod  Card Type                                    Model              Serial No.
    ---- -------------------------------------------- ------------------ -----------
    0 ASA 5515-X with SW, 6 GE Data, 1 GE Mgmt, AC ASA5515            XXX
    Mod  MAC Address Range                 Hw Version   Fw Version   Sw Version
    ---- --------------------------------- ------------ ------------ ---------------
    0 84b8.022a.133f to 84b8.022a.1346  1.0          2.1(9)8      9.4(2)6
  2. Reload your ASA.
  3. Interrupt the boot process by pressing ESC when prompted.
  4. In ROMMON, configure your network settings:
    rommon #0> interface gigabitethernet0/1
    rommon #1> address
    rommon #2> server
    rommon #3> gateway
    rommon #4> file ftd-boot-
    rommon #5> set
  5. Confirm your settings and commit the changes using the sync command
    rommon #6> sync
  6. Initiate the image download using the tftpdnld command:
    rommon #7> tftpdnld
  7. After the download completes, the device will load the image and you’ll be at an FTD Boot console. From here, use the setup command to configure the basic parameters for your box (hostname, address, gateway, DNS, NTP).
  8. The last step is to download and install the actual FTD install package.
    > system install noconfirm

    The documentation says this step could take up to 30 minutes, but mine finished in less time.

Configuring the ASA using Firepower Device Manager

Once the box is back online, we’re now ready to test out the new onboard management interface, Firepower Device Manager. Browsing to the management address, we’re presented with a screen that almost brings a tear to my eyes:


Finally! After so many years of fighting with ASDM and trying to find the right Java version, we’re able to use a built-in web interface. Ignore any limitations with the available functionality in FDM for now — just savor the moment.

The default login is admin with the password Admin123.

After logging in, we have to go through an initial setup wizard.


That’s fine, I guess, so let’s move through it. I’ll select Gig0/5 as my outside interface since I don’t have it hooked up to anything but the LAN right now.


Next we’ll set up the outside interface addresses for IPv4 and IPv6:


And the Management interface DNS. I chose DHCP during the CLI setup, so this info was already populated for me. All I did was change the hostname.


Click Next and wait patiently….


Uh oh, bold red text is bad, right?


Ok, fine. So it really wants to be able to talk to the outside world during the setup. My test box is remote, and since I only had 2 interfaces connected, I figured I would configure the box without an internet connection for now. Guess FDM had other ideas. So I’ll go back and change the outside interface to be Gig0/0, and we’ll leave the inside disconnected.

I went back through the last couple of screens after changing the outside interface, and was then asked to configure NTP:


Now we get to the licensing page. It looks like FTD will only use Smart Licenses, so I’ll be sure to familiarize myself with that in the very near future. For now I’ll use the 90-day eval license.


And bingo, we’re ready to rock and roll!


The Dashboard

After the confirmation window, we land at the device dashboard:


There’s not much to say about this. It’s got a nice clean look, and it gives you quick access to most of the basic settings on your box. I would compare it to the Device Setup and Device Management sections from ASDM.

Monitoring Menu


The System Dashboard within the monitoring menu is really similar to the ASDM landing page — you have some graphs of throughput, CPU, and Memory, as well as event counts and Disk usage. My test box doesn’t have anything connected to it, unfortunately, so I have to apologize since my screenshots won’t be showing much more than the layout.

If you look down the menu on the left side of the screen, you’ll see the Firepower categories. This is the same info you would see in the Firepower Management Center (FMC) console, or your Firepower Dashboards within ASDM if you’re running it directly on the box.

Policies Menu

When you click on the Policies menu item, you land on the Access Control page. This is where you will build your policies for allowing/denying traffic, and it is analogous to the ASDM Access Rules page. There is a default rule already installed (part of the initial setup process) allowing all traffic from inside to outside.


You’ll quickly notice that much like other sections in the management interface, the access rule page feels a bit like the FMC, or even like the Palo Alto firewall interface, for those of you who are familiar with PA. The similarities are even more apparent when you add an access rule:


Clicking on the NAT item, you’ll see the default NAT rule that was also added during the initial setup:


Adding a new NAT rule is just as easy as it was in ASDM, although now you are required to create objects for everything:



Something to note here that differs from ASDM — hovering over an object does not reveal its IP address, only the object name.

The last page under the Policies menu is Identity, and this is where you configure policies for obtaining user identity information.

The first thing to do here is define where you will pull identity information. You can choose between Active Directory or … Active Directory. AD is currently the only supported server type, and you’re only allowed to configure one server here.


Once you have a Directory Server configured you can add identity policy rules. The two types of Authentication available are Active and No Auth. Active Authentication is only used on HTTP traffic, per this note in the help documentation:

Keep in mind that regardless of your rule configuration, active authentication is performed on HTTP traffic only. Thus, you do not need to create rules to exclude non-HTTP traffic from active authentication. You can simply apply an active authentication rule to all sources and destinations if you want to get user identity information for all HTTP traffic.

Another thing to note from the help documentation is that Identity policies don’t actually block traffic — they’re used for gathering information only.

I won’t dig too deep into this right now, but there are two types of Active Authentication – HTTP and HTTP Response – and one transparent method that uses Integrated Windows Authentication.

Objects Menu

Moving over to the Objects menu, you’ll see that this is a very familiar space where we can define our host/network objects, port/port group objects, and security zones. The only new thing to notice here is that we can configure application filter, URL, and geolocation objects for use in access control rules.


One final note about the Policies and Objects menus is that, much like ASDM, changes are queued for delivery to the device. As you begin making changes, you’ll notice an icon on the top bar with an orange dot:


Seems simple enough — queue changes to deliver them in bulk – got it. One thing I couldn’t find, however, was a cancel or reset changes button. So at this point in time it appears that changes made through the FDM interface are a one-way street — better make sure you backed up your config before you started messing around with things.

On the plus side though, after the deployment is completed you can see a record of the changes that were made:



The beloved ASA CLI has also changed with the FTD image. After you first log in, you can see that we are no longer in Kansas, er, in ASA land anymore. Instead, we’re running the Cisco Fire Linux OS:

    Copyright 2004-2016, Cisco and/or its affiliates. All rights reserved.
    Cisco is a registered trademark of Cisco Systems, Inc.
    All other trademarks are property of their respective owners.

    Cisco Fire Linux OS v6.1.0 (build 37)
    Cisco ASA5515-X Threat Defense v6.1.0 (build 330)


At first glance things appear pretty similar — you can still run most of your show commands, including:

  • Viewing translations
          > show xlate
            1 in use, 1 most used
            Flags: D - DNS, e - extended, I - identity, i - dynamic, r - portmap,
               s - static, T - twice, N - net-to-net
            NAT from outside: to any:
                flags sIT idle 51:12:07 timeout 0:00:00
  • connections
    > show conn all
    8 in use, 14 most used
    UDP outside NP Identity Ifc, idle 0:01:23, bytes 300, flags -
    UDP outside NP Identity Ifc, idle 0:01:07, bytes 600, flags -
    UDP outside NP Identity Ifc, idle 0:00:40, bytes 900, flags -
    UDP outside NP Identity Ifc, idle 0:00:37, bytes 900, flags -
    UDP outside NP Identity Ifc, idle 0:01:19, bytes 900, flags -
  • routes
    > show route
    Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
        D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
        N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
        E1 - OSPF external type 1, E2 - OSPF external type 2, V - VPN
        i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
        ia - IS-IS inter area, * - candidate default, U - per-user static route
        o - ODR, P - periodic downloaded static route, + - replicated route
    Gateway of last resort is to network
    S* [1/0] via, outside
    C is directly connected, outside

and much, much more!

Suffice it to say that a lot of what you’re used to seeing from the CLI is still available as it relates to viewing your setup and troubleshooting. The big gotcha, however, is that it appears you can’t easily make changes from the CLI. There is no configure terminal anymore, and the configuration commands left available to you are minimal:

> configure
  disable-https-access   Disable https access
  disable-ssh-access     Disable ssh access
  firewall               Change to Firewall Configuration Mode
  high-availability      Change to Configure High-Availability Mode
  https-access-list      Configure the https access list
  log-events-to-ramdisk  Configure Logging of Events to disk
  manager                Change to Manager Configuration Mode
  network                Change to Network Configuration Mode
  password               Change password
  ssh-access-list        Configure the ssh access list
  ssl-protocol           Configure SSL protocols for https web access.
  user                   Change to User Configuration Mode

Some of the available configure commands are a bit misleading as well. For example, configure firewall does not allow you to change anything about the firewall other than switching between routed and transparent mode. Clearly the goal here is to get you out of the CLI and back into the web interface. In fact, I wasn’t able to find anything really useful to configure from the CLI — just basic items that you would use to set up the box in the first place. If I find any more useful details I’ll update this post, but for now I’ll just assume that the CLI is for troubleshooting only, and all configuration should be done from the GUI.

My Observations

First of all, I love the direction this is going and have wondered for years why Cisco stayed with ASDM given that the competitors are using built-in interfaces. That being said, I also realize and acknowledge that it takes a lot of effort to move away from a management tool like ASDM. I was at Cisco Live this year in Las Vegas, and the ASDM angst was palpable. In fact, when FTD was mentioned in one of my sessions, the crowd went wild when the presenter made the comment that there was no more ASDM in FTD. Many of us have years of experience with ASAs (or even the PIX), so ASDM is very comfortable to us, but it’s hard to deny the anguish it has caused over the years.

As for Firepower Threat Defense itself, it’s a great start and I can’t wait to see what the next releases bring. I’m calmly reminding myself that this is the first, dot-zero release of FTD 6.1. Things take time, and the best things take more time.

NX-OS Port Profiles

As I become more familiar with NX-OS, I frequently find features that are meant to make life easier for us network admins and engineers. I’ve been informed that the days of CLI jockeys are rapidly coming to an end, and rightly so, but even with my best DevOps attempts I still find myself having to manually edit configs frequently. One of my least favorite tasks is adding a new VLAN to our ESX cluster — there are just so many interfaces to touch. There must be a better way! Turns out, there is at least one better way (of many, I’m sure) — port profiles.

Port Profile Overview

Port profiles are interface configuration templates that can be assigned to ports that have the same configuration requirements. If you’ve ever found yourself copying and pasting interface configurations on a box, then port-profiles can help you.

The limit to the number of ports that can inherit a profile is platform dependent — my Nexus 7700s show a limit of 16384, while my Nexus 9300s show 512.

Creating a port profile

Let’s walk through a really simple example of a port-profile.

  1. First, we create the profile and, in so doing, define the type of interface to which the profile will be applied.

    NX9K(config)# port-profile type ?
      ethernet          Ethernet type
      interface-vlan    Interface-Vlan type
      port-channel      Port-channel type

    For this example we’ll create an ethernet type. Please also note that on the Nexus 7Ks we can also use types of loopback and tunnel.

  2. Next we define the commands that will be applied to every interface.

    NX9K(config)# port-profile type ethernet MY-TEST-PROFILE
    NX9K(config-port-prof)# switchport
    NX9K(config-port-prof)# switchport mode trunk
    NX9K(config-port-prof)# switchport trunk allowed vlan 10,20,30,40,50,100
    NX9K(config-port-prof)# spanning-tree port type edge trunk
    NX9K(config-port-prof)# no shutdown
  3. Lastly we change the state of the profile to enabled.

    NX9K(config-port-prof)# state enabled

That’s all you need to do to create a profile. We can review the configuration on our profile by using the show port-profile command:

NX9K# show port-profile


port-profile MY-TEST-PROFILE
 type: Ethernet
 status: enabled
 max-ports: 512
 config attributes:
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30,40,50,100
  spanning-tree port type edge trunk
  no shutdown
 evaluated config attributes:
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30,40,50,100
  spanning-tree port type edge trunk
  no shutdown
 assigned interfaces:

This output gives us nearly all the info we need — the type of profile we created, the commands that it contains, commands that are actually being applied (evaluated), and any interfaces that are assigned to use this profile. At this point we haven’t assigned an interface so let’s do that now.

Assigning profiles to interfaces

To assign our newly created profile, we use the inherit port-profile interface sub-command

interface eth101/1/1
  inherit port-profile MY-TEST-PROFILE

And that’s it! Very easy stuff here.

Now the best part comes days or months later when you need to modify the ports. You simply add the new command(s) to the profile, and all assigned interfaces automatically get the updated config.
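
For example, suppose we later need to add a new VLAN to every port that inherits the profile (VLAN 200 here is just an illustration). We only touch the profile:

NX9K(config)# port-profile type ethernet MY-TEST-PROFILE
NX9K(config-port-prof)# switchport trunk allowed vlan add 200

Note the add keyword: re-entering switchport trunk allowed vlan without it replaces the entire allowed list instead of appending to it.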

Viewing interface and port profile configurations

The only thing to remember down the road is that your interfaces will no longer show their actual configuration. The standard show run interface only shows the inherit command:

interface Ethernet101/1/16
    inherit port-profile MY-TEST-PROFILE

There are two ways you can see the commands as applied to each interface. First, you can display the full interface config using the command show port-profile expand-interface name PROFILE_NAME

NX9K# sh port-profile expand-interface name MY-TEST-PROFILE

port-profile MY-TEST-PROFILE
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30,40,50,100
  spanning-tree port type edge trunk
  no shutdown

Or, you can use the command show run interface INTERFACE expand-port-profile

NX9K# sh run int eth101/1/16 expand-port-profile

interface Ethernet101/1/16
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40,50,100
 spanning-tree port type edge trunk
 no shutdown

The difference here is that the show port-profile expand-interface command will show you all interfaces with that profile assigned, where the show run interface is only displaying the single interface.


Another great feature of port-profiles is that they are inheritable. This allows you to modularize your configurations and reference them by profile name within other profiles. I came across a good example of this in a presentation from Cisco about using profile inheritance on the Nexus 1000V. In their example, they were applying the same switchport mode and vlan access settings but wanted to apply varying QoS policies. So in their example they had the following profiles:

port-profile WEB
 switchport mode access
 switchport access vlan 100
 no shut

port-profile WEB-GOLD
 inherit port-profile WEB
 service-policy output GOLD

port-profile WEB-SILVER
 inherit port-profile WEB
 service-policy output SILVER

interface Eth1/1
 inherit port-profile WEB-GOLD

interface Eth1/2
 inherit port-profile WEB-SILVER

The end result is that all assigned interfaces are configured as access ports in VLAN 100, but the QoS policy differs. Only four levels of inheritance are supported, so don’t go too crazy here.

Things to remember

As you begin to work with profiles, there are some important things to remember as it relates to order of precedence in the commands that will take effect on the interface. Taken straight from the documentation:

The system applies the commands inherited by the interface or range of interfaces according to the following guidelines:

  • Commands that you enter under the interface mode take precedence over the port profile’s commands if there is a conflict. However, the port profile retains that command in the port profile.

  • The port profile’s commands take precedence over the default commands on the interface, unless the port-profile command is explicitly overridden by the default command.

  • When a range of interfaces inherits a second port profile, the commands of the initial port profile override the commands of the second port profile if there is a conflict.

  • After you inherit a port profile onto an interface or range of interfaces, you can override individual configuration values by entering the new value at the interface configuration level. If you remove the individual configuration values at the interface configuration level, the interface uses the values in the port profile again.

  • There are no default configurations associated with a port profile.
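
As a quick sketch of the first and fourth points above, using the trunk allowed VLAN list as a hypothetical conflicting command:

NX9K(config)# interface Ethernet101/1/16
NX9K(config-if)# switchport trunk allowed vlan 10,20
NX9K(config-if)# no switchport trunk allowed vlan

The interface-level list overrides the profile’s list on this one port; removing it with the no form lets the profile’s value take effect again.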

One other important detail from the documentation: checkpoints are created any time you enable, modify, or inherit a profile, so that the system can roll back to a known-good configuration in case of any errors. A profile will never be partially applied — if there are errors, the config is backed out.

So go out and make your life easier — try out port profiles today!

Cisco EZVPN with IOS Router and ASA

I had an interesting request come across my desk, where I needed to configure a site-to-site VPN for some internet connected devices, but the devices were not allowed to connect internally to our network. So basically, I needed to tunnel the internet traffic back to our headend without allowing access to the internal network. The remote location also wouldn’t have a static IP. Having used EZVPN in the past, I figured this would be another great use case. Unfortunately I spent way too many hours trying to find a good example of how to get this setup working, so I figured I’d share my config for anyone else who may be struggling with a similar setup.


EZVPN with IOS and ASA

IOS Router Config (EZVPN Client)

crypto ipsec client ezvpn ez
 connect auto
 group MyTunnelGroup key MySecretKey
 mode client
 username MyVPNUser password MyPassword
 xauth userid mode local
interface Fa0/0
 description WAN Interface
 ip address dhcp
 crypto ipsec client ezvpn ez
interface Fa0/1
 description LAN Interface
 ip address
 crypto ipsec client ezvpn ez inside

The first section defines the properties for the EZVPN connection, and there are 3 items that need special attention:

  1. The group and key you configure here will match the TunnelGroup name and IKEv1 key you configure on the ASA
  2. The username and password are also defined on the ASA. This is the actual user that is being authenticated.
  3. The xauth mode needs to be configured as local so the router doesn’t have to prompt for credentials.

Other items to note:

  1. There are three modes for EZVPN: Client, Network Extension, and Network Extension Plus. If this were a true L2L VPN, I’d use Network Extension or Network Extension Plus so that there was direct IP-to-IP connectivity between hosts on either side of the VPN. Since I don’t need that, I’m configuring Client mode, which acts similar to a PAT for all client traffic.
  2. The peer IP will be the outside address of your EZVPN server.
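
On that second point, the peer is defined under the same client profile. The address below is only a placeholder for your server’s outside IP:

crypto ipsec client ezvpn ez
 peer 203.0.113.10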

ASA Configuration (EZVPN Server)

access-list EZVPN-ACL standard deny
access-list EZVPN-ACL standard permit any4
group-policy MyGroupPolicy internal
group-policy MyGroupPolicy attributes
 dns-server value
 vpn-access-hours none
 vpn-simultaneous-logins 3
 vpn-idle-timeout 30
 vpn-session-timeout none
 vpn-filter value EZVPN-ACL
 vpn-tunnel-protocol ikev1
 group-lock none
 split-tunnel-policy tunnelall
 split-tunnel-all-dns enable
 vlan none
 nac-settings none
username MyVPNUser password MyPassword
username MyVPNUser attributes
 vpn-group-policy MyGroupPolicy
tunnel-group MyTunnelGroup type remote-access
tunnel-group MyTunnelGroup general-attributes
 default-group-policy MyGroupPolicy
tunnel-group MyTunnelGroup ipsec-attributes
 ikev1 pre-shared-key MySecretKey

The Tunnel Group defines the preshared key for the connection that was referenced in the group MyTunnelGroup key MySecretKey command on the client. The Tunnel Group config also points to a Group Policy that will control the policy for the tunnel. I created a new policy, but you could also use the default DfltGrpPolicy if it fit your needs.


The beautiful thing about EZVPN is that all of the policy aspects are controlled at the Server side. So while the current requirement is to block access to internal resources, I could easily change that on the server side without worrying about messing up the config on the client and bringing the tunnel down.
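
For example, if the requirement later changed to allow internal access, removing the filter from the group policy on the ASA would be enough, with no client-side changes needed:

group-policy MyGroupPolicy attributes
 vpn-filter none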

FHRP Filtering on Cisco ASR1001 with OTV

I’m finally getting the chance to deploy OTV and LISP in a live environment and wanted to share one of the issues I’ve run into.

As I mentioned in my post about OTV Traffic Flow Considerations, using HSRP (or VRRP/GLBP) at each site has the potential to cause traffic to “trombone” through the network in a sub-optimal path. Because of this behavior, FHRP filtering should be configured on your OTV routers to ensure that the HSRP device on each side of the overlay becomes the active gateway for its local network. The ASR1001 is supposed to have this built in.

Here’s the topology:

Production OTV Diagram

The Problem

After I set up OTV and LISP, I noticed that I had spotty connectivity to my host inside the overlay. A continuous ping revealed that I was missing a ping or two almost every 60 seconds. When I looked at the route for that host, the age was always less than 1 minute. Since these routes are redistributed into OSPF, I went back to the OTV/LISP routers to see what was happening.

On the OTV/LISP routers, I could see that the local LISP routes were also being inserted and withdrawn regularly, which meant that LISP thought the EID was moving to the other router. Since the LISP mapping system is in charge of communicating EID-to-RLOC mapping changes, I ran debug lisp control-plane map-server and observed the following output (abbreviated):

Oct  2 11:09:41.623 EDT: LISP: Processing received Map-Notify message from to
Oct  2 11:09:41.623 EDT: LISP-0: Local dynEID MOBILE-VMS IID 0 prefix, Received map notify (rlocs: 1/1).
Oct  2 11:09:41.623 EDT: LISP-0: Local dynEID MOBILE-VMS IID 0 prefix, Map-Notify contains new locator, dyn-EID moved (rlocs: 1/1).

Since I hadn’t moved the VM across the overlay, it surprised me to see that LISP thought the VM was moving. After banging my head against the wall on that issue, I started looking lower in the stack at OTV.

During normal operation, the OTV routing table on the local OTV router (router closest to the host) should look like this:

SAV-OTVRTR2#sh otv route
OTV Unicast MAC Routing Table for Overlay1

Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)
0    800  800    0000.0c07.ac4e 40    BD Eng Gi0/0/0:SI800

Note the route for 0000.0c07.ac4e, which is the MAC for HSRP group 78 (the last byte, 0x4e, is the group number in hex). This is a FHRP address, so should it even be showing up? Since it was there, I assumed that the FHRP filtering must only prevent the route from being advertised to OTV neighbors.

But during one of the blips, I noticed this:

 Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)
0    800  800    0000.0c07.ac4e 30    ISIS   RAD-OTVRTR2

So not only was the HSRP MAC showing up with FHRP Filtering enabled, but it was also still being advertised across the network. This shouldn’t be.

The Solution – for now

I opened a TAC case and consulted with Cisco about the issue. They agreed that it was “odd” that the HSRP information was leaking across the overlay and recommended I put in an ACL to block FHRP information:

mac access-list extended otv_filter_fhrp
 deny   0000.0c07.ac00 0000.0000.00ff host 0000.0000.0000
 deny   0000.0c9f.f000 0000.0000.0fff host 0000.0000.0000
 deny   0007.b400.0000 0000.00ff.ffff host 0000.0000.0000
 deny   0000.5e00.0100 0000.0000.00ff host 0000.0000.0000
 permit host 0000.0000.0000 host 0000.0000.0000

…and apply the ACL to the OTV Inside interface.
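
For reference, this is roughly what that looks like in my setup, with the ACL applied under the service instance for bridge-domain 800 on Gi0/0/0 (verify the exact placement for your own platform and code version):

interface GigabitEthernet0/0/0
 service instance 800 ethernet
  encapsulation dot1q 800
  bridge-domain 800
  mac access-group otv_filter_fhrp in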

You might notice that OTV automatically adds another ACL:

Extended IP access list otv_fhrp_filter_acl
    10 deny udp any any eq 1985 3222 (57416 matches)
    20 deny 112 any any
    30 permit ip any any (51921 matches)

This ACL blocks the UDP ports used for HSRP and GLBP, as well as IP Protocol 112, VRRP. This must be the portion that is added by default, but it doesn’t seem to be sufficient.


I asked Cisco why the extra ACL was necessary when the documentation indicates that FHRP filtering is built in and enabled by default. As soon as I hear something I’ll provide an update. As far as I know the Nexus 7K still requires you to manually configure these ACLs, but it seems that, for now, so do the ASRs.


Update: I heard back from Cisco TAC about my issue and they think my problem stems from the fact that I’m trying to use the same physical hardware for both the L2 bridging and the L3 gateway:

Due to the ASR1k architecture, it is recommended that you move FHRP off the ASR. It is unlike N7k architecture where we can keep FHRP on the same device and use a mix of MACLs, VACLs, etc to filter out the virtual MAC from going across the overlay. The only way to really prevent the virtual MAC from being learned across the overlay is to prevent the ASR from ever learning it in the first place.

In regards to the default OTV FHRP filtering, TAC confirmed that the otv_fhrp_filter_acl is added when OTV is configured. It doesn’t attempt to prevent L2 information from being learned, however — it only attempts to block actual HSRP communication across the overlay.

Redistributing Anyconnect VPN addresses into OSPF on Cisco ASA

I’m a big fan of the Cisco AnyConnect VPN client due to its easy configuration and the relative ease of deployment to end users. When you deploy an AnyConnect VPN on your ASA, one of the important tasks is deciding how to advertise the VPN-assigned addresses into the rest of your network. Fortunately, this is easy to accomplish using route redistribution.

Basic Setup

In this example, my VPN pool will be assigned from the range, and I will redistribute these routes into OSPF. Notice that the ASA automatically creates a static host route for a connected client:

ASA# sh route | i 192.168.254
S [1/0] via, Outside

So we have the building blocks for what we need, now let’s look at the configuration.

There are several different ways to accomplish this task, but I’ll demonstrate what I typically use.

Redistributing into OSPF

First, we’ll create a prefix list to match the address pool for our Anyconnect clients:

prefix-list VPN_PREFIX seq 1 permit le 32

This prefix list entry matches the subnet, as well as any routes with a mask less than or equal to 32 bits. This works great, because our routes will all be /32.

Next we’ll create a route-map that we can reference inside OSPF:

route-map VPN_POOL permit 1
    match ip address prefix-list VPN_PREFIX

And finally, we’ll enable redistribution in OSPF:

router ospf 1
    redistribute static subnets route-map VPN_POOL

If we look at the routing table on another router in our network, we should see the route:

RTR#sh ip route | i 192.168.254
O E2 [110/20] via, 00:05:03, Vlan85

Advertising the subnet instead of individual host routes

If you like to keep your routing tables uncluttered, you might be inclined to redistribute only the entire VPN prefix instead of the individual /32 routes. The important thing to remember here is that OSPF will not redistribute a route that is not already in the routing table.

We’ll simply add a static route for the VPN prefix:

route outside

Without any other modifications, we will now see routes like this in our network:

RTR#sh ip route | i 192.168.254
O E2 [110/20] via, 00:07:28, Vlan85
O E2 [110/20] via, 00:07:28, Vlan85

But we want to get rid of the /32 routes. So we have two options now:

  1. Modify the prefix-list to match only the /25 route
  2. Modify the OSPF redistribution command to ignore subnets.

Option 1: Modify the prefix-list

We’ll change the prefix list so we don’t even consider subnets with different masks:

no prefix-list VPN_PREFIX seq 1 permit le 32
prefix-list VPN_PREFIX seq 1 permit

Our redistribution command still has the subnets keyword, but since the prefix list won’t even allow smaller prefix lengths, we end up with just the one route.

Option 2: Modify the OSPF redistribution command

You can also remove the subnets keyword from the redistribution command:

router ospf 1
    redistribute static route-map VPN_POOL

This way it doesn’t matter if the prefix-list matches longer routes, OSPF just won’t redistribute them.

Final Configuration

In the end we have a configuration that looks something like this:

route outside
prefix-list VPN_PREFIX seq 1 permit
route-map VPN_POOL permit 1
 match ip address prefix-list VPN_PREFIX
router ospf 1
 redistribute static route-map VPN_POOL

The ASA will still show all of the /32 routes, plus the /25 route:

ASA# sh route | i 192.168.254
S [1/0] via, Outside
S [1/0] via, Outside

But routers inside the network will only see the /25 route:

RTR#sh ip route | i 192.168.254
O E2 [110/20] via, 01:45:03, Vlan85

I didn’t talk about modifying any of the OSPF metrics as the routes are being injected, but that would be something to consider if you do this in your environment.

IOS CLI historical interface graphs

I was researching something for a project recently and came across a feature I hadn’t seen before:  historical interface graphs.

With this feature, you can enable up to 72 hours of traffic statistics on your interfaces, and you can view this data via the CLI, similar to how ‘show proc cpu history’ works.

Check it out:


I’d never seen this before, so I was quite excited. If you’re like me, and haven’t tried this out yet, here is how you configure it:

interface Gig0/0
    history {bps | pps} [filter]

The filter can include a lot of different items, including:

  • input-drops
  • input-error
  • output-drops
  • output-errors
  • overruns
  • pause-input
  • pause-output
  • crcs

and the list goes on. You can see a full table of supported filters in the IOS Command reference. I found this worked on my 7200s, ASR1Ks, and ISRs.
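
To give a feel for what this output looks like, here is a toy Python sketch that renders a list of per-interval traffic samples as a CLI-style histogram, similar in spirit to the 'show proc cpu history' style graphs. The sample values are made up for illustration:

```python
# Toy sketch: render per-interval traffic samples as an ASCII
# histogram, loosely modeled on IOS history graph output.
# The sample data is invented for illustration.

def render_history(samples, height=10):
    """Return one string per row, tallest values at the top."""
    peak = max(samples) or 1
    rows = []
    for level in range(height, 0, -1):
        threshold = peak * level / height
        rows.append("".join("*" if s >= threshold else " " for s in samples))
    return rows

samples = [10, 40, 80, 100, 60, 30, 5]
for row in render_history(samples, height=5):
    print(row)
```

The real feature does all of this on-box, of course; the sketch just shows the bucketed-samples-to-bars idea behind the display.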

A glimpse into LISP Control-Plane traffic

In our lab we were able to configure LISP and verify connectivity between our two hosts. One thing I noticed was the loss of the first two ICMP packets. Let’s walk through how LISP functions and examine what was happening behind the scenes.

Upon receipt of the first packet, the SITE1 router (acting as an ITR) checked the LISP map cache to see if it already had an RLOC mapping for the destination EID:

SITE1#sh ip lisp map-cache
LISP IPv4 Mapping Cache for EID-table default (IID 0), 2 entries, uptime: 00:10:16, expires: never, via static send map-request
  Negative cache entry, action: send-map-request

This negative cache entry tells the router that it needs to send a Map-Request to see if there’s an RLOC mapping available. The router will send a Map-Request for the EID to the Map Resolver, and drop the initial data packet (ping #1) from our host:


The Map Resolver/Map Server checks the namespaces that have registered, and looks for the RLOC address of the ETR that is authoritative for the EID prefix:

MR_MS#sh lisp site name SITE2
Site name: SITE2
Allowed configured locators: any
Allowed EID-prefixes:
    First registered:     00:31:21
    Routing table tag:    0
    Origin:               Configuration
    Merge active:         No
    Proxy reply:          No
    TTL:                  1d00h
    State:                complete
    Registration errors:
    Authentication failures:   0
    Allowed locators mismatch: 0
    ETR, last registered 00:00:50, no proxy-reply, map-notify
                        TTL 1d00h, no merge, hash-function sha1, nonce 0x0524E21F-0x489614BB
                        state complete, no security-capability
                        xTR-ID 0xDC4A8044-0x87093251-0x602669CB-0x5B720F12
                        site-ID unspecified
    Locator        Local  State      Pri/Wgt  yes    up          10/50 

Once the Map Server determines the RLOC for the authoritative ETR, it forwards the Map-Request message.  The ETR receives the forwarded Map-Request, and responds with a Map-Reply:


Once the SITE1 router receives the reply, it updates the local cache:

SITE1#sh ip lisp map-cache
LISP IPv4 Mapping Cache for EID-table default (IID 0), 3 entries, uptime: 00:10:35, expires: never, via static send map-request
  Negative cache entry, action: send-map-request, uptime: 00:09:25, expires: 00:05:34, via map-reply, forward-native
  Negative cache entry, action: forward-native, uptime: 00:09:22, expires: 23:50:38, via map-reply, complete
  Locator        Uptime    State      Pri/Wgt  00:09:22  up          10/50

Now that the router has a complete LISP cache, it can encapsulate packets in LISP headers and send them on their way.


LISP Data Packet Payload

In this setup, it’s interesting to note that we lost the first two ICMP packets to the control-plane process. The first packet was dropped by the SITE1 router as it went through the Map Request/Map Reply process to build the local cache. The second packet actually made it through to the other host, but the response was dropped by the SITE2 router as it also had to build the local cache. You can see some of that below:

Ping response sequence

Once the caches have been built, subsequent attempts are 100% successful:

[root@SITE1 ~]# ping -c 5
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=1.62 ms
64 bytes from icmp_seq=2 ttl=255 time=1.41 ms
64 bytes from icmp_seq=3 ttl=255 time=1.36 ms
64 bytes from icmp_seq=4 ttl=255 time=1.48 ms
64 bytes from icmp_seq=5 ttl=255 time=1.33 ms

--- ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 1.338/1.442/1.626/0.114 ms


I think there are a few important items to remember about the LISP forwarding process. These probably seem obvious, but I still want to point them out:

  1. The mapping system is really more of a ‘director’, in that it doesn’t actually know the answers to queries, but it knows who to ask to find out.
  2. LISP control-plane always uses the same source and destination ports: UDP/4342
  3. LISP data-plane packets are always destined for UDP/4341, but the source port will change.
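
The first-packet drop described above can be modeled with a toy map-cache sketch. The class, EID, and RLOC values here are illustrative only, not taken from the lab:

```python
# Toy sketch of ITR map-cache behavior: the first packet toward an
# unknown EID triggers a Map-Request and gets dropped; once a
# Map-Reply installs an RLOC, later packets are encapsulated.
# All names and addresses are illustrative.

class ToyITR:
    def __init__(self):
        self.map_cache = {}          # EID -> RLOC
        self.pending_requests = []   # Map-Requests sent to the Map Resolver

    def forward(self, dst_eid):
        rloc = self.map_cache.get(dst_eid)
        if rloc is None:
            # No mapping yet: ask the mapping system, drop the packet.
            self.pending_requests.append(dst_eid)
            return "dropped (map-request sent)"
        return f"encapsulated to RLOC {rloc}"

    def receive_map_reply(self, eid, rloc):
        self.map_cache[eid] = rloc

itr = ToyITR()
print(itr.forward("10.2.0.1"))                 # ping #1: dropped
itr.receive_map_reply("10.2.0.1", "192.0.2.2")  # Map-Reply arrives
print(itr.forward("10.2.0.1"))                 # later pings: encapsulated
```

Run the same logic on both ITRs and you get exactly the two lost pings from the lab: one per site while each side builds its cache.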

Cisco has a website dedicated to LISP and you can find a great deal of information there, including the devices and software versions that support LISP.  I’d highly recommend checking it out:  Cisco LISP

OTV Traffic Flow Considerations

The beauty of OTV is that you are no longer limited to segregating your L2 VLANs based on site, or location within your network.  When used in conjunction with virtual machines, this means you can migrate machines between locations without having to modify IP addressing, giving you the ability to move entire server farms with only a few clicks.

Beauty has an ugly side, however. One of the not-insignificant challenges with OTV is knowing how to best reach endpoints within the overlay network. Improper planning in this area can result in inefficient traffic flows through your network, and could possibly block end-to-end traffic altogether. Consider the following network:


Let’s say your host is in DC1, but your gateway is in DC2. How will traffic move through the network?  Often called ‘traffic tromboning,’ this is where traffic enters through one side of your network, and uses the overlay to trombone across to the opposite side before returning back through the original datacenter.


It’s ugly, but you can fix that by using an FHRP to have a gateway in each site. As we know, the ASRs have FHRP filtering configured and enabled by default, and there is documentation on how to configure filtering for the N7K. After adding gateways to both sites, you end up with this:


Well, that might be OK if you turn a blind eye to the traffic flowing across your core multiple times, but it’s certainly not the most efficient.  And to add insult to injury, what if your two sites each have their own path out to the internet?  How will your edge firewall respond when it receives traffic for a connection it doesn’t know about?


These are just some of the issues that need to be considered when evaluating an OTV solution.  Multiple entry/exit points, firewall placement, flow lifetime, load-balancers, etc. combine to make the overall design complicated very quickly.   Add  in endpoint mobility (the whole point, right?), and you have to ensure that new flows will know how to reach the correct endpoint, and old flows either persist or can be reestablished quickly.  In my next post, I’ll discuss one of the solutions I’m exploring to solve these issues.

Flexible Netflow on the 4500X

I’m a big fan of Solarwinds and their suite of network management products. If you’ve never seen or tried their products, head on over to their demo site and check it out. I recently added their Netflow product Network Traffic Analyzer and wanted to add netflow collection to my new 4500X switches.

The 4500X only supports Flexible NetFlow (aka Version 9), and doesn’t include any prebuilt flow record templates, so there are basically four steps to the configuration:

  • Create a flow record
  • Create a flow exporter
  • Create a flow monitor
  • Apply the monitor to an interface

Let’s look at each and go through a basic configuration.

Flow Record

The flow record defines the fields that will be used to group traffic into unique flows. Key fields are those which are used to distinguish traffic flows from each other. If a key field differs from every existing flow, a new flow is added. Key fields are configured with the match keyword.

Non-key fields aren’t used to distinguish flows from each other, but are collected as part of the data set you want to glean from each flow. Non-key fields are configured with the collect keyword.

For my flow record I used the following configuration:

flow record IPV4-FLOW-RECORD
    match ipv4 tos
    match ipv4 protocol
    match ipv4 source address
    match ipv4 destination address
    match transport source-port
    match transport destination-port
    collect interface input
    collect interface output
    collect counter bytes long
    collect counter packets long

So in my flow record, each flow will be distinguished by ToS, Protocol, Src/Dst address, src/dst port. I’m also interested in collecting the input and output interfaces, as well as the number of bytes and packets in each flow.
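
The key/non-key split can be sketched as a simple aggregation: packets sharing all key fields land in the same flow, and the non-key counters are accumulated per flow. The sample packets below are invented for illustration:

```python
# Sketch of flow-record aggregation: 'match' fields form the flow key,
# 'collect' counters accumulate per flow. Sample packets are made up.
from collections import defaultdict

KEY_FIELDS = ("tos", "protocol", "src", "dst", "sport", "dport")

def aggregate(packets):
    flows = defaultdict(lambda: {"bytes": 0, "packets": 0})
    for pkt in packets:
        key = tuple(pkt[f] for f in KEY_FIELDS)   # the 'match' fields
        flows[key]["bytes"] += pkt["bytes"]       # the 'collect' counters
        flows[key]["packets"] += 1
    return flows

pkts = [
    {"tos": 0, "protocol": 6, "src": "10.0.0.1", "dst": "10.0.0.2",
     "sport": 40000, "dport": 443, "bytes": 1500},
    {"tos": 0, "protocol": 6, "src": "10.0.0.1", "dst": "10.0.0.2",
     "sport": 40000, "dport": 443, "bytes": 500},
    {"tos": 0, "protocol": 17, "src": "10.0.0.1", "dst": "10.0.0.3",
     "sport": 5353, "dport": 53, "bytes": 80},
]
flows = aggregate(pkts)
print(len(flows))  # 2 flows: one TCP, one UDP
```

The two TCP packets collapse into one flow with combined byte/packet counts, while the UDP packet starts a second flow, which is exactly what the switch does in hardware.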

Flow Exporter

A flow exporter is basically a place to send the flow data you collect. By default, Cisco will send data to UDP/9995, but Orion expects it to arrive on UDP/2055. I also specified the source interface for the data so it will match the address that Orion uses to manage this node.

flow exporter Orion
    source Loopback0
    transport udp 2055

Flow Monitor

The flow monitor is where you link records and exporters together.

flow monitor IPV4-FLOW
    description Used for Monitoring IPv4 Traffic
    record IPV4-FLOW-RECORD
    exporter Orion

Once you’ve defined all the elements, it’s time to apply to an interface.

Applying the configuration

This particular 4500X install doesn’t have any routed interfaces, so my intention was to apply the flow monitor to an SVI. This resulted in the following error:

4500X-1(config-if)#ip flow monitor IPV4-FLOW input
% Flow Monitor: Flow Monitor 'IPV4-FLOW' : Configuring Flow Monitor on SVI interfaces is not allowed.
Instead configure Flow Monitor in vlan configuration mode via the command `vlan config <vlan number>'

Ok, we’ll try again:

4500X-1(config)#vlan config 2
4500X-1(config-vlan-config)#ip flow monitor IPV4-FLOW input

No problems there!

When I attempted to configure the flow monitor in the output direction, I received this error:

4500X-1(config-if)#ip flow monitor IPV4-FLOW output
% Flow Monitor: 'IPV4-FLOW' could not be added to interface due to invalid sub-traffic type: 0

I reread the Flexible NetFlow section in the configuration guide, and sure enough, the very first limitation listed for 4500s in a VSS configuration is:

  1. The Catalyst 4500 series switch supports ingress flow statistics collection for switched and routed packets; it does not support Flexible Netflow on egress traffic.

Looks like I won’t be able to configure collection for output statistics, at least not at this junction in the network.


Overall I thought the configuration was fairly straightforward. I ended up using the same configuration on the other routers in my network, and this was the only instance where I was unable to collect output traffic statistics.

Cisco OTV – Overlay Transport Virtualization

First, let’s talk about what supports OTV — not much:

  • Nexus 7K
  • ASR 1K
  • CSR 1000V (For those of you not familiar with the Cloud Services router, I’d recommend reading this.)

What is OTV?

OTV is an encapsulation protocol that wraps L2 frames in IP packets in order to transport them between L2 domains. Typically this would be between remote datacenters, but it could also be within a datacenter if you needed an easy (expensive) way to extend a VLAN.

You will also see OTV referred to as ‘MAC Routing’, since the OTV devices are essentially performing routing decisions based on the destination MAC address in the L2 frame.

You might be thinking “Hey, I’ve already got this with EoMPLS and/or VPLS.” And you’d be right — you have the essence of what OTV accomplishes. What OTV adds, however, is simplicity and fault isolation.

When you configure OTV, you are defining 3 elements:

  • Join interface
    This is the interface that faces the IP core that will transport OTV encapsulated packets between sites.
  • Overlay interface
    This is the virtual interface that will handle the encapsulation and decapsulation of OTV packets sent between OTV edge devices.
  • Inside interface
    This is the interface that receives the traffic that will be sent across OTV.

What do I need before I can configure OTV?

Before you can setup OTV in your environment there are a few important details to know:

  • OTV adds 42 bytes of overhead into the packet header. This has implications if your MTU size is 1500 bytes (the default in most cases). You’ll need to either enable Jumbo frames across your core, or reduce the MTU size on your servers inside the OTV domain. UPDATE: You can enable OTV fragmentation by using the global command otv fragmentation join-interface.  I don’t know if this has any performance implications, but at least it’s an option for you if changing the MTU throughout your network is difficult.
  • With the latest code releases, I believe all platforms support either Unicast or Multicast for the OTV control-plane. If you have a multicast enabled core, use multicast — it’s really not too bad.
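
The MTU arithmetic behind the first bullet is worth spelling out: either the core MTU grows by the OTV overhead, or the host-side payload shrinks by it.

```python
# Quick arithmetic on the 42-byte OTV encapsulation overhead.
OTV_OVERHEAD = 42
DEFAULT_MTU = 1500

# Core MTU needed to carry an untouched 1500-byte host frame:
print(DEFAULT_MTU + OTV_OVERHEAD)   # 1542

# Largest host-side MTU if the core stays at 1500:
print(DEFAULT_MTU - OTV_OVERHEAD)   # 1458
```

In practice you would round the core MTU up well past 1542 (jumbo frames), as the join-interface config later in this post does with mtu 8192.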

Topology and Configuration

For my topology I’m going to use two ASR 1Ks, a 4900M with two VRFs, and two 3550 switches. I know I could’ve left out the VRFs, but I wanted to make my topology as close as possible to real life. So we end up with this:

Sample OTV topology

So let’s move on to the OTV configuration.

OTV Site information

Part of any OTV config will be defining the site identifiers and the Site Bridge-Domain. The site identifier is how an OTV device determines whether or not it is at the same location as another OTV device.


otv site-identifier 0001.0001.0001


otv site-identifier 0002.0002.0002

The site bridge-domain is the Vlan that the OTV edge devices at the same site will use for AED election. Since this VLAN will not be part of the overlay, we can use the same command on both routers.

otv site bridge-domain 100

The Join interface

The join interface will be the source for all OTV packets sent to remote OTV routers, and it will be the destination for OTV packets that need to come to the site. For multicast control-plane implementations you’ll need to enable Passive PIM and IGMPv3.


interface Gig0/0/1
mtu 8192
ip address
ip pim passive
ip igmp version 3

Also note that the MTU has been adjusted to accommodate the increased size of the OTV packet. This will be the same on the second OTV-RTR except for the IP address.

Overlay Interface

In the overlay interface configuration we have to specify the multicast group used for control messaging, as well as the range of multicast groups that will be used for passing multicast data within the VLAN. We will also specify which interface will be used as the join interface. This will be the same on both routers:

interface Overlay1
otv control-group
otv data-group
otv join-interface GigabitEthernet0/0/1
no shutdown

Once you turn up the Overlay interface on both sides, you should see your OTV adjacency form:

OTV-RTR1#show otv adjacency
Overlay 1 Adjacency Database
Hostname                       System-ID      Dest Addr       Up Time   State
OTV-RTR2                       c08c.6008.0f00       00:00:36  UP

At this point, since there isn’t a VLAN bridged to the Overlay, there will be no OTV routing information:

OTV-RTR1#show otv route

Codes: BD - Bridge-Domain, AD - Admin-Distance,
       SI - Service Instance, * - Backup Route

OTV Unicast MAC Routing Table for Overlay1

 Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)

0 unicast routes displayed in Overlay1

0 Total Unicast Routes Displayed

Adding Vlans to the Overlay

The last step will be to add the appropriate VLANs to the overlay. This config assumes that the router will receive the traffic from the switch with an 802.1Q tag:

interface GigabitEthernet0/0/0
service instance 250 ethernet
encapsulation dot1q 250
bridge-domain 250
interface Overlay1
service instance 250 ethernet
encapsulation dot1q 250
bridge-domain 250


I created a Vlan interface on each switch to use as my ‘hosts’ for the ping tests.

Sw-1 VL250 = 0009.b716.7880

Sw-2 VL250 = 0009.b709.4b80

Pinging between devices is successful. Let’s look at the switches to see how it looks:


Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
 250    0009.b709.4b80    DYNAMIC     Gi0/1


OTV-RTR1#sh otv route

OTV Unicast MAC Routing Table for Overlay1

Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)
 0    250  250    0009.b709.4b80 50    ISIS   OTV-RTR2
 0    250  250    0009.b716.7880 40    BD Eng Gi0/0/0:SI250

So we can see that SW-1 knows to reach Sw-2 out interface Gi0/1, which connects to OTV-RTR1. OTV-RTR1 shows that it’s learned the MAC for SW-2 via OTV(ISIS) from OTV-RTR2. So anytime it receives frames for this MAC, it knows to forward them across the overlay.


OTV-RTR2#sh otv route

OTV Unicast MAC Routing Table for Overlay1

Inst VLAN BD     MAC Address    AD    Owner  Next Hops(s)
0    250  250    0009.b709.4b80 40    BD Eng Gi0/0/0:SI250
0    250  250    0009.b716.7880 50    ISIS   OTV-RTR1

OTV-RTR2 shows that SW-2 is out the local service-instance. Any packets that come across the overlay will be decapsulated and forwarded out the local interface.
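
The ‘MAC routing’ decision in those two tables boils down to a lookup that answers one question: is the destination MAC local or across the overlay? A toy model, using the MACs from the output above (the dict structure itself is illustrative):

```python
# Toy model of an OTV edge device's MAC routing table: each entry maps
# a destination MAC to either a local service instance or a remote
# edge device across the overlay. MACs are from the output above;
# the data structure is illustrative.

otv_rtr2_routes = {
    "0009.b709.4b80": ("local", "Gi0/0/0:SI250"),  # learned by the BD engine
    "0009.b716.7880": ("overlay", "OTV-RTR1"),     # learned via IS-IS
}

def forward(dst_mac, routes):
    owner, next_hop = routes[dst_mac]
    if owner == "local":
        return f"decapsulate/forward out {next_hop}"
    return f"encapsulate and send to {next_hop}"

print(forward("0009.b709.4b80", otv_rtr2_routes))
print(forward("0009.b716.7880", otv_rtr2_routes))
```

Swap the two next hops and you have OTV-RTR1’s table; the symmetry is what makes the frame path work in both directions.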

Wrap Up

Getting a basic OTV config up and running is not that difficult. Next time I’ll talk about using unicast instead of multicast, and also about AED.