Sunday, February 20, 2011

OTV Deep Dive - Part Two

Now that we've covered OTV theory and nomenclature, let's dig into the fun stuff and talk about the CLI and what OTV looks like when it's set up. We'll be using the topology below, comprising four Nexus 7000s and eight VDCs.

We'll focus first on the minimum configuration required to get basic OTV adjacency up and working, and then add in multi-homing for redundancy. First, make sure the L3 network that OTV will be traversing is multicast enabled. With current shipping code, neighbor discovery is done via multicast, which makes adding and removing sites from the OTV network easy. With this requirement met, we can get rolling.
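As a point of reference, multicast-enabling the transport on NX-OS looks something like the sketch below. The addresses and interface are examples only, and your RP design will differ; one detail worth calling out is that the OTV join interface needs IGMPv3.

```
feature pim

! Example RP covering the control-group range (ASM); adjust to your design
ip pim rp-address 10.255.255.1 group-list 239.0.0.0/8

interface Ethernet1/7.1
  ip pim sparse-mode
  ! IGMPv3 is required on the OTV join interface
  ip igmp version 3
```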

Below is a simple initial config, which we'll dissect piece by piece.

First, we enable the feature:
feature otv

Then we configure the Overlay interface:
interface Overlay1

Next we configure the join interface. This is the interface that will be used for the IGMP join and will be the source IP address of all packets after encapsulation.
otv join-interface Ethernet1/7.1

Now we'll configure the control group. As its name implies, the control group is the multicast group used by all OTV speakers in an Overlay network. It should be a unique multicast group in the multicast network.
otv control-group

Then we configure the data group, which is used to encapsulate any L2 multicast traffic being extended across the Overlay. Any L3 multicast will be routed off of the VLAN through whatever regular multicast mechanisms exist on the network.
otv data-group

The next-to-last piece of bare-minimum config is the list of VLANs to be extended.
otv extend-vlan 31-33,100,1010,1088-1089

Finally, a no shutdown to enable the interface.
no shutdown
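Putting it all together, the bare-minimum Overlay config looks something like this. The multicast addresses shown are placeholders I've picked for illustration; use a control group and data-group range appropriate to your transport.

```
feature otv

interface Overlay1
  otv join-interface Ethernet1/7.1
  otv control-group 239.1.1.1       ! placeholder ASM group for neighbor discovery
  otv data-group 232.1.1.0/28       ! placeholder SSM range for extended L2 multicast
  otv extend-vlan 31-33,100,1010,1088-1089
  no shutdown
```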

We can now look at the Overlay interface but honestly, we won't see much. It's just force of habit after a no shut on an interface. :)

show int o1
Overlay1 is up
BW 1000000 Kbit
Last clearing of "show interface" counters never
0 unicast packets 77420 multicast packets
77420 input packets 574 bits/sec 0 packets/sec
0 unicast packets 0 multicast packets
0 output packets 0 bits/sec 0 packets/sec

If we configure the other edge devices in our network and multicast is working, we'll see adjacencies form as shown below.

champs1-OTV# show otv adj

Overlay Adjacency database

Overlay-Interface Overlay1 :
Hostname System-ID Dest Addr Up Time State
champs2-OTV 001b.54c2.41c4 2d05h UP
fresca-OTV 0026.9822.ea44 2d05h UP
pepsi-OTV f866.f206.fd44 2d05h UP


With this in place, we now have a basic config and will be able to extend VLANs between the four devices.

The last thing we'll cover in this post is how multi-homing can be enabled. First, to level set: by multi-homing in this context, I'm referring to the ability to have redundancy in each site without creating a crippling loop.

OTV accomplishes this with the concept of a site VLAN. The site VLAN is a VLAN that's dedicated to OTV and NOT extended across the Overlay, but is trunked between the two OTV edge devices. This VLAN doesn't need any IP addresses or SVIs created; it just needs to exist and be added to the OTV config as shown below.

otv site-vlan 99
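For that command to do its job, the site VLAN has to actually exist, be up, and be carried on the trunk between the two edge devices. A minimal sketch, with VLAN 99 and the interface name as examples only:

```
vlan 99

interface Ethernet1/20
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 99   ! plus the VLANs being extended
```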

With the simple addition of this command, the OTV edge devices will discover each other locally and then use an algorithm to determine the role each edge device will assume on a per-VLAN basis. This role is called the Authoritative Edge Device (AED). The AED is responsible for forwarding all traffic for a given VLAN, including broadcast and multicast traffic. Today the algorithm splits on the VLAN ID, with one edge device supporting the odd-numbered VLANs and the other supporting the even-numbered VLANs. This can be seen by reviewing the output below.

champs1-OTV# show otv vlan

OTV Extended VLANs and Edge Device State Information (* - AED)

VLAN Auth. Edge Device Vlan State Overlay
---- ----------------------------------- ---------- -------
31* champs1-OTV active Overlay1
32 champs2-OTV inactive(Non AED) Overlay1
33* champs1-OTV active Overlay1

1000 champs2-OTV inactive(Non AED) Overlay1
1010 champs2-OTV inactive(Non AED) Overlay1
1088 champs2-OTV inactive(Non AED) Overlay1
1089* champs1-OTV active Overlay1

If we look at the output above, we can see that this edge device is the AED for VLANs 31, 33 and 1089 and is the non-AED for 32, 1000, 1010 and 1088. In the event of a failure of champs2, champs1 will take over and become the AED for all VLANs.

We'll explore FHRP localization and what happens across the OTV control group in the next post. As always, your thoughts, comments and feedback are welcome.


  1. Great posts. The section on the site-vlan command reminded me of a little "gotcha" that I ran into that I thought I'd pass along (at least in the initial 5.0(3) code).

    I was deploying OTV between two sites, each with only a single 7K, so I didn't use the site-vlan command. What I didn't realize is if you don't specify it, it will assume VLAN 1 as the site-vlan. I wasn't planning on extending VLAN 1 across the OTV link, so I didn't trunk it between the default VDC and the OTV VDC. Since it wasn't on the trunk and there weren't any active ports in VLAN 1, the site-vlan was considered down and the overlay wouldn't come up and pass traffic (just like a layer 3 SVI will show "down" until the VLAN has an active port in it).

    So, even when only using a single 7K at a site, the site-vlan is still important. You either need to specify the site-vlan as an active VLAN in the OTV VDC, or be sure that VLAN 1 is up.

  2. Jamie says:

    Actually that's a valid point. You should always start the OTV configuration with the SITE VLAN first. Then you proceed with the join interface and the rest of the OTV configuration.


  3. Hi Ron,

    Great triplet of articles on OTV, I have one comment.

    I was under the impression (probably incorrect) that the VDC housing the OTV process would also be the one that would house the physical connections to the L3 network.

    What I think I understand from your schematics is that the OTV VDCs sit at the side (not inline) and that the L3 WAN connections would terminate into the default VDC. I am also assuming that hosts would connect into the default VDC, which is also where VLAN SVIs would reside. The VLANs to be extended across the OTV process would be trunked into the OTV VDC, at which point they would be encapsulated and then sent back into the default VDC?

    Or am I misunderstanding your schematic completely?


  4. Nathan,
    First, thanks for reading the blog! Having the OTV VDC inline certainly is an option, but is not required. Most deployments are like what you describe and what is in the diagram.


  5. Can anyone help me understand why we need to configure the multicast data group as a subnet. Can it be a /32 address like the one for the control group?

  6. Raj,
    The data group is a range of IPs that are mapped 1:1 for L2 multicast that needs to be carried across OTV. We recommend a range of IPs because we've found that many customers don't think they have L2 multicast until they stretch the VLANs. You can use a /32 if you are certain you don't need it, but I'd wager you will.

    Thanks for reading!

  7. Wonderful article Ron. Even after 4 years... it is as fresh as ever.

    Allow me to be a little cynical though... There is a typo in the last para. Vlan 1098 should be vlan 1089 as per your diagram. Also 1098 being even number doesn't fall in the same category as 31 and 33 :))
