How to map Docker container's eth0 interface to its host VethXXX interface

In order to troubleshoot a network issue, we sometimes need to figure out which veth endpoint on the host connects to a given container. There are multiple ways of going about this; I will show two simple ways and point to a third.

# Technique-1: Quick & Simple

Get the interface details inside the container:

/ # ip addr show eth0
9: eth0@if10: mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever


The output tells us that the veth pair has two endpoints. The container's endpoint is eth0@if10: its own interface index is 9, and the @if10 suffix says its peer is interface index 10 on the host. The host-side endpoint will therefore be a vethXXX interface with index 10, named vethXXX@if9 to point back at the container's index 9.

Let's check the host to confirm this.

root@botserver1:~# ip a | grep  veth
10: vetha708f89@if9: mtu 1500 qdisc noqueue master docker0 state UP group default

Cool. The host has vetha708f89@if9 with interface index 10: its peer is interface index 9, which is the container's eth0. The two ends point at each other, confirming the mapping.
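
As a quick aside, if ethtool happens to be installed on the host (it is not required for the steps above, so treat this as an optional check), the veth driver can report its peer's interface index directly; for the example above it points back at the container's eth0 (index 9):

root@botserver1:~# ethtool -S vetha708f89 | grep peer_ifindex
     peer_ifindex: 9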

# Technique-2:
This technique uses the mechanism that Technique-1 relies on under the hood: the iflink of the container's eth0 is the same as the ifindex of the host's vethXXX. By extracting these two values and matching them, we can establish the link between the container's and the host's interfaces.

On the container, get the iflink value of eth0 (the interface we are interested in):
/ # cat /sys/class/net/eth0/iflink
10

On the host, find the vethXXX whose ifindex matches this value:
root@botserver1:~# grep -l 10 /sys/class/net/veth*/ifindex
/sys/class/net/vetha708f89/ifindex

Great. We have identified the veth interface that is connected to the container.
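
For a one-off lookup, the two steps can be combined into a single command run from the host (the container name web1 below is just a placeholder; without -t there is no trailing carriage return to worry about):

root@botserver1:~# grep -l "$(docker exec web1 cat /sys/class/net/eth0/iflink)" /sys/class/net/veth*/ifindex

For the example above, this prints the same /sys/class/net/vetha708f89/ifindex path as the manual steps.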

The same lookup in script form, covering all running containers:

#!/bin/bash
# For every running container, print "<container id>:<host veth name>".
# Run as root on the Docker host.
for container in $(docker ps -q); do
    # iflink of eth0 inside the container (no -t, so no trailing \r to strip)
    iflink=$(docker exec "$container" cat /sys/class/net/eth0/iflink)
    # Host interface whose ifindex matches that iflink
    veth=$(grep -l "^${iflink}$" /sys/class/net/veth*/ifindex)
    # Reduce /sys/class/net/<veth>/ifindex to just the interface name
    veth=$(basename "$(dirname "$veth")")
    echo "$container:$veth"
done
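
If you save the script as, say, container-veth.sh (the filename is only an example) and run it as root on the Docker host, it prints one mapping per line, along these lines for the container used above:

root@botserver1:~# bash container-veth.sh
<container id>:vetha708f89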


# Technique-3
In case the container does not have cat or the other necessary utilities, you can try out this script: https://github.com/micahculpepper/dockerveth/blob/master/dockerveth.sh
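
If nsenter and iproute2 are available on the Docker host, here is a minimal sketch of the same idea: everything runs host-side binaries inside the container's network namespace, so the container itself needs no utilities at all (pass the container name or ID as the first argument; run as root):

#!/bin/bash
# Minimal sketch: map a container's eth0 to its host veth without exec'ing
# anything inside the container. Run as root on the Docker host.
container=$1

# PID of the container's main process
pid=$(docker inspect -f '{{.State.Pid}}' "$container")

# Run the host's ip binary inside the container's network namespace (netlink
# is namespace-aware) and extract the peer index from the "eth0@ifN" name
peer=$(nsenter -t "$pid" -n ip -o link show eth0 | sed -n 's/.*eth0@if\([0-9]*\).*/\1/p')

# Find the host interface whose ifindex matches that peer index
ip -o link | awk -F': ' -v idx="$peer" '$1 == idx {print $2}' | cut -d@ -f1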

References:
* https://superuser.com/questions/1183454/finding-out-the-veth-interface-of-a-docker-container
