Running F-Stack DPDK executable – Unsupported Rx multi queue mode 1

I have a C++ program which does lots of stuff, but most importantly it is set up to use F-Stack, which is built on DPDK:

int main(int argc, char * argv[])
{
    ff_init(argc, argv);
    ...
}

And I run the program like this:

sudo ./main --conf /etc/f-stack.conf --proc-type=primary

This is the error message I am receiving:

virtio_dev_configure(): Unsupported Rx multi queue mode 1
Port0 dev_configure = -22
EAL: Error - exiting with code: 1
  Cause: init_port_start failed

I have not had this problem before when running this executable on a CentOS 8 AWS instance. I am now running it on a CentOS 8 Alibaba Cloud instance, so there is possibly some difference in the Alibaba environment.

The only other thing I can think of is that there might be a configuration problem. However, I copied /etc/f-stack.conf from my AWS instance to the Alibaba instance and only updated some IP addresses, nothing else, so nothing significant has changed.

Any idea what’s going on here and how to fix it?


Edit: here is my /etc/f-stack.conf file (without IP addresses included):

[dpdk]
# Hexadecimal bitmask of cores to run on.
lcore_mask=1

# Number of memory channels.
channel=4

# Specify base virtual address to map.
#base_virtaddr=0x7f0000000000

# Promiscuous mode of NIC, default: enabled.
promiscuous=1
numa_on=1

# TX checksum offload skip, default: disabled.
# We need this switch enabled in the following cases:
# -> The application wants to enforce a wrong checksum for testing purposes.
# -> Some cards advertise the offload capability but do not actually calculate the checksum.
tx_csum_offoad_skip=0

# TCP segment offload, default: disabled.
tso=0

# HW vlan strip, default: enabled.
vlan_strip=1

# sleep when no pkts are incoming
# unit: microseconds
idle_sleep=0

# packet send delay time (0-100) when sending fewer than 32 pkts.
# default 100 us.
# if set to 0, pkts are sent immediately.
# if set > 100, the delay is capped at 100 us.
# unit: microseconds
pkt_tx_delay=100

# use symmetric Receive-side Scaling(RSS) key, default: disabled.
symmetric_rss=0

# PCI device enable list.
# And driver options
#pci_whitelist=02:00.0

# enabled port list
#
# EBNF grammar:
#
#    exp      ::= num_list {"," num_list}
#    num_list ::= <num> | <range>
#    range    ::= <num>"-"<num>
#    num      ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
#
# examples
#    0-3       ports 0, 1, 2, 3 are enabled
#    1-3,4,7   ports 1, 2, 3, 4, 7 are enabled
#
# If using bonding, configure the bonding port id in port_list
# and do not configure the slave port ids in port_list.
# For example, if ports 0 and 1 are trunked into a bonding port 2,
# set `port_list=2` and configure a `[port2]` section.

port_list=0

# Number of vdev.
nb_vdev=0

# Number of bond.
nb_bond=0

# Each core writes into its own pcap file, which is opened once and closed once it is full.
# Supports dumping the first snaplen bytes of each packet.
# If a pcap file is larger than savelen bytes, it will be closed and the next file dumped into.
[pcap]
enable=0
snaplen=96
savelen=16777216
savepath=.

# Port config section
# Correspond to dpdk.port_list's index: port0, port1...
[port0]
addr=<ADDR>
netmask=<NETMASK>
broadcast=<BROADCAST>
gateway=<GATEWAY>

# IPv6 net addr, Optional parameters.
#addr6=ff::02
#prefix_len=64
#gateway6=ff::01

# Multi virtual IPv4/IPv6 net addr, Optional parameters.
#       `vip_ifname`: default `f-stack-x`
#       `vip_addr`: Separated by semicolons, MAX number 64;
#                   Only supports netmask 255.255.255.255 and broadcast x.x.x.255 now, hard-coded in `ff_veth_setvaddr`.
#       `vip_addr6`: Separated by semicolons, MAX number 64.
#       `vip_prefix_len`: All addr6 use the same prefix now, default 64.
#vip_ifname=lo0
#vip_addr=192.168.1.3;192.168.1.4;192.168.1.5;192.168.1.6
#vip_addr6=ff::03;ff::04;ff::05;ff::06;ff::07
#vip_prefix_len=64

# lcore list used to handle this port
# the format is same as port_list
#lcore_list=0

# bonding slave port list used to handle this port
# needs to be configured when this port is a bonding port
# the format is same as port_list
#slave_port_list=0,1

# Vdev config section
# Correspond to dpdk.nb_vdev's index: vdev0, vdev1...
#    iface : Usually does not need to be set.
#    path : The vuser device path in the container. Required.
#    queues : The max queues of vuser. Optional, default 1, greater than or equal to the number of processes.
#    queue_size : Queue size. Optional, default 256.
#    mac : The MAC address of vuser. Optional, default random; if vhost uses a physical NIC, it should be set to the physical NIC's MAC.
#    cq : Optional, if queues = 1, default 0; if queues > 1, default 1.
#[vdev0]
##iface=/usr/local/var/run/openvswitch/vhost-user0
#path=/var/run/openvswitch/vhost-user0
#queues=1
#queue_size=256
#mac=00:00:00:00:00:01
#cq=0

# bond config section
# See http://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
#[bond0]
#mode=4
#slave=0000:0a:00.0,slave=0000:0a:00.1
#primary=0000:0a:00.0
#mac=f0:98:38:xx:xx:xx
## opt argument
#socket_id=0
#xmit_policy=l23
#lsc_poll_period_ms=100
#up_delay=10
#down_delay=50

# Kni config: if enabled and method=reject,
# all packets that do not belong to the following tcp_port and udp_port
# will be passed to the kernel; if method=accept, all packets that belong to
# the following tcp_port and udp_port will be passed to the kernel.
#[kni]
#enable=1
#method=reject
# The format is same as port_list
#tcp_port=80,443
#udp_port=53

# FreeBSD network performance tuning configurations.
# Most native FreeBSD configurations are supported.
[freebsd.boot]
hz=100

# Block out a range of descriptors to avoid overlap
# with the kernel's descriptor space.
# You can increase this value according to your app.
fd_reserve=1024

kern.ipc.maxsockets=262144

net.inet.tcp.syncache.hashsize=4096
net.inet.tcp.syncache.bucketlimit=100

net.inet.tcp.tcbhashsize=65536

kern.ncallout=262144

kern.features.inet6=1
net.inet6.ip6.auto_linklocal=1
net.inet6.ip6.accept_rtadv=2
net.inet6.icmp6.rediraccept=1
net.inet6.ip6.forwarding=0

[freebsd.sysctl]
kern.ipc.somaxconn=32768
kern.ipc.maxsockbuf=16777216

net.link.ether.inet.maxhold=5

net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.sendspace=16384
net.inet.tcp.recvspace=8192
#net.inet.tcp.nolocaltimewait=1
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.sack.enable=1
net.inet.tcp.blackhole=1
net.inet.tcp.msl=2000
net.inet.tcp.delayed_ack=0

net.inet.udp.blackhole=1
net.inet.ip.redirect=0
net.inet.ip.forwarding=0

Edit 2: I added pci_whitelist=[PCIe BDF of NIC] to the config and ran the following command:

(screenshot of the command and its output)

Answer

The error comes from a check in the virtio PMD, in the function virtio_dev_configure in [dpdk root folder]/drivers/net/virtio/virtio_ethdev.c. It is triggered because F-Stack enables RSS on the port (for better flow distribution across its port-queue pairs), and the virtio PMD rejects that Rx multi-queue mode.
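
For reference, the failing check inside virtio_dev_configure looks roughly like this (a paraphrased sketch, not the exact DPDK source; names and details vary between DPDK versions):

/* Paraphrased from drivers/net/virtio/virtio_ethdev.c, virtio_dev_configure().
 * The virtio PMD only accepts ETH_MQ_RX_NONE here; F-Stack configures the
 * port with ETH_MQ_RX_RSS (value 1), so the check fails and dev_configure
 * returns -EINVAL (-22), which matches "Port0 dev_configure = -22". */
const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
    PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
    return -EINVAL;
}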

There are two possible ways to fix the problem:

  1. find a configuration parameter in f-stack.conf that disables RSS, or
  2. change the F-Stack port configuration logic so it does not use RSS (by editing the code).

For option 2, edit lib/ff_dpdk_if.c (around line 627): change port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS; to port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE; and rebuild F-Stack, as sketched below.
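
A minimal sketch of the edit, assuming the line lives in init_port_start() in lib/ff_dpdk_if.c (the exact line number and surrounding fields depend on your F-Stack version):

/* lib/ff_dpdk_if.c, inside init_port_start() -- sketch only. */

/* Before: F-Stack requests RSS, which this virtio PMD rejects. */
/* port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS; */

/* After: disable Rx multi-queue so the virtio PMD accepts the configuration. */
port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;

After rebuilding F-Stack, relink your application against the rebuilt library. Keep in mind that with ETH_MQ_RX_NONE you lose the RSS-based flow distribution mentioned above, which mainly matters when running multiple F-Stack processes, each handling its own queue.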

Note: most physical NICs support RSS, so with a physical NIC there would be no error here.