Using Open vSwitch with DPDK
============================

Open vSwitch can use the Intel(R) DPDK library to operate entirely in
userspace.  This file explains how to install and use Open vSwitch in
such a mode.

The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.

This version of Open vSwitch should be built manually with `configure`
and `make`.

OVS needs a system with 1GB hugepage support.
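
As a quick sanity check (not part of the official procedure), you can
confirm that the processor supports 1GB pages by looking for the
`pdpe1gb` CPU flag:

    grep -o pdpe1gb /proc/cpuinfo | uniq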

Building and Installing:
------------------------

DPDK 1.7 is required.

1. Configure build & install DPDK:
  1. Set `$DPDK_DIR`

     ```
     export DPDK_DIR=/usr/src/dpdk-1.7.1
     cd $DPDK_DIR
     ```

  2. Update `config/common_linuxapp` so that DPDK generates a single library
     file.  (This modification is also required for the IVSHMEM build.)

     `CONFIG_RTE_BUILD_COMBINE_LIBS=y`

     Then run `make install` to build and install the library.
     For default install without IVSHMEM:

     `make install T=x86_64-native-linuxapp-gcc`

     To include IVSHMEM (shared memory):

     `make install T=x86_64-ivshmem-linuxapp-gcc`

     For further details refer to http://dpdk.org/

2. Configure & build the Linux kernel:

   Refer to intel-dpdk-getting-started-guide.pdf for the DPDK kernel
   requirements.
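
   As a rough sketch (the getting started guide is authoritative), the
   kernel options DPDK relies on, such as UIO, hugetlbfs and HPET support,
   can be checked against the running kernel's configuration on
   distributions that install it under /boot:

        grep -E 'CONFIG_(UIO|HUGETLBFS|HPET|PROC_PAGE_MONITOR)=' \
            /boot/config-$(uname -r)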

3. Configure & build OVS:

   * Non IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

   * IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

   ```
   cd $OVS_DIR
   ./boot.sh
   ./configure --with-dpdk=$DPDK_BUILD
   make
   ```

For better performance one can enable aggressive compiler optimizations and
use special instructions (popcnt, crc32) that may not be available on all
machines.  Instead of typing `make`, type:

`make CFLAGS='-O3 -march=native'`

Refer to [INSTALL.userspace.md] for general requirements of building userspace OVS.

Using the DPDK with ovs-vswitchd:
---------------------------------

1. Setup system boot
   Add the following options to the kernel bootline:
   
   `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
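
   After rebooting, you can verify that the hugepage was actually allocated
   (a quick check, not a required step):

        grep HugePages_ /proc/meminfo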

2. Setup DPDK devices:
   1. insert uio.ko: `modprobe uio`
   2. insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
   3. Bind network device to igb_uio: `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
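
   To confirm the binding, the same script can report device status:

        $DPDK_DIR/tools/dpdk_nic_bind.py --status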

3. Mount the hugetlbfs filesystem

   `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`

   Refer to http://www.dpdk.org/doc/quick-start to verify the DPDK setup.
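
   Optionally, to make the mount persistent across reboots, an equivalent
   entry can be added to `/etc/fstab`:

        none /dev/hugepages hugetlbfs pagesize=1G 0 0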

4. Start ovsdb-server as discussed in [INSTALL.md] doc:
   1. First time only: create (or clear) the database:

        ```
        mkdir -p /usr/local/etc/openvswitch
        mkdir -p /usr/local/var/run/openvswitch
        rm -f /usr/local/etc/openvswitch/conf.db
        cd $OVS_DIR
        ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
             ./vswitchd/vswitch.ovsschema
        ```

    2. Start ovsdb-server:

        ```
        cd $OVS_DIR
        ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
          --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
          --private-key=db:Open_vSwitch,SSL,private_key \
          --certificate=db:Open_vSwitch,SSL,certificate \
          --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
        ```

    3. First time after db creation, initialize:

        ```
        cd $OVS_DIR
        ./utilities/ovs-vsctl --no-wait init
        ```
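
        At this point the database is running; as a quick check, `ovs-vsctl
        show` should succeed and print a mostly empty configuration:

        ```
        cd $OVS_DIR
        ./utilities/ovs-vsctl show
        ```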

5. Start vswitchd:

   DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
   argument, which must be the first argument passed to the vswitchd
   process.  The DPDK `-c` (coremask) argument is ignored by ovs-dpdk, but
   it is a required parameter for DPDK initialization.

        export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
        ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach

   If more than one GB of hugepages is allocated (as for IVSHMEM), use
   `--socket-mem` to set the amount of memory used per NUMA node.  This
   example uses memory from NUMA node 0 only:

        ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
          -- unix:$DB_SOCK --pidfile --detach
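
   A quick way to confirm that vswitchd started and stayed running is to
   check for its process:

        pidof ovs-vswitchd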

6. Add bridge & ports

   To use ovs-vswitchd with DPDK, create a bridge with datapath_type
   "netdev" in the configuration database.  For example:

        ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

   Now you can add DPDK devices.  OVS expects DPDK device names to start
   with "dpdk" and end with a port id.  vswitchd should print (in the log
   file) the number of DPDK devices found.

        ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
        ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk

   Once the first DPDK port is added to vswitchd, it creates a polling
   thread that polls the DPDK devices in a continuous loop.  Therefore CPU
   utilization for that thread is always 100%.
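
   As a quick check (optional), list the ports attached to the bridge:

        ovs-vsctl list-ports br0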

7. Add test flows

   The following script adds test flows between the two NICs (assuming OVS
   is installed in /usr/src/ovs).  Execute the script:

   ```
   #! /bin/sh
   # Move to command directory
   cd /usr/src/ovs/utilities/

   # Clear current flows
   ./ovs-ofctl del-flows br0

   # Add flows between port 1 (dpdk0) to port 2 (dpdk1)
   ./ovs-ofctl add-flow br0 in_port=1,action=output:2
   ./ovs-ofctl add-flow br0 in_port=2,action=output:1
   ```
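
   To verify that the flows were installed, dump them back from the same
   directory; the packet counters should increase once traffic flows:

        ./ovs-ofctl dump-flows br0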

8. Performance tuning

   With pmd multi-threading support, OVS creates one pmd thread for each
   numa node by default.  The pmd thread handles the I/O of all DPDK
   interfaces on the same numa node.  The following two commands can be used
   to configure the multi-threading behavior.

        ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>

   The command above asks for a CPU mask for setting the affinity of pmd threads.
   A set bit in the mask means a pmd thread is created and pinned to the
   corresponding CPU core.  For more information, please refer to
   `man ovs-vswitchd.conf.db`

        ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>

   The command above sets the number of rx queues of each DPDK interface. The
   rx queues are assigned to pmd threads on the same numa node in round-robin
   fashion.  For more information, please refer to `man ovs-vswitchd.conf.db`
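
   For example, to use four rx queues per DPDK interface (an illustrative
   value; tune it for your workload):

        ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4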

   Ideally, for maximum throughput, the pmd threads should not be scheduled
   out, since that temporarily halts their execution.  The following
   affinitization methods can help.

   Let's pick cores 4, 6, 8, and 10 for the pmd threads to run on.  Also
   assume a dual 8-core Sandy Bridge system with hyperthreading enabled,
   where CPU1 has cores 0,...,7 and 16,...,23 and CPU2 has cores 8,...,15
   and 24,...,31.  (A different cpu configuration could have different core
   mask requirements.)

   To the kernel bootline, add a core isolation list for these cores and
   their hyperthread siblings (e.g. isolcpus=4,20,6,22,8,24,10,26).  Reboot
   the system for the isolation to take effect, then restart everything.

   Configure the pmd threads on cores 4, 6, 8, and 10 using 'pmd-cpu-mask':

        ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550

   You should be able to check that pmd threads are pinned to the correct cores
   via:

        top -p `pidof ovs-vswitchd` -H -d1

   Note, the pmd threads on a numa node are only created if there is at least
   one DPDK interface from the numa node that has been added to OVS.

   Note, core 0 is always reserved for non-pmd threads and should never be
   set in the cpu mask.

DPDK Rings:
------------

Following the steps above to create a bridge, you can now add DPDK rings
as ports to the vswitch.  OVS will expect the DPDK ring device name to
start with "dpdkr" and end with a port id.

    ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr

DPDK rings client test application

Included in the test directory is a sample DPDK application for testing
the rings.  It is based on a sample from the base DPDK distribution,
modified to work with the ring naming used within OVS.

Location: tests/ovs_client

To run the client :

    cd /usr/src/ovs/tests/
    ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"

In the case of the dpdkr0 example above, the "port id you gave dpdkr" is 0.

It is essential to pass `--proc-type=secondary`.

The application simply receives an mbuf from the ring port's receive
queue and then places that same mbuf on the port's transmit queue.  It is
a trivial loopback application.

DPDK rings in VM (IVSHMEM shared memory communications)
-------------------------------------------------------

In addition to executing the client in the host, you can execute it within
a guest VM.  To do so you will need a patched qemu.  You can download the
patch and a getting started guide at:

https://01.org/packet-processing/downloads

A general rule of thumb for better performance is that the client
application should not be assigned the same DPDK core mask `-c` as
vswitchd.
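
For example (illustrative core masks; adjust them to your system), if
vswitchd was started with `--dpdk -c 0x1` (core 0), the client could be
pinned to core 1 instead:

    ovsclient -c 0x2 -n 4 --proc-type=secondary -- -n 0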

Restrictions:
-------------

  - This support is for physical NICs; it has been tested with Intel NICs
    only.
  - Only a 1500-byte MTU works; a few changes in the DPDK library are
    needed to fix this issue.
  - Currently, DPDK ports do not make use of any offload functionality.

  ivshmem:
  - The shared memory is currently restricted to the use of 1GB hugepages.
  - All hugepages are shared amongst the host, clients, virtual machines
    etc.

Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.

[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md