---------------------
NXP fast path network
---------------------
The NXP fast path solution improves the performance of socket-based network applications by
taking advantage of NXP network hardware virtualization.
Existing user applications require no changes to adopt this solution and gain a significant
performance improvement.
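Because the fast path is delivered as a preload library (libnfp.so, built below), an existing socket
binary picks it up at launch without recompilation. A minimal usage sketch, assuming a placeholder
binary name and a DPRC prepared as in one of the pre-test options below:
$ export DPRC=dprc.2                          #Resource container prepared in "Pre-test configurations".
$ LD_PRELOAD=./libnfp.so ./your_socket_app    #your_socket_app is a placeholder; any unmodified socket binary is run the same way.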
Build:
--------------------------------------------
1) Build DPDK from https://github.com/NXP/dpdk/commits/22.11-qoriq/:
$ export CROSS=aarch64-linux-gnu-
$ meson arm64-build --cross-file config/arm/arm64_dpaa_linux_gcc -Dc_args="-Werror" -Dusr_def_priority_last=60000 -Dexamples=all -Dprefix=/your/dpdk/install/folder
$ ninja -C arm64-build install
2) Build dpdk-nfp:
$ cmake . -Bbuild -DCMAKE_BUILD_TYPE=release -DCMAKE_C_COMPILER=aarch64-linux-gnu-gcc -DDPDK_INSTALL=/your/dpdk/install/folder
$ make -C build
#Generates build/lib/libnfp.so
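A quick sanity check of the build output might look like this; the DPDK library path below is an
assumption that depends on the meson prefix used above:
$ ls -l build/lib/libnfp.so                                              #Confirm the preload library was generated.
$ export LD_LIBRARY_PATH=/your/dpdk/install/folder/lib:$LD_LIBRARY_PATH  #Assumed install layout; adjust to where meson placed the DPDK libraries.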
Pre-test configurations:
--------------------------------------------
Three options are provided here; choose whichever one fits your scenario.
Option 1:
----------
Configure a DPDMUX whose uplink interface is connected to dpmac.x and whose two downlink interfaces
are connected to DPNIs. The default downlink interface is connected to the kernel port, and the other
downlink interface is connected to the DPDK port. A short verification sketch follows the commands below.
         ________
        |        |--dpni.a(DL)---eth(kernel)
        |        |                               dprc.1
 DPMAC--| DPDMUX |-----------------------------------------
        |        |                               dprc.2
        |________|--dpni.b(DL)--PMD(dpdk)
                                   |
                                   |-(hw)-rxq0(socket fd0)
                                   |-(hw)-rxq1(socket fd1)
$ ls-addni --no-link
#Assume the created interface is dpni.a (kernel interface ethx).
$ export DPRC=dprc.2
$ export MAX_QUEUES=8
$ export FS_ENTRIES=8
$ export MAX_TCS=1
$ source ./dynamic_dpl.sh dpmac.x
#Assume the interface created is dpmac.x-dpni.b.
$ echo dprc.2 > /sys/bus/fsl-mc/drivers/vfio-fsl-mc/unbind
$ restool dprc disconnect dprc.2 --endpoint=dpni.b
$ restool dpdmux create --default-if=1 --num-ifs=2 --method DPDMUX_METHOD_CUSTOM --manip=DPDMUX_MANIP_NONE --option=DPDMUX_OPT_CLS_MASK_SUPPORT --container=dprc.1
$ restool dprc connect dprc.1 --endpoint1=dpdmux.0.0 --endpoint2=dpmac.x
$ restool dprc connect dprc.1 --endpoint1=dpdmux.0.1 --endpoint2=dpni.a
$ restool dprc connect dprc.1 --endpoint1=dpdmux.0.2 --endpoint2=dpni.b
$ restool dprc assign dprc.1 --object=dpdmux.0 --child=dprc.2 --plugged=1
$ echo dprc.2 > /sys/bus/fsl-mc/drivers/vfio-fsl-mc/bind
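As a hedged sanity check of the Option 1 layout, the containers can be inspected with restool
(output format varies with MC firmware version):
$ restool dprc show dprc.1      #List the objects now present in the root container.
$ restool dprc show dprc.2      #List the objects handed to the fast-path container.
$ restool dpdmux info dpdmux.0  #Confirm the demux method and number of interfaces.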
Option 2:
----------
dpdk-port_fwd handles traffic from/to the DPMAC/PCIe EP. By default, ingress traffic from the DPMAC/PCIe EP
is directed to the kernel interface, while specified ingress traffic is directed to the user application.
A short launch sketch follows the command sequences below.
                                                          |        dprc.1
                                                          |
                                      (hw)-dpni.c(DL)-(hw)-dpni.f--eth(kernel)
                                     |                    |
PCIe EP/DPMAC(dpni.e)-(sw)-dpni.a-(hw)-dpni.b(UL)         |-----------------------------------
                                     |                    |
                                      (hw)-dpni.d(DL)-(hw)-dpni.g--PMD(dpdk)
                                         |                |           |
                                         |                |           |-(hw)-rxq0(socket fd0)
              dprc.2(dpdk-port_fwd)      |                |           |-(hw)-rxq1(socket fd1)
                    ^                    |                |
                    |                    |                |  dprc.3(app)
                    |____________________|________________|
i) Start port_fwd:
$ export ENABLE_PL_BIT=1 #For PCIe EP.
$ export LSX_PCIE2_PF1=0 #For PCIe EP. PCIe2/PF0 only
$ export LSINIC_PCIE2_PF0_DEVICE_ID=0x8d90 #For PCIe EP. Specify device ID of PCIe2/PF0.
$ export DPIO_COUNT=10
$ export DPRC=dprc.2
$ export P0_DIST_1='(0,0,2)'
$ export P4_DIST_1='(4,0,2)'
$ export PORT4_FWD=0
$ export PORT0_FWD=4
$ export MAX_QUEUES=8
$ export FS_ENTRIES=8
$ export MAX_TCS=1
$ export P1_DIST_1='(1,0,-1)'
$ export P2_DIST_1='(2,0,-1)'
$ export P3_DIST_1='(3,0,-1)'
$ source ./dynamic_dpl.sh dpni-dpni dpni dpni dpmac.x #DPMAC as external port.
or
$ source ./dynamic_dpl.sh dpni-dpni dpni dpni #PCIe EP as external port.
#Assume the DPNIs created are dpni.a-dpni.b, dpni.c, dpni.d, dpni.e(dpmac.x).
$ ls-addni dpni.c
#Assume the kernel interface created is ethx.
$ ./dpdk-port_fwd -c 0x4 -n 1 -- -p 0x1f --config="$P0_DIST_1,$P1_DIST_1,$P2_DIST_1,$P3_DIST_1,$P4_DIST_1" --direct-def="'(dpni.b, dpni.c),(dpni.c, dpni.b),(dpni.d, dpni.b)'" --direct-rsp
ii) Configure application:
$ export DPIO_COUNT=10
$ export DPRC=dprc.3
$ export MAX_QUEUES=8
$ export FS_ENTRIES=8
$ export MAX_TCS=1
$ export file_prefix=rte1
$ source ./dynamic_dpl.sh dpni.d
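With the dprc.3 environment from step ii) still exported, the unmodified application is launched under
the preload library; a minimal sketch with a placeholder binary name (iperf3 is used the same way in
the test section below):
$ LD_PRELOAD=./libnfp.so ./your_socket_app    #DPRC=dprc.3 and file_prefix=rte1 must remain set in this shell.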
Option 3:
----------
Create one dedicated thread to direct traffic among the DPMAC (PCIe EP), the kernel and the user
application (see the launch sketch after the commands below).
                                                   ____________________
                                                  |        dprc.1
                                                  |
PCIe EP/DPMAC(dpni.d)-(sw)-dpni.a-(hw)-dpni.b-(hw)-dpni.c-(hw)-dpni.e--eth(kernel)
                                     |            |
                                 PMD(dpdk)        |_____________________
                                     |
                                     |
                                     |-(hw)-rxq0(socket fd0)
                                     |        dprc.2(app)
                                     |-(hw)-rxq1(socket fd1)
$ export ENABLE_PL_BIT=1 #For PCIe EP.
$ export LSX_PCIE2_PF1=0 #For PCIe EP. PCIe2/PF0 only
$ export LSINIC_PCIE2_PF0_DEVICE_ID=0x8d90 #For PCIe EP. Specify device ID of PCIe2/PF0.
$ export DPRC=dprc.2
$ export MAX_QUEUES=8
$ export FS_ENTRIES=8
$ export MAX_TCS=1
$ source ./dynamic_dpl.sh dpni-dpni dpni dpmac.x #DPMAC as external port.
or
$ source ./dynamic_dpl.sh dpni-dpni dpni #PCIe EP as external port.
#Assume the DPNIs created are dpni.a-dpni.b, dpni.c, dpni.d(dpmac.x).
$ ls-addni dpni.c
#Assume the kernel interface created is ethx.
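For Option 3 the application container is dprc.2, so the preloaded application is started directly with
that environment; a minimal sketch with a placeholder binary name (it is assumed the dedicated
forwarding thread is set up by libnfp.so when the application starts):
$ LD_PRELOAD=./libnfp.so ./your_socket_app    #DPRC=dprc.2 must remain exported in this shell.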
Test the application (iperf3) based on any of the above configurations:
--------------------------------------------
server mode:
----------
$ ifconfig ethx 1.1.1.1 up
$ LD_PRELOAD=./libnfp.so iperf3 -s
client mode:
----------
$ ifconfig ethx 1.1.1.2 up
$ LD_PRELOAD=./libnfp.so iperf3 -c 1.1.1.1 -i 1 -t 10 -u -b 10G -l 64