
about the running problem. #9

Open
hardwood89 opened this issue Mar 23, 2016 · 4 comments

@hardwood89

No description provided.

@hardwood89
Author

hey,
I have run the example MPPublisher and MPSubscriber on two servers, using the parameters from the example and adjusting the network interface to match the servers'. But the receiver runs into problems. Could you help me solve this?
CPU info (8 cores): Intel(R) Xeon(R) CPU E5-1410 0 @ 2.80GHz, and the network interface is 10G.
The MPPublisher output:
Connecting transport default as MPUB-c39q
Connecting to interface eth10 on address /229.9.9.13 42044 dgramsize:15996
init send buffer for topic 1
allocating send buffer for topic 1 of 503 MByte
timer resolution:123 ns
rate limiting in window 8060928 ns (8060 micros) (8 millis)
Thu Mar 24 19:10:12 HKT 2016 ***** Stats for msg/s: 0 per second *********
Thu Mar 24 19:10:13 HKT 2016 ***** Stats for msg/s: 933594 per second *********
Thu Mar 24 19:10:14 HKT 2016 ***** Stats for msg/s: 897790 per second *********
Thu Mar 24 19:10:15 HKT 2016 ***** Stats for msg/s: 895616 per second *********
Thu Mar 24 19:10:16 HKT 2016 ***** Stats for msg/s: 896511 per second *********

The MPSubscriber output:

v3
Connecting transport default as MSUB-hm67
Connecting to interface eth11 on address /229.9.9.13 42044 dgramsize:15996
allocating read buffer for topic '1' of 503 MByte
for sender MPUB-c39q bootstrap sequence 1 no 1
bootstrap MPUB-c39q
Thu Mar 24 19:10:13 HKT 2016 ***** Stats for receive rate: 0 per second *********
retransmission retrial at 25 count 6 highest 3141 stream 1 retrans:RetransPacket{seqNo=-1, topic=1, sender=MSUB-hm67, receiver=MPUB-c39q, retransEntries=[[ 26,2235] [ 2236,2253] [ 2254,2463] [ 2464,2471] [ 2472,2479] [ 2480,2484] [ 2485,2498] [ 2499,2506] [ 2507,2511] [ 2512,2520] [ 2521,2528] [ 2529,2538] [ 2539,2547] [ 2548,2555] [ 2556,2564] [ 2565,2573] [ 2574,2581] [ 2582,2590] [ 2591,2598] [ 2599,2607] ], retransIndex=20} delay:10
retransmission retrial at 25 count 7 highest 3315 stream 1 retrans:RetransPacket{seqNo=-1, topic=1, sender=MSUB-hm67, receiver=MPUB-c39q, retransEntries=[[ 26,2235] [ 2236,2253] [ 2254,2463] [ 2464,2471] [ 2472,2479] [ 2480,2484] [ 2485,2498] [ 2499,2506] [ 2507,2511] [ 2512,2520] [ 2521,2528] [ 2529,2538] [ 2539,2547] [ 2548,2555] [ 2556,2564] [ 2565,2573] [ 2574,2581] [ 2582,2590] [ 2591,2598] [ 2599,2607] ], retransIndex=20} delay:10
retransmission retrial at 25 count 8 highest 3490 stream 1 retrans:RetransPacket{seqNo=-1, topic=1, sender=MSUB-hm67, receiver=MPUB-c39q, retransEntries=[[ 26,2235] [ 2236,2253] [ 2254,2463] [ 2464,2471] [ 2472,2479] [ 2480,2484] [ 2485,2498] [ 2499,2506] [ 2507,2511] [ 2512,2520] [ 2521,2528] [ 2529,2538] [ 2539,2547] [ 2548,2555] [ 2556,2564] [ 2565,2573] [ 2574,2581] [ 2582,2590] [ 2591,2598] [ 2599,2607] ], retransIndex=20} delay:10
retransmission retrial at 25 count 9 highest 3677 stream 1 retrans:RetransPacket{seqNo=-1, topic=1, sender=MSUB-hm67, receiver=MPUB-c39q, retransEntries=[[ 26,2235] [ 2236,2253] [ 2254,2463] [ 2464,2471] [ 2472,2479] [ 2480,2484] [ 2485,2498] [ 2499,2506] [ 2507,2511] [ 2512,2520] [ 2521,2528] [ 2529,2538] [ 2539,2547] [ 2548,2555] [ 2556,2564] [ 2565,2573] [ 2574,2581] [ 2582,2590] [ 2591,2598] [ 2599,2607] ], retransIndex=20} delay:10

@RuedigerMoeller
Owner

This looks like a multicast config problem or a bad network (many packet losses). Try reducing the send rate to a level where it works.
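
(Editor's note: a quick way to rule out basic multicast connectivity, independent of fast-cast, is a plain-JDK smoke test between the two hosts. This is a minimal sketch using the group and port from the logs above; the class name and timings are made up for illustration.)

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Run with arg "send" on the publisher host, and with no args on the subscriber host.
public class McastCheck {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("229.9.9.13"); // group from the logs
        int port = 42044;                                        // port from the logs
        if (args.length > 0 && "send".equals(args[0])) {
            // sender: one small datagram to the group every 100 ms
            try (DatagramSocket sock = new DatagramSocket()) {
                byte[] payload = "ping".getBytes("UTF-8");
                for (int i = 0; i < 100; i++) {
                    sock.send(new DatagramPacket(payload, payload.length, group, port));
                    Thread.sleep(100);
                }
            }
        } else {
            // receiver: join the group and print whatever arrives
            try (MulticastSocket sock = new MulticastSocket(port)) {
                // sock.setNetworkInterface(java.net.NetworkInterface.getByName("eth11")); // pin the NIC if needed
                sock.joinGroup(group); // deprecated on newer JDKs, fine for a smoke test
                byte[] buf = new byte[1500];
                DatagramPacket pkt = new DatagramPacket(buf, buf.length);
                while (true) {
                    sock.receive(pkt);
                    System.out.println("received " + pkt.getLength() + " bytes from " + pkt.getAddress());
                }
            }
        }
    }
}
```

If nothing arrives here either, the problem is routing/IGMP/firewall configuration rather than fast-cast itself.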

It's also sometimes a problem for the receiver to latch onto a high constant message flow: initially Java runs interpreted, so you can hit unrecoverable message loss before the JIT kicks in. The only way to handle this is to reduce the send rate / enlarge the receive (and probably also the send) buffers.
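
(Editor's note: one generic way to soften the warm-up problem, sketched here in plain Java rather than the fast-cast API, is to pace the publisher at a low rate until the receiver's hot paths have been JIT-compiled, then switch to the target rate. `sendOne()`, the rates, and the warm-up length are all hypothetical.)

```java
public class WarmupPacer {
    static void sendOne() { /* hypothetical stand-in for the actual publish call */ }

    public static void main(String[] args) {
        final long warmupNanos = 10_000_000_000L;           // 10 s warm-up phase (assumption)
        final long warmupRate = 50_000, fullRate = 900_000; // msgs/s targets (assumptions)
        long start = System.nanoTime(), sent = 0;
        while (true) {
            long elapsed = System.nanoTime() - start;
            // budget = messages allowed so far, accumulated piecewise over both phases
            long allowed = elapsed < warmupNanos
                ? (long) (warmupRate * (elapsed / 1e9))
                : (long) (warmupRate * (warmupNanos / 1e9)
                        + fullRate * ((elapsed - warmupNanos) / 1e9));
            if (sent < allowed) { sendOne(); sent++; }
            else Thread.onSpinWait(); // back off until the budget catches up
        }
    }
}
```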

As I sometimes play around with the samples (and then accidentally commit :) ), the configs in the repo might be too ambitious (set to localhost settings).

@hardwood89
Author

I have found the problem: I had overlooked that the subscriber sends data back to the publisher, and I hadn't set an OpenFlow rule for that return path. Now it works well, with high performance.
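
(Editor's note: for readers hitting the same thing, the subscriber's retransmission requests, the `RetransPacket` lines above, must be able to reach the publisher, so in an OpenFlow-controlled network both directions need flow rules. A hypothetical Open vSwitch sketch follows; the bridge name, port numbers, and match fields are placeholders to adapt to your deployment.)

```sh
# Forward path: publisher-facing port 1 -> subscriber-facing port 2
ovs-ofctl add-flow br0 "priority=100,in_port=1,udp,nw_dst=229.9.9.13,tp_dst=42044,actions=output:2"
# Return path: retransmission requests from the subscriber back toward the publisher
ovs-ofctl add-flow br0 "priority=100,in_port=2,udp,nw_dst=229.9.9.13,tp_dst=42044,actions=output:1"
```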

@RuedigerMoeller
Owner

Reopening as FAQ.
