
Possible connection leak in DefaultPooledConnectionProvider #3458

Open · bmaassenee opened this issue Oct 7, 2024 · 15 comments
Labels: type/bug A general bug

@bmaassenee

I'm investigating why I'm seeing PoolAcquireTimeoutException in my application, and came across something that makes no sense to me and that I suspect is a bug. I've configured the provider with a maximum of 500 connections and have metrics enabled for it. The metrics essentially say nothing is going on, while the connection pool says it has reached the maximum number of connections.

Expected Behavior

I would expect the number of available connections for the pool to be max - active.

Actual Behavior

What I'm seeing is the following metrics:

reactor.netty.connection.provider.active.connections{id=-1196502289,name=spring-webclient,remote.address=xxx.com:443} value=1
reactor.netty.connection.provider.idle.connections{id=-1196502289,name=spring-webclient,remote.address=xxx.com:443} value=0
reactor.netty.connection.provider.pending.connections{id=-1196502289,name=spring-webclient,remote.address=xxx.com:443} value=6
reactor.netty.connection.provider.total.connections{id=-1196502289,name=spring-webclient,remote.address=xxx.com:443} value=500

I would expect this to result in 499 available connections, but when I checked the memory dump I saw:

[heap dump screenshot omitted]

Unfortunately I've got no idea how I'm getting into this state and, as such, no idea how to reproduce this issue. However, where possible I'm more than willing to help out by providing extra info.
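For reference, the invariant at play is total = active + idle. A minimal sketch of checking it against a Micrometer MeterRegistry (metric names taken from the output above; the "spring-webclient" tag is the provider name, everything else here is illustrative):

import io.micrometer.core.instrument.MeterRegistry;

class PoolInvariantCheck {
	// Expected to be ~0; the metrics above show 500 - (1 + 0) = 499.
	static double unaccountedConnections( MeterRegistry registry ) {
		return gauge( registry, "reactor.netty.connection.provider.total.connections" )
				- gauge( registry, "reactor.netty.connection.provider.active.connections" )
				- gauge( registry, "reactor.netty.connection.provider.idle.connections" );
	}

	// The "name" tag corresponds to the ConnectionProvider name.
	private static double gauge( MeterRegistry registry, String metric ) {
		return registry.get( metric ).tag( "name", "spring-webclient" ).gauge().value();
	}
}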

Your Environment

reactorNettyVersion=1.0.9
reactorVersion=3.4.28
nettyVersion=4.1.77.Final
docker container=eclipse-temurin:21.0.3_9-jdk

Connection pool provider config:

		ConnectionProvider provider = ConnectionProvider.builder( "spring-webclient" )
				// no limit on the number of pending acquire requests; we rely on the short acquire timeout below instead
				.pendingAcquireMaxCount( -1 )
				// allow up to 500 parallel connections
				.maxConnections( 500 )
				// give up if we can't get a connection within 10 sec
				.pendingAcquireTimeout( Duration.ofSeconds( 10 ) )
				// close connections if not used for 30 sec
				.maxIdleTime( Duration.ofSeconds( 30 ) )
				// check connections of the pool every minute to see if we can close them
				.evictInBackground( Duration.ofSeconds( 60 ) )
				// use the last returned connection first; this way we keep our pool lean
				.lifo()
				// enables metrics https://projectreactor.io/docs/netty/release/reference/index.html#_metrics_5
				.metrics( true )
				.build();
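For context, a provider like this is typically plugged into the WebClient roughly as follows (a sketch assuming the standard reactor-netty and spring-webflux APIs; not part of the original report):

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

		// the pooled provider from above backs the Reactor Netty HttpClient,
		// which in turn backs the Spring WebClient
		HttpClient httpClient = HttpClient.create( provider );
		WebClient webClient = WebClient.builder()
				.clientConnector( new ReactorClientHttpConnector( httpClient ) )
				.build();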
bmaassenee added the status/need-triage and type/bug labels on Oct 7, 2024
@violetagg (Member)

@bmaassenee Your Reactor Netty version is very old (reactorNettyVersion=1.0.9) and is not supported anymore. Please update to a supported version (more info here)

violetagg self-assigned this on Oct 7, 2024
violetagg added the for/user-attention label and removed the status/need-triage label on Oct 7, 2024
@bmaassenee (Author)

Hi @violetagg, I will update the lib version right away. Just to be sure: if I see the above again, this would qualify as a bug, right? And not as me misinterpreting the metrics?

@violetagg (Member)

> Hi @violetagg, I will update the lib version right away. Just to be sure: if I see the above again, this would qualify as a bug, right? And not as me misinterpreting the metrics?

It seems so ...

@bmaassenee (Author)

@violetagg I updated to the latest versions available within the same minor version, and I'm still seeing the issue:

reactor.netty.connection.provider.active.connections{id=836186337,name=spring-webclient,remote.address=xxx.com:443} value=18
reactor.netty.connection.provider.idle.connections{id=836186337,name=spring-webclient,remote.address=xxx.com:443} value=1
reactor.netty.connection.provider.max.connections{id=836186337,name=spring-webclient,remote.address=xxx.com:443} value=500
reactor.netty.connection.provider.max.pending.connections{id=836186337,name=spring-webclient,remote.address=xxx.com:443} value=2147483647
reactor.netty.connection.provider.pending.connections{id=836186337,name=spring-webclient,remote.address=xxx.com:443} value=0
reactor.netty.connection.provider.total.connections{id=836186337,name=spring-webclient,remote.address=xxx.com:443} value=107

lib versions used:
reactorNettyVersion=1.0.48
reactorVersion=3.4.41
nettyVersion=4.1.114.Final

@violetagg (Member)

@bmaassenee Is there a reproducible example that you can provide?

@bmaassenee (Author) commented Oct 8, 2024

Unfortunately not. I'm going through logs combined with metrics to try to find some clue about what's going on. Any hints on what I could look for or which log levels I could set would be appreciated.

@violetagg (Member)

@bmaassenee In the logs you should be able to see messages like this:

Channel cleaned, now: 0 active connections, 1 inactive connections and 0 pending acquire requests.

@bmaassenee (Author) commented Oct 8, 2024

So, by comparing jumps in this metric expression:

reactor_netty_connection_provider_total_connections{kubernetes_pod_name="media-service-c000-d94589f6d-zf4n7", id="836186337"} - (reactor_netty_connection_provider_idle_connections + reactor_netty_connection_provider_active_connections) 

with the logs of our interceptor:

	@Override
	public Mono<ClientResponse> filter( ClientRequest request, ExchangeFunction next )
	{
		... // setup elided, including the Instant start used below
		_logger.debug(
				"Starting web request: {} {} {}{}",
				request.method(),
				request.url(),
				PrivacyUtils.privacyFilter( request.headers() ),
				cookieKeys
		);

		return next.exchange( request )
				.doOnSuccess( response -> _logger.debug( "request {} {} responded with {} after {}",
						request.method(),
						PrivacyUtils.privacyFilter( request.url().toString() ),
						response.rawStatusCode(),
						Duration.between( start, Instant.now() )
				))
				.doOnError( ex -> _logger.warn( "request {} {} resulted in error {} after {}",
						request.method(),
						PrivacyUtils.privacyFilter( request.url().toString() ),
						ExceptionMessageUtils.getMessage( ex ),
						Duration.between( start, Instant.now() )
				));
	}

Each time the metric jumps up by 1, I found an entry of the "Starting web request" log but no corresponding result log. That is weird, as we're setting up connect and read timeouts, so I would expect something at most 60 sec after the "Starting" log.

HttpClient config:

		httpClient = httpClient
				// show raw requests in logs if you add <AsyncLogger name="reactor.netty.http.client" level="trace"/> to the log config
				.wiretap( "reactor.netty.http.client.HttpClient", LogLevel.TRACE, AdvancedByteBufFormat.TEXTUAL )
				// TODO added to debug INFRA-1358 by using a different dns resolver
				.resolver( DefaultAddressResolverGroup.INSTANCE )
				.option( ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000 )
				.headers( h -> h.set( HttpHeaders.USER_AGENT, createUserAgent() ) )
				.compress( true )
				.responseTimeout( Duration.ofMillis( 60_000 ) );

@violetagg (Member)

Can you also track cancellations (doOnCancel)?
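For instance, a sketch extending the interceptor shown earlier (reusing start and _logger from it):

		return next.exchange( request )
				// existing doOnSuccess / doOnError handlers as above, plus:
				.doOnCancel( () -> _logger.info( "request {} {} was cancelled after {}",
						request.method(),
						PrivacyUtils.privacyFilter( request.url().toString() ),
						Duration.between( start, Instant.now() )
				));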

@bmaassenee (Author)

@violetagg added it, and it's not printing.

@violetagg (Member)

@bmaassenee Without a reproducible example, some scenario, or detailed logs, I don't know how we can proceed with this one.

@bmaassenee (Author)

Can you give me the classes/packages and log levels you would require to get the detailed logs? I can enable those.

@violetagg (Member)

> Can you give me the classes/packages and log levels you would require to get the detailed logs? I can enable those.

Maybe try with the reactor.netty.resources logger.
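With the log4j2 setup quoted in the HttpClient config above, that would be something like (a sketch):

<AsyncLogger name="reactor.netty.resources" level="debug"/>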

@bmaassenee (Author)

I'm getting the feeling it has to do with retries; unfortunately, permits are not logged by the connection pool:

2024-10-15 10:27:26,953 [parallel-13] INFO  LoggingInterceptor {} - request GET https://xxx.com/asset/prev/image.jpeg?id=10107fd3&timestamp=20241015082721.068&asset_class=all&woei=1728980842952 was cancelled after PT4.000229537S
2024-10-15 10:27:26,953 [parallel-13] WARN  ImageServiceImpl {} - retry(1/5): Get preview image before 10107fd3 of 2024-10-15T08:27:21.068Z failed because: Did not observe any item or terminal signal within 4000ms in 'onErrorResume' (and no fallback has been configured)
2024-10-15 10:27:26,954 [parallel-13] DEBUG LoggingInterceptor {} - Starting web request: GET https://xxx.com/asset/prev/image.jpeg?id=10107fd3&timestamp=20241015082721.068&asset_class=all&woei=1728980842952 [Accept:"application/json", client:"vmsGlobalWebClient", User-Agent:"EEN Java Service: MEDIA_V3_SERVICE/s397.eencloud.com", Cookie:"XXXX", X-B3-TraceId:"dc54d44fd12d3c5b6d181bb38f1be9a0", X-B3-ParentSpanId:"ce7d655df5b66842", X-B3-SpanId:"b9220de16d18ce18", X-B3-Sampled:"1"] Cookie keys: [auth_key]
2024-10-15 10:27:26,955 [reactor-http-epoll-20] DEBUG PooledConnectionProvider {} - [f2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] Channel acquired, now: 7 active connections, 3 inactive connections and 0 pending acquire requests.
2024-10-15 10:27:26,955 [reactor-http-epoll-20] DEBUG DefaultPooledConnectionProvider {} - [f2df2460-3, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] onStateChange(GET{uri=/asset/prev/image.jpeg, connection=PooledConnection{channel=[id: 0xf2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443]}}, [request_prepared])
2024-10-15 10:27:26,955 [reactor-http-epoll-20] DEBUG DefaultPooledConnectionProvider {} - [f2df2460-3, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] onStateChange(GET{uri=/asset/prev/image.jpeg, connection=PooledConnection{channel=[id: 0xf2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443]}}, [request_sent])
2024-10-15 10:27:26,960 [reactor-http-epoll-20] DEBUG DefaultPooledConnectionProvider {} - [f2df2460-3, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] onStateChange(GET{uri=/asset/prev/image.jpeg, connection=PooledConnection{channel=[id: 0xf2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443]}}, [response_received])
2024-10-15 10:27:26,960 [reactor-http-epoll-20] DEBUG LoggingInterceptor {} - request GET https://xxx.com/asset/prev/image.jpeg?id=10107fd3&timestamp=20241015082721.068&asset_class=all&woei=1728980842952 responded with 200 after PT0.00618092S
2024-10-15 10:27:26,961 [reactor-http-epoll-20] DEBUG DefaultPooledConnectionProvider {} - [f2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] onStateChange(GET{uri=/asset/prev/image.jpeg, connection=PooledConnection{channel=[id: 0xf2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443]}}, [response_completed])
2024-10-15 10:27:26,961 [reactor-http-epoll-20] DEBUG DefaultPooledConnectionProvider {} - [f2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] onStateChange(GET{uri=/asset/prev/image.jpeg, connection=PooledConnection{channel=[id: 0xf2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443]}}, [disconnecting])
2024-10-15 10:27:26,961 [reactor-http-epoll-20] DEBUG DefaultPooledConnectionProvider {} - [f2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] Releasing channel
2024-10-15 10:27:26,962 [reactor-http-epoll-20] DEBUG PooledConnectionProvider {} - [f2df2460, L:/x.x.x.x:43896 - R:xxx.com/216.245.93.73:443] Channel cleaned, now: 6 active connections, 4 inactive connections and 0 pending acquire requests.

The metrics say that total connections for this endpoint is at 64 at this point.
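The "retry(1/5)" and "Did not observe any item or terminal signal within 4000ms" messages above suggest a call chain of roughly this shape (a hypothetical reconstruction from the log output, not the actual application code; the URI variable and retry delay are placeholders):

		webClient.get()
				.uri( previewImageUri ) // placeholder
				.retrieve()
				.bodyToMono( byte[].class )
				// 4 s per-attempt timeout: on expiry the in-flight request is cancelled and
				// "Did not observe any item or terminal signal within 4000ms" is raised
				.timeout( Duration.ofSeconds( 4 ) )
				// up to 5 retries: each retry issues a new request (and may acquire a new
				// pooled connection) while the cancelled one is still being wound down
				.retryWhen( Retry.fixedDelay( 5, Duration.ofMillis( 100 ) ) );

If a cancelled attempt's connection were never handed back to the pool, each timeout-plus-retry cycle would bump total connections by one, which would match the jumps described above.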

violetagg removed the for/user-attention label on Oct 17, 2024
@bmaassenee (Author) commented Oct 18, 2024

@violetagg It took me a while to get the proper logs; having all those parallel async API calls going out makes following the logs quite hard...

But I think I've captured the problem. The failing case:

2024-10-18 14:35:08,762 [parallel-5] DEBUG LoggingInterceptor {traceId=aaa41794f0495e2240d4e8a66dc76309} - Starting web request: GET https://xxx.com/api/v3.0/cameras/xxx [Accept:"application/json", client:"v3ClusterWebClient", User-Agent:"EEN Java Service: MEDIA_V3_SERVICE/s218", Authorization:"XXXX", X-B3-TraceId:"aaa41794f0495e2240d4e8a66dc76309", X-B3-ParentSpanId:"f932c79c0d9cc818", X-B3-SpanId:"63a64b8188a259d7", X-B3-Sampled:"0"]
2024-10-18 14:35:08,762 [parallel-7] DEBUG LoggingInterceptor {traceId=aaa41794f0495e2240d4e8a66dc76309} - Starting web request: GET https://yyy.com/g/aaa/vt?t=xxx [Accept:"application/json", X-B3-TraceId:"aaa41794f0495e2240d4e8a66dc76309", X-B3-ParentSpanId:"f932c79c0d9cc818", X-B3-SpanId:"38808db5cb3a5df2", X-B3-Sampled:"0"]
2024-10-18 14:35:08,764 [reactor-http-epoll-1] DEBUG DefaultPooledConnectionProvider {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] Registering pool release on close event for channel
2024-10-18 14:35:08,764 [reactor-http-epoll-1] DEBUG PooledConnectionProvider {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
2024-10-18 14:35:08,766 [Jetty-5114] DEBUG SreIngressMetricsInterceptor {traceId=aaa41794f0495e2240d4e8a66dc76309} - Handling GET /api/v3.0/feeds with params {}.
2024-10-18 14:35:08,768 [reactor-http-epoll-1] DEBUG HttpClientConnect {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa-1, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] Handler is being applied: {uri=https://xxx.com/api/v3.0/cameras/xxx, method=GET}
2024-10-18 14:35:08,768 [reactor-http-epoll-1] DEBUG DefaultPooledConnectionProvider {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa-1, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] onStateChange(GET{uri=/api/v3.0/cameras/xxx, connection=PooledConnection{channel=[id: 0x02cc65fa, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443]}}, [request_prepared])
2024-10-18 14:35:08,768 [reactor-http-epoll-1] DEBUG DefaultPooledConnectionProvider {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa-1, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] onStateChange(GET{uri=/api/v3.0/cameras/xxx, connection=PooledConnection{channel=[id: 0x02cc65fa, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443]}}, [request_sent])
2024-10-18 14:35:08,816 [reactor-http-epoll-1] DEBUG LoggingInterceptor {traceId=aaa41794f0495e2240d4e8a66dc76309} - request GET https://xxx.com/api/v3.0/cameras/xxx responded with 200 after PT0.054194806S
2024-10-18 14:35:08,816 [reactor-http-epoll-1] DEBUG FluxReceive {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa-1, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] [terminated=false, cancelled=false, pending=0, error=null]: subscribing inbound receiver
2024-10-18 14:35:08,816 [reactor-http-epoll-1] DEBUG HttpClientOperations {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa-1, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] Received last HTTP packet
2024-10-18 14:35:08,816 [reactor-http-epoll-1] DEBUG DefaultPooledConnectionProvider {traceId=aaa41794f0495e2240d4e8a66dc76309} - [02cc65fa, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443] onStateChange(GET{uri=/api/v3.0/cameras/xxx, connection=PooledConnection{channel=[id: 0x02cc65fa, L:/192.40.5.106:50628 - R:xxx.com/208.81.97.184:443]}}, [response_completed])
2024-10-18 14:40:08,762 [Jetty-5238] INFO  LoggingInterceptor {} - request GET https://xxx.com/api/v3.0/cameras/xxx was cancelled after PT5M0.000051037S
2024-10-18 14:40:08,762 [Jetty-5238] INFO  LoggingInterceptor {} - request GET https://yyy.com/g/aaa/vt?t=xxx was cancelled after PT4M59.999870963S
2024-10-18 14:40:08,763 [Jetty-5238] CRITICAL ApiV3CustomExceptionHandler {traceId=aaa41794f0495e2240d4e8a66dc76309} - Api v3 Request GET /api/v3.0/feeds resulted in exception: An internal error has occurred. Please try again later.
Caused by: org.springframework.web.context.request.async.AsyncRequestTimeoutException
	at org.springframework.web.context.request.async.TimeoutDeferredResultProcessingInterceptor.handleTimeout(TimeoutDeferredResultProcessingInterceptor.java:42) ~[spring-web-5.3.30.jar:5.3.30]
	at org.springframework.web.context.request.async.DeferredResultInterceptorChain.triggerAfterTimeout(DeferredResultInterceptorChain.java:79) ~[spring-web-5.3.30.jar:5.3.30]
	at org.springframework.web.context.request.async.WebAsyncManager.lambda$startDeferredResultProcessing$5(WebAsyncManager.java:438) ~[spring-web-5.3.30.jar:5.3.30]
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1596) ~[?:?]
	at org.springframework.web.context.request.async.StandardServletAsyncWebRequest.onTimeout(StandardServletAsyncWebRequest.java:151) ~[spring-web-5.3.30.jar:5.3.30]
	at org.eclipse.jetty.server.HttpChannelState$2.run(HttpChannelState.java:669) ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
	at org.eclipse.jetty.server.handler.ContextHandler.handle(ContextHandler.java:1525) ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
	at org.eclipse.jetty.server.HttpChannelState.runInContext(HttpChannelState.java:1225) ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
	at org.eclipse.jetty.server.HttpChannelState.onTimeout(HttpChannelState.java:685) ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:500) ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
	... 4 more
2024-10-18 14:40:08,763 [Jetty-5238] INFO  SreIngressMetricsInterceptor {traceId=aaa41794f0495e2240d4e8a66dc76309} - Handled GET /api/v3.0/feeds and responded with status code 500 after 300003 ms
2024-10-18 14:40:08,763 [Jetty-5238] ERROR SreIngressMetricsInterceptor {traceId=aaa41794f0495e2240d4e8a66dc76309} - Incoming api GET /api/v3.0/feeds took more then 10 sec to process: 300003 ms
2024-10-18 14:40:08,763 [Jetty-5238] INFO  ScrubbedNCSARequestLog {traceId=aaa41794f0495e2240d4e8a66dc76309} - {"host":"89.188.2.30","user":"-","time":"2024-10-18T14:35:08.759+0200","httpmethod":"GET","path":"/api/v3.0/feeds?deviceId=xxx&include=multipartUrl,flvUrl,webRtcUrl","code":"500","size":"215","referer":"https://zzz/","agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36","duration":"300004"}

Both calls are part of a 3-way zip:

Mono.zip(
	xxx.subscribeOn( Schedulers.parallel() ).contextWrite( accessTokenInContext ),
	yyy.subscribeOn( Schedulers.parallel() ).contextWrite( accessTokenInContext ),
	Mono.just( Credentials.builder().build() ).subscribeOn( Schedulers.parallel() )
)

From the logs it seems the call to https://yyy.com/g/aaa/vt?t=xxx is started, but from there nothing is logged anymore for it. I assume we're not even reaching the part that adds the timeout handler, which explains why it doesn't kick in; in the end it's the timeout of Spring's async manager that kills the chain.
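If so, a defensive guard would be a reactive timeout on each zipped Mono; unlike responseTimeout, it also covers the connection-acquisition phase that apparently never completes here (a sketch reusing the zip above; the 70 s budget is illustrative):

Mono.zip(
	xxx.timeout( Duration.ofSeconds( 70 ) ) // fails the zip even if the HTTP-level
			.subscribeOn( Schedulers.parallel() ) // response-timeout handler was never installed
			.contextWrite( accessTokenInContext ),
	yyy.timeout( Duration.ofSeconds( 70 ) )
			.subscribeOn( Schedulers.parallel() )
			.contextWrite( accessTokenInContext ),
	Mono.just( Credentials.builder().build() ).subscribeOn( Schedulers.parallel() )
)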

A successful call produces the following logs:

2024-10-18 14:35:08,767 [parallel-6] DEBUG LoggingInterceptor {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - Starting web request: GET https://yyy.com/g/aaa/vt?t=xxx [Accept:"application/json", X-B3-TraceId:"b14f70f4d89aead2a73ea5ad3893efd1", X-B3-ParentSpanId:"b6bd29089ec724f9", X-B3-SpanId:"69ef7aba6ae86708", X-B3-Sampled:"0"]
2024-10-18 14:35:08,767 [parallel-4] DEBUG LoggingInterceptor {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - Starting web request: GET https://xxx.com/api/v3.0/cameras/xxx [Accept:"application/json", Authorization:"XXXX", X-B3-TraceId:"b14f70f4d89aead2a73ea5ad3893efd1", X-B3-ParentSpanId:"b6bd29089ec724f9", X-B3-SpanId:"12eb1ce7c74bd3c2", X-B3-Sampled:"0"]
2024-10-18 14:35:08,768 [reactor-http-epoll-7] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] Registering pool release on close event for channel
2024-10-18 14:35:08,768 [reactor-http-epoll-7] DEBUG PooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] Channel connected, now: 3 active connections, 0 inactive connections and 0 pending acquire requests.
2024-10-18 14:35:08,772 [reactor-http-epoll-7] DEBUG HttpClientConnect {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507-1, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] Handler is being applied: {uri=https://xxx.com/api/v3.0/cameras/xxx, method=GET}
2024-10-18 14:35:08,773 [reactor-http-epoll-7] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507-1, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] onStateChange(GET{uri=/api/v3.0/cameras/xxx, connection=PooledConnection{channel=[id: 0x502ff507, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443]}}, [request_prepared])
2024-10-18 14:35:08,773 [reactor-http-epoll-7] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507-1, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] onStateChange(GET{uri=/api/v3.0/cameras/xxx, connection=PooledConnection{channel=[id: 0x502ff507, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443]}}, [request_sent])
2024-10-18 14:35:08,778 [reactor-http-epoll-6] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] Registering pool release on close event for channel
2024-10-18 14:35:08,778 [reactor-http-epoll-6] DEBUG PooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] Channel connected, now: 50 active connections, 1 inactive connections and 0 pending acquire requests.
2024-10-18 14:35:08,780 [reactor-http-epoll-6] DEBUG HttpClientConnect {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086-1, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] Handler is being applied: {uri=https://yyy.com/g/aaa/vt?t=xxx, method=GET}
2024-10-18 14:35:08,780 [reactor-http-epoll-6] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086-1, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] onStateChange(GET{uri=/g/aaa/vt, connection=PooledConnection{channel=[id: 0xec9bc086, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443]}}, [request_prepared])
2024-10-18 14:35:08,780 [reactor-http-epoll-6] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086-1, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] onStateChange(GET{uri=/g/aaa/vt, connection=PooledConnection{channel=[id: 0xec9bc086, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443]}}, [request_sent])
2024-10-18 14:35:08,798 [reactor-http-epoll-6] DEBUG LoggingInterceptor {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - request GET https://yyy.com/g/aaa/vt?t=xxx responded with 200 after PT0.031605912S
2024-10-18 14:35:08,799 [reactor-http-epoll-6] DEBUG FluxReceive {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086-1, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] [terminated=false, cancelled=false, pending=0, error=null]: subscribing inbound receiver
2024-10-18 14:35:08,799 [reactor-http-epoll-6] DEBUG HttpClientOperations {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086-1, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] Received last HTTP packet
2024-10-18 14:35:08,799 [reactor-http-epoll-6] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [ec9bc086, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443] onStateChange(GET{uri=/g/aaa/vt, connection=PooledConnection{channel=[id: 0xec9bc086, L:/192.40.5.106:45796 - R:yyy.com/192.40.4.23:443]}}, [response_completed])
2024-10-18 14:35:08,810 [reactor-http-epoll-7] DEBUG LoggingInterceptor {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - request GET https://xxx.com/api/v3.0/cameras/xxx responded with 200 after PT0.043456299S
2024-10-18 14:35:08,811 [reactor-http-epoll-7] DEBUG FluxReceive {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507-1, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] [terminated=false, cancelled=false, pending=0, error=null]: subscribing inbound receiver
2024-10-18 14:35:08,811 [reactor-http-epoll-7] DEBUG HttpClientOperations {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507-1, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] Received last HTTP packet
2024-10-18 14:35:08,812 [reactor-http-epoll-7] DEBUG DefaultPooledConnectionProvider {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - [502ff507, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443] onStateChange(GET{uri=/api/v3.0/cameras/xxx, connection=PooledConnection{channel=[id: 0x502ff507, L:/192.40.5.106:50646 - R:xxx.com/208.81.97.184:443]}}, [response_completed])
2024-10-18 14:35:08,813 [Jetty-5240] INFO  SreIngressMetricsInterceptor {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - Handled GET /api/v3.0/feeds and responded with status code 200 after 47 ms
2024-10-18 14:35:08,813 [Jetty-5240] INFO  ScrubbedNCSARequestLog {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - {"host":"89.188.2.30","user":"-","time":"2024-10-18T14:35:08.765+0200","httpmethod":"GET","path":"/api/v3.0/feeds?deviceId=xxx&include=multipartUrl,flvUrl,webRtcUrl","code":"200","size":"264","referer":"https://zzz/","agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36","duration":"48"}
2024-10-18 14:35:09,962 [Jetty-5114] DEBUG SreIngressMetricsInterceptor {traceId=b14f70f4d89aead2a73ea5ad3893efd1} - Handling GET /media/streams/{type}/multipart with params {type=preview}.
