Replies: 4 comments 24 replies
-
To be more specific, I followed the code at https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/examples/AWSDriverExample/src/main/java/software/amazon/ReadWriteSplittingPostgresExample.java, where the internal connection pool setup is commented out. I will uncomment that section to use an internal connection pool and check whether the above issue is addressed.
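For reference, the part I plan to uncomment looks roughly like this (a sketch only; the endpoint, credentials, and pool size are placeholders, and the exact provider registration call should be checked against the example linked above since it can differ between driver versions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

import com.zaxxer.hikari.HikariConfig;

import software.amazon.jdbc.ConnectionProviderManager;
import software.amazon.jdbc.HikariPooledConnectionProvider;

public class InternalPoolSetup {
    public static void main(String[] args) throws Exception {
        // Register an internal connection pool provider BEFORE opening any connections;
        // the wrapper then maintains one Hikari pool per database instance it connects to.
        HikariPooledConnectionProvider provider =
            new HikariPooledConnectionProvider((hostSpec, props) -> {
                HikariConfig config = new HikariConfig();
                config.setMaximumPoolSize(30);   // per-instance cap, not a global one
                return config;
            });
        ConnectionProviderManager.setConnectionProvider(provider);

        Properties props = new Properties();
        props.setProperty("user", "myUser");          // placeholder credentials
        props.setProperty("password", "myPassword");
        props.setProperty("wrapperPlugins", "readWriteSplitting,failover");

        // Placeholder writer cluster endpoint.
        String url = "jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/postgres";
        try (Connection conn = DriverManager.getConnection(url, props)) {
            conn.setReadOnly(true);   // the plugin switches to a reader; that connection is pooled
            // ... run read-only queries ...
        }
    }
}
```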
-
Hi @jiyanhbo, thank you for reaching out with this problem. I have a few questions that will help clarify the problem for me.
-
The cluster topology can be found here. The metrics for DB connections can be found here. You can see that most of the DB connections are from one Postgres instance.
-
Hi @jiyanhbo, I believe I understand what is happening here. There are two ways to get reader load balancing:
1. Let the driver choose a reader: enable the readWriteSplitting plugin (optionally with internal connection pools) and call setReadOnly(true), so the plugin switches the connection to a reader instance.
2. Connect to the Aurora reader cluster endpoint and rely on Aurora's DNS-based load balancing.
In your situation, you are load-balancing readers using the 2nd strategy (connecting to the reader cluster endpoint). Note that this method of load-balancing is completely controlled by Aurora, not our driver. I found this Aurora documentation that mentions some reasons why you might not be seeing load-balancing when connecting to the reader cluster. From my own testing, I realized that Aurora will connect you to the same reader if you are establishing many connections within a very short time frame. If you wait 5 seconds between each connection (as shown in that Aurora doc page I linked), you should see more load-balancing happening. If not, you may want to review the other reasons on that page for why you might not be seeing load-balancing.

Regarding the connection pool size not being obeyed, please note that initial connections to a cluster URL will not be pooled. This is intentional, as pooling cluster URLs may be problematic because they resolve to different instances over time. The instances are not capped at 30 in your example because the connections are using the cluster URL and thus are not being pooled.

Currently, the main benefit of internal pools is when setReadOnly is called. When setReadOnly is called (regardless of the initial connection URL), an internal pool will be created for the writer/reader that the plugin switches to, and connections for that instance can be reused in the future.
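As a rough sketch of that setReadOnly flow (placeholder endpoint, credentials, and table name; it assumes the readWriteSplitting plugin is enabled and, if pooling is wanted, that an internal pool provider was registered beforehand):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class SetReadOnlyFlow {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "myUser");              // placeholder credentials
        props.setProperty("password", "myPassword");
        props.setProperty("wrapperPlugins", "readWriteSplitting");

        // The initial connection uses the cluster URL and is therefore NOT pooled.
        String url = "jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/postgres";

        try (Connection conn = DriverManager.getConnection(url, props)) {
            try (Statement writeStmt = conn.createStatement()) {
                writeStmt.executeUpdate("INSERT INTO demo VALUES (1)");   // executed on the writer
            }

            // setReadOnly(true) makes the plugin switch to a reader instance; with internal
            // pools enabled, a pool keyed to that reader is created and reused later.
            conn.setReadOnly(true);
            try (Statement readStmt = conn.createStatement();
                 ResultSet rs = readStmt.executeQuery("SELECT count(*) FROM demo")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }

            conn.setReadOnly(false);   // switch back to the writer for subsequent writes
        }
    }
}
```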
-
I followed the code sample to execute read & write SQL statements by calling the reader & writer endpoints. However, the read requests are not split evenly across the replica instances. The reader endpoint has 3 replica instances, and most of the DB connections for read operations come from a single instance.
The metrics for reader DB connections can be found here (instance _1 is the primary instance and the others are replicas).
Did I set anything up wrong? My understanding is that the wrapper will balance read traffic across the replica instances on its own.
Is that correct? If yes, should I add connection pool initialization code explicitly to fix the issue? If not, how can I split the read traffic evenly among the replica instances? A sketch of my setup is below.
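For reference, the pattern I'm describing looks roughly like this (a sketch with placeholder endpoints and credentials):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class EndpointSplitExample {
    // Placeholder endpoints; substitute your cluster's writer and reader endpoints.
    static final String WRITER_URL =
        "jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/postgres";
    static final String READER_URL =
        "jdbc:aws-wrapper:postgresql://my-cluster.cluster-ro-xyz.us-east-1.rds.amazonaws.com:5432/postgres";

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "myUser");
        props.setProperty("password", "myPassword");
        props.setProperty("wrapperPlugins", "readWriteSplitting,failover");

        // Writes go through the writer cluster endpoint.
        try (Connection writer = DriverManager.getConnection(WRITER_URL, props)) {
            // ... execute INSERT/UPDATE statements ...
        }

        // Reads go through the reader cluster endpoint; which replica each new
        // connection lands on is decided by Aurora's DNS, not by the wrapper.
        try (Connection reader = DriverManager.getConnection(READER_URL, props)) {
            // ... execute SELECT statements ...
        }
    }
}
```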