
JMeter testing with 3 replicas

Renuka Srishti edited this page Mar 8, 2022 · 14 revisions

Introduction

Here we are running only 3 instances of each microservice for the whole application. All microservices are accessed through the api-gateway only. We are using a Jetstream2 instance named "Instance Viyad", which has 16 CPU cores and 60 GB of RAM in total.
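
The 3-replica setup described above would typically be expressed in each microservice's Kubernetes Deployment spec. A minimal sketch (the service name and image are placeholders, not the project's actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-microservice      # placeholder name
spec:
  replicas: 3                     # 3 instances per microservice, as above
  selector:
    matchLabels:
      app: example-microservice
  template:
    metadata:
      labels:
        app: example-microservice
    spec:
      containers:
        - name: example-microservice
          image: example/image:latest   # placeholder image
```

A Kubernetes Service in front of these pods then load-balances requests across the 3 replicas, which is what the api-gateway talks to.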

Load testing

Number of users: 250 × 10 (10 requests per user, so 2500 requests in total)

  • Expected throughput at each microservice SummaryReport
  • Active threads over time ActiveThreadsOverTime
  • Response Times Over Time ResponseTimesOverTime
  • Transactions Per Second TransactionsPerSecond
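
The load profile above (250 concurrent users, 10 requests each) can be sketched in plain Python. The real tests were run with JMeter; this stand-in stubs out the HTTP call so the shape of the load is visible without a live cluster, and the api-gateway endpoint is deliberately left out:

```python
import concurrent.futures

USERS = 250             # concurrent users, as in the test above
REQUESTS_PER_USER = 10  # requests each user issues

def send_request(user_id: int, seq: int) -> int:
    # Placeholder for an HTTP GET through the api-gateway
    # (the real URL is cluster-specific); returns a fake 200 status.
    return 200

def run_user(user_id: int) -> int:
    # Each simulated user sends its requests sequentially and
    # counts how many succeeded.
    ok = 0
    for seq in range(REQUESTS_PER_USER):
        if send_request(user_id, seq) == 200:
            ok += 1
    return ok

with concurrent.futures.ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(run_user, range(USERS)))

total_ok = sum(results)
print(total_ok)  # 2500 requests in total
```

In JMeter the same profile is configured as a Thread Group with 250 threads and a loop count of 10.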

Number of users: 350 × 12 (12 requests per user, so 4200 requests in total)

  • Expected throughput at each microservice SummaryReport
  • Active threads over time ActiveThreadsOverTime
  • Response Times Over Time ResponseTimesOverTime
  • Transactions Per Second TransactionsPerSecond

Spike Testing

Number of users: 330

  • Thread Schedule ThreadsDetails
  • Expected throughput at each microservice SummaryReport
  • Active threads over time ActiveThreadsOverTime
  • Response Times Over Time ResponseTimesOverTime
  • Transactions Per Second TransactionsPerSecond
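
The spike thread schedule above can be sketched as a function of time: ramp the 330 threads up steeply, hold them at the peak, then release them. The ramp and hold durations below are illustrative, not the exact JMeter thread-schedule settings used in the test:

```python
def active_threads(t: float, threads: int = 330,
                   ramp_up: float = 10.0, hold: float = 60.0,
                   ramp_down: float = 10.0) -> int:
    """Number of active threads at time t (seconds) for a spike profile."""
    if t < 0:
        return 0
    if t < ramp_up:                      # steep ramp-up: the spike
        return int(threads * t / ramp_up)
    if t < ramp_up + hold:               # hold at peak load
        return threads
    if t < ramp_up + hold + ramp_down:   # release the threads
        remaining = ramp_up + hold + ramp_down - t
        return int(threads * remaining / ramp_down)
    return 0

print(active_threads(5))    # mid-ramp
print(active_threads(30))   # at peak: 330 threads
print(active_threads(100))  # after the spike: 0 threads
```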

Number of users: 1800

  • Thread Schedule ThreadsDetails

  • Expected throughput at each microservice SummaryReport

  • Active threads over time ActiveThreadsOverTime

  • Response Times Over Time ResponseTimesOverTime

  • Transactions Per Second TransactionsPerSecond

  • System’s capacity limits (at what load do you get significant failure rates): when we increased the number of users and requests per user, we found that the system starts failing at 4200 requests, i.e. 12 requests per user for 350 users. For spike testing, the system started failing at 1800 threads.

  • Improvements that can demonstrate increased capacity.

Understand fault tolerance

  • Why the system continues to run even with injected failures: the system was able to handle this number of requests thanks to Kubernetes load balancing across the replicas. But to support a larger scale, we need to implement autoscaling, which can bring up another instance if none is available or all are dead.
  • What is the impact on performance, compared to your previous load tests? To support more users, we need to introduce a message service so that the api-gateway does not carry as much load and the application can perform much better.
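
The fault-tolerance behaviour above can be illustrated with a small sketch: the Kubernetes Service load-balances across the 3 replicas, so as long as at least one replica is alive, a request that would hit a dead replica can be served by another one. The replica names and "alive" flags here are made up for illustration:

```python
# Hypothetical replica set: one replica has an injected failure.
REPLICAS = {"user-service-0": True,   # alive
            "user-service-1": False,  # injected failure
            "user-service-2": True}   # alive

def call_with_failover(replicas: dict) -> str:
    """Return the first live replica, or raise if all are dead."""
    for name, alive in replicas.items():
        if alive:
            return name
    # This is the case autoscaling would need to handle by
    # bringing up a new instance.
    raise RuntimeError("all replicas are dead")

print(call_with_failover(REPLICAS))  # "user-service-0"
```

Once every replica is dead, there is nothing left to route to, which is why the text above argues for autoscaling on top of plain load balancing.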

User experience and client-side performance

Users can get frustrated when many of them access the application at the same time, as it does not support a large number of concurrent users. The application should show a proper message while services are down and redirect users to customer support.
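
The suggested client-facing behaviour can be sketched as a tiny response handler: when a backend service is down, return a friendly message and a redirect to customer support instead of a raw error. The support route is a placeholder, not a real endpoint in the project:

```python
SUPPORT_URL = "/customer-support"  # hypothetical route

def render_response(service_up: bool, payload: str = "") -> dict:
    # Happy path: pass the backend payload through.
    if service_up:
        return {"status": 200, "body": payload}
    # Degraded path: proper message plus a pointer to support,
    # rather than a bare 503 page.
    return {"status": 503,
            "body": "The service is temporarily unavailable. "
                    "Please try again shortly.",
            "redirect": SUPPORT_URL}

print(render_response(False)["redirect"])  # "/customer-support"
```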