Identifying the Critical Paths

A critical path is typically a user experience that is critical to the operation of your business. Examples of user requests that follow critical paths include ordering a taco, paying for your child’s Christmas present, donating to a charitable cause, or tracking a parcel. If these requests fail or don’t work as expected, the service your business offers to its consumers suffers.

Identifying the critical paths in your application can help you decide how to apply the serverless square of balance and focus your engineering resources.

Critical paths

Your users are usually present at some stage of a request’s journey along a critical path. These requests are typically time-sensitive and expect a synchronous response. When it comes to critical paths, recovery from failure (or fault tolerance) is a less viable strategy for supporting the quality of your application: retrying requests could increase latency to an unacceptable level, and once the user is no longer present to give explicit permission, your ability to retry these operations is diminished.

The operational quality of critical paths should be primarily supported through extensive test coverage and alerting. You must ship as few bugs as possible to these microservices.
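Alerting on a critical path can be as lightweight as a CloudWatch alarm on the error metric of the Lambda function serving the request. The following is a minimal sketch using boto3; the function name, alarm name, and SNS topic ARN are placeholders for illustration:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on any invocation error for a (hypothetical) critical-path function.
cloudwatch.put_metric_alarm(
    AlarmName="process-payment-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "process-payment"}],
    Statistic="Sum",
    Period=60,  # evaluate error counts every minute
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    # Placeholder SNS topic that notifies the on-call engineer.
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:critical-path-alerts"],
)

Comparable alarms on latency (the Duration metric) and on throttles are equally useful for these functions.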

The topic of load testing may seem redundant when it comes to serverless workloads. After all, you have chosen serverless for scalability. Yet, while your APIs and Lambda functions will usually surprise you with their effortless ability to scale to your spikiest traffic, it is still very worthwhile to conduct a series of tests that put your application under various load profiles. Load testing your critical paths in particular is essential before any user events of significant scale.
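Before reaching for a dedicated tool such as Artillery or k6, a short script can be enough to put an endpoint under a ramped load profile. The following is a minimal sketch in Python; the endpoint URL and the concurrency levels are assumptions you would replace with your own traffic predictions:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical critical-path endpoint; substitute your own.
URL = "https://api.example.com/orders"

def call_endpoint(_):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as response:
            status = response.status
    except Exception:
        status = "error"
    return status, time.monotonic() - start

# Ramp concurrency upward to mimic a spiky traffic profile.
for concurrency in (10, 25, 50):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(call_endpoint, range(concurrency * 10)))
    errors = sum(1 for status, _ in results if status != 200)
    p95 = sorted(duration for _, duration in results)[int(len(results) * 0.95)]
    print(f"concurrency={concurrency} requests={len(results)} "
          f"errors={errors} p95={p95:.2f}s")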

You should analyze your predicted traffic and usage patterns and design performance test scenarios based on these predictions and historical data. Pay particular attention to integration points between different AWS managed services where usage volume quotas apply (see Chapter 8) and any connections between your application and third-party APIs or internal downstream systems that may not be capable of the same scalability as your application.
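Where quotas apply, it helps to confirm the relevant limits before a load test rather than discovering them during one. The following is a minimal sketch using the boto3 Service Quotas API; the list of service codes is an assumption and should reflect the managed services your critical paths actually depend on:

import boto3

quotas = boto3.client("service-quotas")

# Hypothetical list of services used along this application's critical paths.
SERVICE_CODES = ["lambda", "apigateway", "dynamodb"]

for service_code in SERVICE_CODES:
    paginator = quotas.get_paginator("list_service_quotas")
    for page in paginator.paginate(ServiceCode=service_code):
        for quota in page["Quotas"]:
            adjustable = " (adjustable)" if quota["Adjustable"] else ""
            print(f"{service_code}: {quota['QuotaName']} = {quota['Value']}{adjustable}")

Any quota flagged as adjustable can be raised through a quota increase request ahead of the expected traffic peak.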
