We’re using AWS AppSync as a “back-end for front-end” solution to provide a unified GraphQL API and real-time notifications (subscriptions); its Resolvers call our REST Upstreams:
As we continued to scale, we saw that we were putting more and more pressure on our Upstream, which also increased our error count, so it was time to reduce the number of Upstream calls.
First, we had to select a use case that can be cached: we can’t cache a Resolver that requires fresh data.
After selecting our ideal use case, we worked with our Product team to validate the pros (absorb more customers) and the cons (less fresh data, TTL defined together). From there, we agreed to implement Pre-Resolver Caching for this specific use case.
We followed the AWS documentation for the implementation. TL;DR: we defined the cache key and TTL of our cache, and our infrastructure now looks like this:
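The cache key and TTL are defined on the Resolver itself through its `CachingConfig`. A minimal sketch in AWS SAM, assuming a hypothetical `Query.getUser` resolver; the API, data source, and field names are illustrative:

```yaml
GetUserResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    ApiId: !GetAtt AppSyncApi.ApiId          # illustrative API resource
    TypeName: Query
    FieldName: getUser
    DataSourceName: !GetAtt UserDataSource.Name
    CachingConfig:
      Ttl: 300                               # TTL agreed with the Product team
      CachingKeys:
        - $context.arguments.id              # cache key: one entry per user id
```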
If you’re using `BatchInvoke`, you should disable it and support both `BatchInvoke` and the unit invocation during your next rollout. Supporting both is important so you don’t have downtime in your application during that phase. You can remove that support on the rollout + 1.
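The dual-support phase can be sketched as a Lambda handler that accepts both payload shapes; this is a minimal illustration, with hypothetical names, since the real event shape and resolver logic depend on your data source:

```python
def resolve_one(event):
    """Placeholder for the actual resolver logic calling the Upstream."""
    return {"id": event["arguments"]["id"]}


def handler(event, context=None):
    # BatchInvoke delivers a list of resolver events,
    # while the unit invocation delivers a single event dict.
    if isinstance(event, list):
        return [resolve_one(e) for e in event]
    return resolve_one(event)
```

Once the caching rollout is complete and `BatchInvoke` is re-enabled (or retired), the branch handling the other shape can be removed.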
If you’re using AWS SAM, you’ll be able to configure caching and just use a `Condition` on the `AWS::AppSync::ApiCache` resource to enable and disable it:
```yaml
Conditions:
  IsSandbox: !Equals [!Ref Stage, 'sandbox']

appSyncCache: # First step: enable on the Sandbox only for testing
  Condition: IsSandbox
  Type: AWS::AppSync::ApiCache
```
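Filled out, the `AWS::AppSync::ApiCache` resource might look like the following; the cache instance size and TTL here are illustrative assumptions, not our production values:

```yaml
appSyncCache:
  Condition: IsSandbox
  Type: AWS::AppSync::ApiCache
  Properties:
    ApiId: !GetAtt AppSyncApi.ApiId          # illustrative API resource
    ApiCachingBehavior: PER_RESOLVER_CACHING # cache only opted-in Resolvers
    Type: SMALL                              # cache instance size (assumption)
    Ttl: 300                                 # default TTL in seconds (assumption)
    AtRestEncryptionEnabled: true
    TransitEncryptionEnabled: true
```

With `PER_RESOLVER_CACHING`, only Resolvers that declare a `CachingConfig` are cached, which matches the use-case-by-use-case approach described above.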
When it’s ready for Production, we recommend enabling Pre-Resolver Caching on all environments so every environment gives you the same experience as Production. As a result, you’ll be able to reproduce possible caching issues and adapt as you go.
Number of invocations
Observation: Decreased by 75% (dotted is the previous week); we’ve also observed a large decrease in errors from the Resolver (fewer invocations)
Response time (P95)
Observation: Increased by 50% (dotted is the previous week); this is mainly because AWS AppSync checks the cache internally, which adds latency
Upstream (called by several Resolvers)
Number of calls
Observation: Decreased by 40% (dotted is the previous week)
Response time (P95)
Observation: Decreased by 30% (dotted is the previous week)
Pre-Resolver Caching is a low-code solution that helps us scale, validated use case by use case in direct exchanges with our Product team.
We haven’t explored Cache Invalidation yet because the use cases we selected don’t require it. We definitely recommend enabling Pre-Resolver Caching on your AWS AppSync API.
This article is a part of our Tech Team Stories series, owned by Aircall’s team of engineers and developers. Check out some of their latest editions to learn more:
Published on January 2, 2024.