I'm working on building a NestJS API application using a monorepo approach to keep all my endpoints organized in one codebase. However, I'm facing a challenge when it comes to deploying this application to multiple AWS Lambdas, each representing a different module (like Auth, Users, Posts, etc.), and then mapping them to a unified API Gateway.
In my previous attempts, I've created separate NestJS projects for each module, which resulted in a maintenance nightmare due to shared code complexities. Now, I'm exploring the option of maintaining a single codebase for all modules. My goal is to deploy each module as an individual AWS Lambda function, while still keeping the codebase manageable.
So, here's my main question: Can I achieve this setup using the Serverless Framework? I want to have a single codebase for all the modules and deploy them to separate AWS Lambdas. Additionally, I'm aiming to integrate these Lambdas under a single API Gateway to provide a unified API access point.
Any insights, advice, or examples on how to structure my Serverless Framework configuration, how to manage the shared codebase efficiently, and how to set up the API Gateway to work with these multiple Lambdas would be greatly appreciated. Thank you in advance for your help!
Interesting. I'm trying to achieve the same. What I did in my scenario with Turborepo was install serverless-nest-monorepo, which does pretty much what you want if you check the docs. Then you can just integrate the command into your CI to deploy the lambda, or deploy from your machine. However, judging by the issues, it doesn't seem to work properly with nestjs 16...
There are also other tools, such as serverless-plugin-monorepo, but I hit a bug using it: dependencies of dependencies were not being loaded into the lambda function.
The major problem I'm having with deploying multiple NestJS applications from a monorepo is that all their dependencies live in the root node_modules, so a plain serverless deploy doesn't work because it doesn't package all the necessary dependencies. That's part of the reason the two plugins I listed above exist. You can try disabling hoisting, which should do the trick, but it will make building everything very slow.
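For example, with Yarn (v1) workspaces you can disable hoisting from the root package.json via the nohoist field, so every workspace gets its own node_modules (the workspace globs below are placeholders for your layout):

```json
{
  "private": true,
  "workspaces": {
    "packages": ["apps/*", "packages/*"],
    "nohoist": ["**"]
  }
}
```

With "nohoist": ["**"] nothing is hoisted at all, which is what makes serverless deploy find everything but also what makes installs and builds slower.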
You could also try using AWS CDK to manage these deploys; it might be easier than getting the serverless npm module to understand your monorepo.
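A rough CDK sketch of that idea, with one NodejsFunction per app behind a single shared RestApi (all names and entry paths here are placeholders for your monorepo layout, and NodejsFunction's esbuild bundling resolves hoisted deps from the root node_modules for you):

```typescript
import { App, Stack } from 'aws-cdk-lib';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { LambdaIntegration, RestApi } from 'aws-cdk-lib/aws-apigateway';

const app = new App();
const stack = new Stack(app, 'MonorepoStack');

// One bundled lambda per NestJS app in the monorepo
const authFn = new NodejsFunction(stack, 'AuthFn', {
  entry: 'apps/auth/src/lambda.ts',
  handler: 'handler',
});
const usersFn = new NodejsFunction(stack, 'UsersFn', {
  entry: 'apps/users/src/lambda.ts',
  handler: 'handler',
});

// A single API Gateway fronting both lambdas, each under its own prefix
const api = new RestApi(stack, 'Api');
api.root.addResource('auth').addProxy({ defaultIntegration: new LambdaIntegration(authFn) });
api.root.addResource('users').addProxy({ defaultIntegration: new LambdaIntegration(usersFn) });
```

This gets you the unified API Gateway directly, since both functions are routed from the same RestApi construct.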
After some tinkering, what worked for me was the following serverless.yml file inside each application of my turborepo:
```yaml
service: serverless-example

provider:
  name: aws
  runtime: nodejs18.x
  profile: your-profile-here
  region: us-east-1

plugins:
  - serverless-offline
  - serverless-plugin-optimize
  - serverless-plugin-include-dependencies

package:
  excludeDevDependencies: false # set to false, as suggested by serverless-plugin-include-dependencies
  patterns:
    - '!test/**'
    - '!insomnia/**'
    - '!docst/**'

functions:
  main:
    handler: ./dist/lambda.handler
    events:
      - http:
          cors: true
          path: '/'
          method: any
      - http:
          cors: true
          path: '{proxy+}'
          method: any
```
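For reference, the ./dist/lambda.handler that the config points at is the compiled output of a small bootstrap file. A common sketch of it, assuming @vendia/serverless-express and the usual src/lambda.ts entry (your file names may differ), looks like this:

```typescript
// src/lambda.ts — wraps the NestJS app in a Lambda handler
import { NestFactory } from '@nestjs/core';
import serverlessExpress from '@vendia/serverless-express';
import type { Handler } from 'aws-lambda';
import { AppModule } from './app.module';

let server: Handler;

async function bootstrap(): Promise<Handler> {
  const app = await NestFactory.create(AppModule);
  await app.init();
  // Hand the underlying Express instance to serverless-express
  const expressApp = app.getHttpAdapter().getInstance();
  return serverlessExpress({ app: expressApp });
}

export const handler: Handler = async (event, context, callback) => {
  // Cache the server across warm invocations so Nest only boots on cold starts
  server = server ?? (await bootstrap());
  return server(event, context, callback);
};
```

The {proxy+} event in the yaml above is what forwards every path to this single handler, so Nest's own router takes over from there.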
The most important plugin, I think, is serverless-plugin-include-dependencies. It decreases the bundle size by a lot, especially when you have shared dependencies in the packages folder.
And what I like to do about sharing dependencies is put the database models in the packages folder, and maybe some services too. In this scenario, each module becomes its own serverless application.
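For example, a shared models package under packages can be imported from any app; the package name and model below are placeholders for whatever you keep there:

```typescript
// apps/auth/src/auth.service.ts (illustrative)
// '@repo/database' would be a workspace package living in packages/database,
// listed in this app's package.json dependencies
import { UserModel } from '@repo/database';

export class AuthService {
  // every app reuses the same model definition instead of duplicating it
  async findByEmail(email: string): Promise<UserModel | null> {
    return UserModel.findOne({ where: { email } });
  }
}
```

That way each lambda only bundles the shared packages it actually imports, which is where serverless-plugin-include-dependencies earns its keep.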
With this setup, test it with serverless-offline first locally, running it in the folder of each application; make some calls to your lambda, then serverless deploy and everything should be all right.
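The loop above boils down to a few commands, run from each app's folder in the turborepo (build script name and sample route are illustrative):

```shell
npm run build            # compile the NestJS app into dist/
npx serverless offline   # serve the lambda locally, by default on http://localhost:3000

# in another terminal, exercise a route (serverless-offline prefixes the stage, e.g. /dev)
curl http://localhost:3000/dev/hello

npx serverless deploy    # once it looks good locally, ship it to AWS
```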
And the API Gateway can handle multiple lambda functions at the same time. Just make sure that their routes don't overlap, for example multiple lambdas exposing the same /login route.
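One way to avoid overlaps, and to land all the services on a single API Gateway, is to give each app its own path prefix and point it at a shared REST API via provider.apiGateway (the ids below are placeholders for your shared gateway's values):

```yaml
# apps/users/serverless.yml (illustrative)
provider:
  name: aws
  runtime: nodejs18.x
  apiGateway:
    restApiId: xxxxxxxxxx             # id of the shared REST API
    restApiRootResourceId: xxxxxxxxxx # id of its root resource

functions:
  main:
    handler: ./dist/lambda.handler
    events:
      - http:
          cors: true
          path: 'users/{proxy+}'      # auth app would use 'auth/{proxy+}', etc.
          method: any
```

With each service namespaced under its own prefix, two modules can both define a login handler internally without their gateway routes ever colliding.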