
Odd request, but I am running into a problem. I run some logic in the onModuleDestroy lifecycle hook of a Nest server, but every time I make a code change (running ng s api), the new server starts up faster than the old server takes to shut down, so I always end up getting Starting inspector on localhost:7777 failed: address already in use. Anyone know a way around this?


I had a similar issue when running dev mode inside Docker. I never got to the root cause, but I disabled inspection since I did not use it.

Ah good call, I can do that since I don’t use it either.

Thanks!

Try killing the old process first: run lsof -i :7777 to get the PID, then kill that PID?

It is ng serve that is restarting the process on code change.

What do I set to disable inspection?

"serve": { "builder": "@nrwl/node:execute", "options": { "inspect": false, "buildTarget": "notebook:build" } }


I think this is the place where I set it

You got it! That worked, thanks a lot.


And that fixed another issue I had too, sweet.

Which issue was that?

I am hooking into SIGTERM to tell when the server is shutting down, and I do not allow it to shut down until a list of promises, added and removed by async operations, has finished resolving. We recently ran into an issue where our k8s cluster terminated pods while scaling down, and some of those pods were mid async operation, so the operations never finished. Now when k8s terms a pod, the Nest server only shuts down once its async ops finish.

Nest has support for listening for SIGTERM and holding shutdown until a promise is resolved, so I'm leveraging that.
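The underlying pattern (which Nest wraps with enableShutdownHooks() and its lifecycle hooks) can be sketched framework-free; the names here are illustrative, not the poster's actual code:

```typescript
// Minimal sketch of "don't exit until tracked async ops finish".
// In Nest, app.enableShutdownHooks() plus an onModuleDestroy() or
// beforeApplicationShutdown() hook gives you the same hook points.

const pending = new Set<Promise<unknown>>();

// Register an async op; it removes itself once it settles
// (both handlers delete, so rejections are not left unhandled here).
function track<T>(op: Promise<T>): Promise<T> {
  pending.add(op);
  op.then(
    () => pending.delete(op),
    () => pending.delete(op),
  );
  return op;
}

process.once("SIGTERM", async () => {
  // Hold shutdown until every tracked op has settled.
  await Promise.allSettled([...pending]);
  process.exit(0);
});
```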

For some reason, when the inspector wasn't starting correctly, the onModuleDestroy lifecycle hook wasn't called :man-shrugging:

I see, that looks like quite an edge case :slightly_smiling_face: good that this fixes it

Ya! Not sure why that would have been causing it. Something with the node process, I guess. Either way, glad I didn't need to debug that.

I can imagine!

Have you considered/would it make sense using a queue to keep track of the jobs that need to be done? Or is this already backed by a queue?

Ya I’ve thought about it. Haven’t implemented that yet though. It is something I would like to do soon.

I built a more general solution, so now there is a service we can use to register ops the server needs to wait on before shutting down.
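A sketch of what such a service could look like (hypothetical names and shape; the thread doesn't show the actual implementation):

```typescript
// Hypothetical pending-ops service: register async work anywhere in the
// app, then drain it from a shutdown hook before letting the process exit.
class PendingOpsService {
  private readonly ops = new Set<Promise<unknown>>();

  // Add an op; it deregisters itself when it settles (resolve or reject).
  track<T>(op: Promise<T>): Promise<T> {
    this.ops.add(op);
    op.then(
      () => this.ops.delete(op),
      () => this.ops.delete(op),
    );
    return op;
  }

  // Called from the shutdown hook: loops in case new ops are tracked
  // while earlier ones are still settling.
  async drain(): Promise<void> {
    while (this.ops.size > 0) {
      await Promise.allSettled([...this.ops]);
    }
  }
}
```

In a Nest app this class would be a provider, and drain() would be awaited from onModuleDestroy or beforeApplicationShutdown.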

I have some APIs working with Bull; feel free to shoot me a message if you have any questions.

Great, I definitely will. As we scale out, I will definitely be implementing a queue for some of our ops. We are still in the process of breaking our API out into Nx libs and different APIs, both internal and external, so once I figure out the architecture we want, I'll look more into it.

Yeah, it's probably a good idea not to add it too soon in the process, but keep it in the back of your mind while designing.