In this post, we will discuss the worker pattern and how to use it in microservices. Previously, I have covered communication and event-driven patterns in microservices.
What is the Worker Pattern?
Let’s start with a scenario. You have a microservice that processes business logic. For a new set of business requirements, you need to build a new feature in that microservice, and this feature will be resource-intensive because it processes a large set of data. One way to handle resource-intensive data processing is to do it asynchronously. There are different ways to implement this: you can create a job and push it to a queue system, or you can publish an event for another service to process the data. Irrespective of the approach, you will need a way to control the flow of this data.
With either approach, you want to avoid blocking your main service while these asynchronous events are processed. Let’s assume you went with the job approach; you can then use Bull Queue (a Redis-based queuing mechanism).
Bull Queue offers a processor for each job that your service adds to the queue. This processor can run in its own process and perform the job. For resource-intensive operations, however, it can still become a bottleneck and stop performing the way you want.
In such cases, you can create a standalone worker. This standalone worker runs on its own set of resources (for example, a Kubernetes pod) and processes the same job that your service added to the queue.
When to use the Worker Pattern?
Every use case is different, but a simple heuristic is to check how much CPU-heavy work the processor is expected to do. If the job from the queue is CPU-heavy, create a separate processor or worker. Within the same service, you can write a separate processor that executes the job in a separate process. This is also known as a sandboxed processor.
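As a quick sketch (assuming @nestjs/bull), a sandboxed processor is registered by passing a processor file path to registerQueue; Bull then runs that file in a forked child process. The file names below are placeholders for this illustration:

// Register a file-based (sandboxed) processor for the queue.
import { join } from 'path';
import { BullModule } from '@nestjs/bull';

BullModule.registerQueue({
  name: 'worker-pattern-queue',
  processors: [join(__dirname, 'processor.js')],
});

// processor.ts (compiled to processor.js): the default export runs in a forked child process.
import { Job, DoneCallback } from 'bull';

export default function (job: Job, cb: DoneCallback) {
  // CPU-heavy work happens here without blocking the service's event loop.
  cb(null, { processed: true });
}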
A worker, on the other hand, spins up as a standalone service on its own resources and executes the processor. Since it executes a single job, it does not interfere with other processes of the service. Another advantage of the worker pattern is the ability to scale resources horizontally. With a sandboxed processor, you might have to scale the whole service, and all the scaled-up resources are divided equally across its processes.
Example of Worker Pattern
Let’s look at a quick example of a worker pattern.
We have a simple endpoint that adds a job to a queue. This endpoint is part of a controller in a microservice and will look something like this:
import { Controller, Post, Body } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';

@Controller()
export class JobController {
  constructor(@InjectQueue('worker-pattern-queue') private queue: Queue) {}

  @Post()
  async createFileProcessingJob(@Body() data: unknown): Promise<void> {
    // Add a 'process-file' job to the queue; a worker picks it up later.
    await this.queue.add('process-file', data);
  }
}
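If needed, you can also pass job options when adding the job. These are standard Bull job options; the specific values below are only illustrative:

// Optional: control retries and cleanup per job (values are illustrative).
await this.queue.add('process-file', data, {
  attempts: 3,            // retry the job up to 3 times on failure
  backoff: 5000,          // wait 5 seconds between retries
  removeOnComplete: true, // remove finished jobs from Redis
});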
Now, this controller is part of app.module.ts, where we also have to register the queue and its Redis connection for Bull.
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';
// Path assumed for this example; adjust to where JobController lives.
import { JobController } from './job.controller';

@Module({
  imports: [
    BullModule.registerQueue({
      name: 'worker-pattern-queue',
      redis: {
        host: 'localhost',
        port: 6379,
      },
    }),
  ],
  controllers: [JobController],
})
export class AppModule {}
To create a standalone worker, we can create a separate module and use that module to create a new application context. For example, we create a new module, subscriber.module.ts, which has our consumer (the worker/processor) as a provider.
@Processor('worker-pattern-queue')
export class FileQueueProcessor {
}
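The class above is only a skeleton. A minimal handler for the 'process-file' job could look like the sketch below; the method name is a placeholder:

import { Processor, Process } from '@nestjs/bull';
import { Job } from 'bull';

@Processor('worker-pattern-queue')
export class FileQueueProcessor {
  @Process('process-file')
  async handleProcessFile(job: Job) {
    // job.data carries the payload added by the controller.
    // The resource-intensive file processing happens here.
  }
}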
The module will look like this:
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';
// Path assumed for this example; adjust to where the processor lives.
import { FileQueueProcessor } from './file-queue.processor';

@Module({
  imports: [
    BullModule.registerQueue({
      name: 'worker-pattern-queue',
      redis: {
        host: 'localhost',
        port: 6379,
      },
    }),
  ],
  providers: [FileQueueProcessor],
})
export class SubscriberModule {}
Then create a separate folder with its own main.ts. This should include:
const app = await NestFactory.createApplicationContext(SubscriberModule);
await app.init();
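Put together, a minimal main.ts for the worker could look like the sketch below (the import path for SubscriberModule is an assumption based on the file created above):

import { NestFactory } from '@nestjs/core';
// Path assumed; adjust to where subscriber.module.ts lives.
import { SubscriberModule } from './subscriber.module';

async function bootstrap() {
  // Create a headless application context: no HTTP server, only the
  // queue processor registered in SubscriberModule.
  const app = await NestFactory.createApplicationContext(SubscriberModule);
  await app.init();
}

bootstrap();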
Now you can run this main.ts as a separate worker in a Docker container inside a Kubernetes pod, and you can scale it horizontally as well.
Conclusion
In this post, I showed how to use the worker pattern in microservices to handle resource-intensive requirements.
Not related to this post, but my previous post was about handling flakiness.