Building a Node.js Email Queue with Redis and BullMQ

5/2/2025 · Server Development · 8 min read

Modern Node.js applications often offload email sending to a background queue so that HTTP requests aren’t delayed by network calls. A common pattern is a Node.js email queue backed by Redis: email details (recipient, subject, template data, etc.) are enqueued as jobs, and dedicated workers process the queue asynchronously. Using Redis as the job store makes the queue reliable and scalable. In this post, we’ll walk through a TypeScript MailService example that integrates Nodemailer, EJS templates, BullMQ, and Redis. We’ll explain the architectural rationale, code flow, and benefits of decoupling email dispatch from the main app. Key themes include scalability, fault tolerance, and observability of the email queue.

Why Queue Emails?

Sending emails can be a slow or unreliable operation (SMTP, network latency, retries, etc.). If you do it synchronously in your API handlers, users will experience slow responses and you risk dropped messages. Queuing emails decouples this work from the request/response flow. As BullMQ’s documentation notes, message queues “decouple your application components” and help scale by distributing load across multiple workers. In practice, this means your API can enqueue an email job and immediately respond to the user, while a background worker actually handles the send.

Queuing also adds reliability. The job data is stored durably in Redis, and you can configure retries or delays if an SMTP server is busy or fails. This gives “high guarantees that the email will be sent”, since failed jobs can automatically be retried or handled by another worker. During traffic spikes, you can simply spin up more workers or even multiple servers to consume the queue. As one BullMQ tutorial emphasizes, you can “add huge amounts of mails to the queue” and process them in parallel by increasing worker count. In summary, queuing email sends makes your application more responsive and resilient under load.

Introducing BullMQ with Redis

BullMQ is a fast, robust job queue library for Node.js, built on Redis. It is designed for high performance (over 100k jobs/sec) and horizontal scalability. Because BullMQ stores all job state in Redis, you can run multiple worker processes across servers and they will all pull from the same queue. The library supports useful features like delayed jobs, retries, job priorities, and rate limiting.

BullMQ Snapshot: “BullMQ is a lightweight, robust, and fast NodeJS library for creating background jobs and sending messages using queues... backed by Redis, which makes it easy to scale horizontally”. It’s even used in production for email-sending services.

Using BullMQ, we define a Queue in our application (the producer) and one or more Workers (the consumers). Behind the scenes Redis holds a list of pending email jobs. This setup adds fault tolerance: if a worker crashes, the job stays in Redis and another worker can pick it up. BullMQ also emits events and metrics for observability. Recent updates introduced telemetry support to “track the performance of queues and workers in real-time”, which helps diagnose issues in production.

Code Flow Breakdown

Let’s examine how a TypeScript MailService might enqueue and process emails. We’ll break down the steps for adding jobs, processing them, rendering templates, and handling errors.

Queueing Email Jobs

The MailService creates a BullMQ queue (e.g. called "mailQueue") connected to Redis. Whenever you need to send an email, you push a job into this queue. For example:

await this.mailQueue.add('sendEmail', {
  to: user.email,
  subject: 'Welcome to MyApp!',
  template: 'welcome', // template name only; the worker appends .ejs when rendering
  templateData: { userName: user.name, loginUrl }
});

Here we call queue.add() with a job name ('sendEmail') and a payload object containing the recipient, subject, and any data needed to render the email template. This enqueues the email task in Redis for later processing. For reference, one BullMQ example shows adding jobs similarly (e.g. queue.add('send-simple', { to, subject, text })). The queued job stays in Redis until a worker processes it.
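To keep these enqueue calls type-safe, the payload shape can be described with a TypeScript interface and validated before it hits Redis. This is a minimal sketch; the interface and helper names here are assumptions, not taken from the boilerplate:

```typescript
// Hypothetical shape of the 'sendEmail' job payload, mirroring the
// fields used in the queue.add() call above.
interface MailJobData {
  to: string;
  subject: string;
  template: string;                      // template name, without the .ejs extension
  templateData: Record<string, unknown>; // variables interpolated by EJS
}

// Small guard that rejects obviously malformed payloads before enqueueing.
function buildMailJob(data: MailJobData): MailJobData {
  if (!data.to.includes('@')) {
    throw new Error(`Invalid recipient address: ${data.to}`);
  }
  if (!data.subject.trim()) {
    throw new Error('Email subject must not be empty');
  }
  return data;
}
```

With BullMQ’s generics the queue itself can also be typed (e.g. `Queue<MailJobData>`), so a malformed payload is rejected at compile time rather than discovered in the worker.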

Email Worker Processing

A Worker is then set up to consume jobs from the same queue. In BullMQ you can create a worker by pointing it to the queue name and providing a processor function:

const worker = new Worker(
  'mailQueue',
  async (job) => {
    const { to, subject, template, templateData } = job.data;
    // ... render template and send email ...
  },
  { connection: redisConnection, concurrency: 3 }
);

This worker will pull jobs off "mailQueue" and run the async processor function for each job, up to the specified concurrency. (In our code, we might store connection info and concurrency in a config.) Inside the processor, we extract job.data (the payload we enqueued). The worker function then handles sending the email. We might delegate this to another helper or do it inline. For example, the Taskforce BullMQ tutorial shows creating a Worker with a file-based processor and configurable concurrency. In our case, the function receives the job details and proceeds to render and send the email.
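The connection info and concurrency mentioned above can live in a single config object, typically driven by environment variables. A sketch with hypothetical defaults and variable names:

```typescript
// Hypothetical queue configuration; the env-var names and defaults are
// illustrative, not taken from the boilerplate.
const queueConfig = {
  connection: {
    host: process.env.REDIS_HOST ?? '127.0.0.1',
    port: Number(process.env.REDIS_PORT ?? 6379),
  },
  // How many email jobs a single worker process handles in parallel.
  concurrency: Number(process.env.MAIL_WORKER_CONCURRENCY ?? 3),
};
```

The worker above would then be constructed with `{ connection: queueConfig.connection, concurrency: queueConfig.concurrency }`, keeping deployment-specific values out of the code.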

Template Rendering

Our MailService uses EJS templates for HTML email bodies. In the worker, we take the template name and data from the job and render the HTML before sending. For example:

const emailHtml = await ejs.renderFile(
  path.join(__dirname, 'templates', `${template}.ejs`),
  templateData
);
await transporter.sendMail({ to, subject, html: emailHtml });

Here, template might be something like "welcome" or "reset-password", and templateData contains variables (e.g. username, links). The code calls ejs.renderFile() to produce the HTML string, then passes it to Nodemailer’s sendMail. This pattern matches common examples (e.g. see how an email body is rendered and sent using EJS and Nodemailer). The key point is that the heavy work of templating and contacting the SMTP server happens inside the queue worker, not in the main app.
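The render-then-send step can also be pulled into a small function that receives the renderer and transport as parameters, which makes it easy to unit-test with stubs. This is a sketch under assumed names; in the real service `render` would be `ejs.renderFile` and `send` would be Nodemailer’s `transporter.sendMail`:

```typescript
// Function types standing in for ejs.renderFile and Nodemailer's sendMail.
type RenderFn = (templatePath: string, data: Record<string, unknown>) => Promise<string>;
type SendFn = (opts: { to: string; subject: string; html: string }) => Promise<void>;

// Renders the job's template and hands the resulting HTML to the mail transport.
async function renderAndSend(
  job: { to: string; subject: string; template: string; templateData: Record<string, unknown> },
  render: RenderFn,
  send: SendFn
): Promise<void> {
  const html = await render(`templates/${job.template}.ejs`, job.templateData);
  await send({ to: job.to, subject: job.subject, html });
}
```

Because the dependencies are injected, the function can be exercised in tests with fake render/send implementations, without touching the filesystem or an SMTP server.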

Logging and Error Handling

Since the email job is running in a worker, we can add logging and retries. In the worker function, it’s good practice to wrap the send in a try/catch to handle any exceptions (or let BullMQ automatically handle retries based on job options). For overall job status, we attach listeners to the worker:

worker.on('completed', (job) => 
  console.log(`Email job ${job.id} completed`)
);
worker.on('failed', (job, err) => 
  console.error(`Email job ${job?.id} failed:`, err) // job can be undefined in BullMQ's failed event
);

These event handlers log success or failure for each job. The BullMQ docs show this pattern for queue events. Combined with BullMQ’s retry settings, this ensures fault tolerance: if a send fails (e.g. SMTP down), the job can be retried automatically. Observability is also improved: you can see exactly which jobs succeeded or failed, and even integrate BullMQ’s metrics/telemetry to monitor queue length and processing rate in real time.
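The retry behavior described above is configured through job options, either per job or as queue-wide defaults. A sketch of options matching BullMQ’s `JobsOptions` shape; the specific numbers here are assumptions:

```typescript
// Retry up to 3 times with exponential backoff (5s, then 10s, then 20s
// between attempts), and prune finished jobs so Redis does not grow unbounded.
const emailJobOptions = {
  attempts: 3,
  backoff: { type: 'exponential' as const, delay: 5_000 },
  removeOnComplete: 1_000, // keep at most the last 1000 completed jobs
  removeOnFail: 5_000,     // keep failed jobs around longer for inspection
};
```

These can be passed as the third argument to `queue.add('sendEmail', payload, emailJobOptions)`, or set once as `defaultJobOptions` when constructing the Queue so every email job inherits them.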

Benefits Recap

  • Decoupling: Offloading emails to a queue means API requests return immediately, and the email logic is isolated. This keeps your application responsive and better organized.

  • Scalability: You can enqueue thousands of emails and process them in parallel. Simply add more worker processes or servers to handle high volume. BullMQ’s Redis backend allows horizontal scaling across machines.

  • Reliability: Jobs are durably stored in Redis. You can configure retries and delays so that temporary failures don’t lose emails. As noted by BullMQ’s author, you gain “high guarantees that the email will be sent” thanks to retries and multiple workers.

  • Observability: BullMQ emits rich job events and now includes telemetry/metrics. Tools like Bull Board or BullMQ’s dashboard can show queued, active, completed, and failed jobs. This makes it easy to monitor email delivery health.

  • Flexibility: You get features like delayed scheduling (e.g. send a newsletter at 2am) and rate limiting (throttle sends to avoid provider limits). These advanced options are hard to implement correctly from scratch, but BullMQ provides them out of the box.
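The last two points map to concrete options: delayed scheduling is a job-level `delay`, and rate limiting is a worker-level `limiter`, both built into BullMQ. A sketch with assumed numbers:

```typescript
// Job-level delay: process this job roughly 8 hours from now
// (e.g. a newsletter scheduled for an off-peak window).
const delayedOptions = { delay: 8 * 60 * 60 * 1000 };

// Worker-level rate limit: at most 10 jobs per 1000 ms, matching BullMQ's
// { limiter: { max, duration } } worker option, to stay under provider limits.
const workerOptions = {
  concurrency: 3,
  limiter: { max: 10, duration: 1_000 },
};
```

`await mailQueue.add('sendEmail', payload, delayedOptions)` schedules the send, while passing `workerOptions` (together with the Redis connection) to `new Worker(...)` throttles how fast queued emails are dispatched.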

Conclusion and Takeaways

Using a Redis-backed BullMQ queue for email sends helps your Node.js application become more scalable and robust. Our MailService example shows how easy it is to enqueue email jobs (queue.add(...)) and process them in a dedicated worker. By rendering EJS templates and sending via Nodemailer inside the worker, the main app thread stays free. With retries and worker events, failed sends are handled gracefully and logged for visibility. In short, a Node.js email queue built on Redis and BullMQ lets you offload and monitor email dispatch, yielding a more performant, fault-tolerant system. With this pattern, you can effortlessly scale out your email sending capacity and gain peace of mind that messages won’t get lost even if external email services hiccup.

Sources: Concepts and code patterns are adapted from the BullMQ documentation and example tutorials. The BullMQ docs highlight how message queues “decouple components” and improve reliability, and a BullMQ email microservice tutorial notes the scalability and fault-tolerance benefits. The snippets above illustrate these ideas in practice.

If you're interested in seeing a complete, production-ready implementation of the email notification system with EJS templates, environment-driven configs, and a Redis-compatible queue-ready structure, you can explore my open-source Express boilerplate project:

📂 View the full MailService.ts file on GitHub

This codebase is actively maintained and includes many best practices around security, scalability, and clean architecture for Node.js backend services.

© 2026 Kuray Karaaslan. All rights reserved.