Upcoming book:

Modern .NET Development with Azure and DevOps

Dependency Injection with Hangfire Microservices

Introduction

I was searching for examples of using Hangfire with Dependency Injection (DI) via the .NET Core DI library (the Microsoft.Extensions.DependencyInjection* APIs). The official documentation does not, or at least did not, have any useful sample, and the only relevant StackOverflow question was not much help.

So this originally started as an answer to that StackOverflow question back in 2020, and I had been meaning to write a full sample ever since, which is what this post aims to provide.

In this post I will show you how to configure and use Hangfire across two separate applications: an ASP.NET Core client application that schedules jobs, and a console application that hosts the Hangfire server and contains the logic to run the jobs.

Setup

For this tutorial, we will build the following:

  1. A shared library (or NuGet package) that contains the interfaces and models used by both the API and the processor.
  2. A console application processor, which runs the jobs.
  3. And an API, which schedules the jobs.

For simplicity, all three projects will be in the same solution. In production code, I would normally have three different solutions (and Git repos) instead.

Shared Library

We will start with the shared library because it is simple and it is required by both applications.

We need a model class for passing whatever data we need to run the job:

public class CallApiExampleData
{
    public string Url { get; set; }
    public string SomeOtherImportantData { get; set; }
}

And we need the interface that will represent our contract:

public interface ICallApiExampleJob
{
    Task Execute(CallApiExampleData data);
}

Do notice that the methods in the interface can only return void or Task: since the jobs run asynchronously in the background, there is no caller waiting for data to be returned.
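
For illustration, the following kind of signature would not fit this model. This interface is hypothetical and not part of the sample; the point is that the API which schedules the job never awaits its execution, so it would never see the return value:

// Hypothetical counter-example; not part of the sample.
// The scheduling side only receives a job id, so a return value would never reach it.
public interface ICallApiExampleJobWithResult
{
    Task<HttpStatusCode> Execute(CallApiExampleData data);
}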

Processor

This application can be just a plain console application, which makes it a perfect candidate for deployment as a Docker container. The .NET Host takes care of keeping the application alive, while the Hangfire services take care of running the jobs. We need two important things here: the configuration for Hangfire, and the implementation of our interface.

Let's implement the interface first:

internal class CallApiExampleJob : ICallApiExampleJob
{
    private readonly HttpClient _httpClient;

    public CallApiExampleJob(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task Execute(CallApiExampleData data)
    {
        var response = await _httpClient.PostAsync(data.Url, new StringContent(data.SomeOtherImportantData, Encoding.UTF8, "application/json"));

        response.EnsureSuccessStatusCode();
    }
}

This simple implementation just uses the data that was saved when the job was scheduled to make an HTTP POST call to some URL.

Now that we have an implementation of the interface, we can configure Hangfire and the Host. Using the .NET 6 minimal hosting model, that looks like this:

using IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((context, services) =>
        services
            // Register the job implementation so Hangfire can resolve it from the container when a job runs.
            // For simplicity a new HttpClient is created here; in production you would typically use IHttpClientFactory.
            .AddScoped<ICallApiExampleJob, CallApiExampleJob>(sp => new CallApiExampleJob(new HttpClient()))
            // Configure Hangfire itself; connectionString is assumed to come from configuration (omitted for brevity).
            .AddHangfire(configuration =>
            {
                configuration.UseSqlServerStorage(connectionString);
            })
            // Register the background processing server that actually executes the jobs.
            .AddHangfireServer())
    .Build();

await host.RunAsync();

The .AddHangfire call allows us to configure Hangfire itself (for example, which storage backend it uses and how to connect to it), while .AddHangfireServer registers the services needed to run jobs (notably, the BackgroundJobServerHostedService hosted service).
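
As a side note, .AddHangfireServer also accepts a callback for BackgroundJobServerOptions, which is useful when you need to tune the processor. Here is a minimal sketch; the server name, worker count, and queue names are illustrative values, not part of the sample:

using IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((context, services) =>
        services
            .AddScoped<ICallApiExampleJob, CallApiExampleJob>(sp => new CallApiExampleJob(new HttpClient()))
            .AddHangfire(configuration => configuration.UseSqlServerStorage(connectionString))
            .AddHangfireServer(options =>
            {
                options.ServerName = "example-processor";          // shown in the dashboard's Servers page
                options.WorkerCount = Environment.ProcessorCount;  // how many jobs can run in parallel
                options.Queues = new[] { "default" };              // which queues this server processes
            }))
    .Build();

await host.RunAsync();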

Just like in ASP.NET Core applications, the host.RunAsync call takes care of starting the host with its services and waiting for the application to shut down.

API

Finally, the last piece we need is the API. This is arguably the biggest part of the sample, due to the amount of boilerplate code needed. For this sample, I will be using the .NET 6 minimal hosting model with controllers.

Let's start with the Hangfire setup. This should be obvious, but the backing storage and connection string must match the ones used by the Processor for the communication to work.

builder.Services.AddHangfire(configuration =>
{
    configuration.UseSqlServerStorage(connectionString);
});
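
For context, the surrounding Program.cs of the API could look roughly like this. The "HangfireDb" connection string name, the service registration, and the dashboard call are assumptions of this sketch (the dashboard requires the Hangfire.AspNetCore package); the sample in the repository may wire things up slightly differently:

var builder = WebApplication.CreateBuilder(args);

// Assumed connection string name; it must point to the same database the Processor uses.
var connectionString = builder.Configuration.GetConnectionString("HangfireDb");

builder.Services.AddHangfire(configuration =>
{
    configuration.UseSqlServerStorage(connectionString);
});

builder.Services.AddControllers();
builder.Services.AddScoped<IApiExampleService, ApiExampleService>();

var app = builder.Build();

app.UseHangfireDashboard(); // serves the dashboard at /hangfire
app.MapControllers();

app.Run();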

We need a Controller to accept the requests:

[Route("example")]
public class ApiExampleController : Controller
{
    private readonly IApiExampleService _apiExampleService;

    public ApiExampleController(IApiExampleService apiExampleService)
    {
        _apiExampleService = apiExampleService;
    }

    [HttpPost("schedule-simple")]
    public IActionResult ScheduleSimpleCall([FromBody]ScheduleSimpleCallModel model)
    {
        var jobId = _apiExampleService.ScheduleCall(model);

        return Ok(new { JobId = jobId });
    }

    [HttpPost("schedule-recurring")]
    public IActionResult ScheduleRecurringCall([FromBody] ScheduleRecurringCallModel model)
    {
        var recurrenceId = _apiExampleService.ScheduleRecurringCall(model);

        return Ok(new { RecurrenceId = recurrenceId });
    }
}

This has two endpoints just to show two different types of jobs in Hangfire: one for a "simple" job (a job that executes only once), and another for a "recurring" job (a job that is triggered based on a CRON expression).
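
The request models used by these endpoints are not shown above; based on how the controller and the service consume them, they look roughly like this (the property shapes are inferred from the sample):

// Inferred request models; the shapes follow how the service uses them below.
public class ScheduleSimpleCallModel
{
    public string SomeImportantData { get; set; }
}

public class ScheduleRecurringCallModel
{
    public string SomeImportantData { get; set; }
    public string CronExpression { get; set; }
}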

And the corresponding service:

public class ApiExampleService : IApiExampleService
{
    private readonly IBackgroundJobClient _backgroundJobClient;
    private readonly IRecurringJobManager _recurringJobManager;

    public ApiExampleService(IBackgroundJobClient backgroundJobClient, IRecurringJobManager recurringJobManager)
    {
        _backgroundJobClient = backgroundJobClient;
        _recurringJobManager = recurringJobManager;
    }

    public string ScheduleCall(ScheduleSimpleCallModel model)
    {
        var jobModel = new CallApiExampleData
        {
            Url = "http://localhost:5122/example/run-something",
            SomeOtherImportantData = model.SomeImportantData
        };

        var jobId = _backgroundJobClient.Schedule<ICallApiExampleJob>(x => x.Execute(jobModel), TimeSpan.FromSeconds(2));

        return jobId;
    }

    public string ScheduleRecurringCall(ScheduleRecurringCallModel model)
    {
        var jobModel = new CallApiExampleData
        {
            Url = "http://localhost:5122/example/run-something",
            SomeOtherImportantData = model.SomeImportantData
        };

        var recurrenceId = Guid.NewGuid().ToString();
        _recurringJobManager.AddOrUpdate<ICallApiExampleJob>(recurrenceId, x => x.Execute(jobModel), model.CronExpression);

        return recurrenceId;
    }
}

IBackgroundJobClient and IRecurringJobManager are services that the AddHangfire call registers for us in the DI container. In the case of simple jobs, Hangfire generates the job id for us; but for recurrences, we need to provide the id ourselves.
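
Because both ids are returned to the caller, they can later be used to manage the scheduled work. As a sketch, hypothetical additions to ApiExampleService (not part of the sample) could look like this:

// Hypothetical methods; not part of the original sample.
public void CancelCall(string jobId)
{
    // Moves the job to the Deleted state if it has not completed yet.
    _backgroundJobClient.Delete(jobId);
}

public void TriggerRecurringCallNow(string recurrenceId)
{
    // Enqueues the recurring job immediately, outside its CRON schedule.
    _recurringJobManager.Trigger(recurrenceId);
}

public void RemoveRecurringCall(string recurrenceId)
{
    // Removes the recurrence so it will not fire again.
    _recurringJobManager.RemoveIfExists(recurrenceId);
}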

Seeing the results

That, more or less, sums up everything that is needed to have .NET microservices using Hangfire to schedule (and offload) jobs.

If we run the sample solution (follow the GitHub link below) and schedule a recurring job like this (the CRON expression * * * * * fires every minute):

{
  "someImportantData": "Hello",
  "cronExpression": "* * * * *"
}

We will get back a response like this:

{
  "recurrenceId": "99d99b13-ed6b-4715-b837-2bae44317aa4"
}

And, we can confirm in the Hangfire dashboard that the recurrence was scheduled (and we can trigger it or delete it):

Hangfire recurring jobs list

We can also look at the real-time graph and execute the endpoint to trigger a simple job, which looks like this:

Hangfire dashboard

Summary

This post should give you a good idea of how to use Hangfire to schedule jobs as well as to offload their processing to the background. This can be expanded well beyond this simple sample, but it should provide a good starting point for using Hangfire with Dependency Injection, allowing for unit testing and a good separation of concerns.
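
As an example of the unit-testing angle, the job implementation can be tested without Hangfire at all, because its only dependency (HttpClient) is injected. Here is a minimal sketch, assuming xUnit and that the internal implementation is made visible to the test project (for example via InternalsVisibleTo):

// Sketch of a unit test for CallApiExampleJob; the test framework (xUnit) and names are assumptions.
public class CallApiExampleJobTests
{
    private sealed class StubHandler : HttpMessageHandler
    {
        public HttpRequestMessage? LastRequest { get; private set; }

        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            LastRequest = request;
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK));
        }
    }

    [Fact]
    public async Task Execute_posts_the_data_to_the_configured_url()
    {
        var handler = new StubHandler();
        var job = new CallApiExampleJob(new HttpClient(handler));

        await job.Execute(new CallApiExampleData
        {
            Url = "http://localhost/example/run-something",
            SomeOtherImportantData = "{ \"hello\": \"world\" }"
        });

        Assert.Equal(HttpMethod.Post, handler.LastRequest?.Method);
        Assert.Equal("http://localhost/example/run-something", handler.LastRequest?.RequestUri?.ToString());
    }
}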

If you want to see the final code or run the sample, I provide a solution that uses SQL Server LocalDB for storage in my GitHub repository: GitHub repository.