We are leaving Russia. We are against the aggression and the war with Ukraine. It is a tragedy for our nations; it is a nightmare.

Blog

Hangfire.Pro.Redis 2.8.16

This maintenance release fixes the order in which enqueued jobs are displayed on the Queues and Enqueued Jobs pages in the Dashboard UI and returned from the Queues and EnqueuedJobs methods of the Monitoring API component. Jobs that will be dequeued first are now displayed first, as expected. The maximum number of commands per Lua command execution has also been decreased from 1M to 200K; the rest are moved to a subsequent command.
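For reference, the affected ordering can also be checked programmatically through the Monitoring API. Here is a minimal sketch in C#, assuming a Redis storage has already been configured; the "default" queue name and paging values are placeholders:

```csharp
using System;
using Hangfire;

class EnqueuedJobsOrderCheck
{
    static void Main()
    {
        // Assumes storage was configured elsewhere, for example:
        // GlobalConfiguration.Configuration.UseRedisStorage("localhost:6379");
        var monitoring = JobStorage.Current.GetMonitoringApi();

        // Queues() returns each queue together with its top enqueued jobs;
        // after this fix those jobs are listed in dequeue order.
        foreach (var queue in monitoring.Queues())
        {
            Console.WriteLine($"Queue '{queue.Name}': {queue.Length} job(s)");
        }

        // EnqueuedJobs pages through a single queue ("default", first 10 jobs);
        // the first entries are now the jobs that will be dequeued first.
        foreach (var job in monitoring.EnqueuedJobs("default", 0, 10))
        {
            Console.WriteLine($"{job.Key}: {job.Value.Job}");
        }
    }
}
```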


Hangfire.Pro.Redis 2.8.15

This release fixes a regression introduced in the previous version, 2.8.14: when multiple job storages are used, background jobs started to be processed one by one.
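For context, "multiple job storages" here means running one BackgroundJobServer per storage instance in the same process. Below is a minimal sketch of such a setup; the connection strings are placeholders, and the RedisStorage namespace is assumed from the Hangfire.Pro.Redis package:

```csharp
using System;
using Hangfire;
using Hangfire.Pro.Redis; // assumed namespace of RedisStorage

class MultipleStoragesSetup
{
    static void Main()
    {
        // Two independent Redis storages (connection strings are placeholders).
        var firstStorage = new RedisStorage("localhost:6379,db=1");
        var secondStorage = new RedisStorage("localhost:6379,db=2");

        // One processing server per storage. With the regression from 2.8.14,
        // workers in a setup like this started processing jobs one by one.
        using (new BackgroundJobServer(new BackgroundJobServerOptions(), firstStorage))
        using (new BackgroundJobServer(new BackgroundJobServerOptions(), secondStorage))
        {
            Console.WriteLine("Servers started. Press any key to exit...");
            Console.ReadKey();
        }
    }
}
```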


New Company, Hangfire OÜ

Starting from Mar 8, 2022, Hangfire is owned by Hangfire OÜ (a private limited company) registered in Estonia. This change relates only to the company's residence and structure, and it is still owned by me. I had been planning the relocation for the whole of last year, after reading Solzhenitsyn's The Gulag Archipelago, which is why I was able to do it so quickly. The transition is still in progress, but only because a change of residence is a slow and complex process; I am far away and will not return for any reason.


Hangfire.Pro.Redis 2.8.14

This release fixes the “too many results to unpack” regression that appeared in 2.8.X when using batch continuations with a large number of background jobs.
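For context, the affected scenario is a large batch followed by a batch continuation. Here is a minimal sketch using the Hangfire.Pro batches API; the job count and job bodies are purely illustrative:

```csharp
using System;
using Hangfire;

class LargeBatchWithContinuation
{
    static void Main()
    {
        // Assumes a job storage (e.g. Hangfire.Pro.Redis) is already configured.
        // A batch with many background jobs; the regression appeared with
        // batch continuations over large batches like this one.
        var batchId = BatchJob.StartNew(batch =>
        {
            for (var i = 0; i < 10000; i++)
            {
                var item = i;
                batch.Enqueue(() => Console.WriteLine("Processing item {0}", item));
            }
        });

        // The continuation runs only after every job in the parent batch
        // has finished.
        BatchJob.ContinueBatchWith(batchId, batch =>
        {
            batch.Enqueue(() => Console.WriteLine("All items processed"));
        });
    }
}
```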


Hangfire 1.7.28

This release contains important fixes for the SQL Server-based job storage so that it works better with sub-second polling (including TimeSpan.Zero for QueuePollingInterval) and properly sends heartbeats for long-running jobs even when the CLR’s Thread Pool is starved for long periods of time. It is also very likely that some problems related to high CPU usage and a high number of fetching queries occurring after deploying a new application version to IIS were fixed as well (they are almost impossible to catch, but the same workaround has already helped in the past).
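For reference, sub-second polling is opted into through the storage options. Below is a minimal sketch of such a configuration; the connection string is a placeholder:

```csharp
using System;
using Hangfire;
using Hangfire.SqlServer;

class SubSecondPollingSetup
{
    static void Main()
    {
        GlobalConfiguration.Configuration.UseSqlServerStorage(
            "Server=.;Database=Hangfire;Integrated Security=SSPI;", // placeholder
            new SqlServerStorageOptions
            {
                // TimeSpan.Zero is the sub-second polling case covered by
                // the fixes in this release.
                QueuePollingInterval = TimeSpan.Zero
            });

        using (new BackgroundJobServer())
        {
            Console.WriteLine("Hangfire Server started. Press any key to exit...");
            Console.ReadKey();
        }
    }
}
```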
