Benchmarks

Trax.Core adds overhead to every train invocation. This page presents honest numbers so you can make informed decisions about where it's appropriate to use.

All benchmarks use BenchmarkDotNet and are located in tests/Trax.Core.Tests.Benchmarks/.

Test Environment

BenchmarkDotNet v0.14.0, CachyOS
AMD Ryzen 7 7840U w/ Radeon 780M Graphics, 1 CPU, 16 logical and 8 physical cores
.NET SDK 10.0.103
  [Host] : .NET 10.0.3, X64 RyuJIT AVX-512F+CD+BW+DQ+VL+VBMI

What's Being Measured

Three execution modes are compared for identical workloads:

| Mode | Description |
|------|-------------|
| Serial | Plain function calls — no framework, no abstractions |
| BaseTrain | `Train<TIn, TOut>` — the core Chain/Resolve pipeline with Either error handling |
| ServiceTrain (no effects) | `ServiceTrain<TIn, TOut>` — full DI-resolved train with effect runner lifecycle, but no effect providers registered |

Train Overhead

How much does Trax.Core cost for different kinds of work?

| Method | Mean | Allocated |
|--------|------|-----------|
| Serial — Add 1 | 0.23 ns | |
| BaseTrain — Add 1 | 1,564 ns | 3,688 B |
| ServiceTrain — Add 1 | 7,061 ns | 7,176 B |
| Serial — Add 3 (3 junctions) | 0.23 ns | |
| BaseTrain — Add 3 | 1,966 ns | 4,536 B |
| ServiceTrain — Add 3 | 7,696 ns | 8,024 B |
| Serial — Transform (DTO → entity) | 19.7 ns | 152 B |
| BaseTrain — Transform | 1,340 ns | 1,648 B |
| ServiceTrain — Transform | 6,889 ns | 5,232 B |
| Serial — Simulated I/O (3× Task.Yield) | 1,053 ns | 112 B |
| BaseTrain — Simulated I/O | 3,747 ns | 4,992 B |
| ServiceTrain — Simulated I/O | 9,516 ns | 8,479 B |

Reading the Numbers

For trivial arithmetic, the framework overhead looks enormous in relative terms — a BaseTrain is ~6,900× slower than input + 1. But that comparison is misleading because input + 1 completes in a fraction of a nanosecond.

In absolute terms:

  • BaseTrain adds roughly 1.5 μs of fixed overhead per invocation.
  • ServiceTrain (no effects) adds roughly 7 μs per invocation, covering DI scope creation and effect runner lifecycle.

Once the junctions do real work, the overhead shrinks dramatically. With simulated I/O (Task.Yield), the BaseTrain is only 3.6× the serial cost instead of 6,900×.
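Both ratios can be re-derived directly from the measured means in the overhead table above; a quick sanity check in Python (numbers copied from this page):

```python
# Measured means from the "Train Overhead" table, in nanoseconds.
serial_add = 0.23       # Serial — Add 1
basetrain_add = 1_564   # BaseTrain — Add 1

serial_io = 1_053       # Serial — Simulated I/O (3× Task.Yield)
basetrain_io = 3_747    # BaseTrain — Simulated I/O

# Relative overhead collapses as each junction does more real work.
add_ratio = basetrain_add / serial_add   # thousands of times slower
io_ratio = basetrain_io / serial_io      # only a few times slower

print(f"Add: {add_ratio:,.0f}x, I/O: {io_ratio:.1f}x")
```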

For a train junction that makes a database call (~1–10 ms) or an HTTP request (~50–500 ms), the 1.5–7 μs framework overhead is well under 1% of total execution time, and for anything slower than a fast local query it drops below 0.1%.
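To make that concrete, here is a small sketch computing the overhead's share of total execution time for two illustrative call durations (the 1 ms and 100 ms figures are example values, not benchmark results):

```python
overhead_us = 7.0         # worst case: ServiceTrain fixed overhead, in microseconds
db_call_us = 1_000.0      # illustrative 1 ms database call
http_call_us = 100_000.0  # illustrative 100 ms HTTP request

# Fraction of the total invocation spent inside the framework.
db_share = overhead_us / (db_call_us + overhead_us)
http_share = overhead_us / (http_call_us + overhead_us)

print(f"DB call: {db_share:.3%}, HTTP call: {http_share:.4%}")
```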

Scaling with Junction Count

How does overhead grow as you chain more junctions?

| Junctions | Serial | BaseTrain | ServiceTrain | Base Overhead/Junction | Effect Overhead/Junction |
|-----------|--------|-----------|--------------|------------------------|--------------------------|
| 1 | 4.7 ns | 1,630 ns | 7,186 ns | | |
| 3 | 4.7 ns | 1,987 ns | 7,622 ns | ~179 ns | ~218 ns |
| 5 | 5.1 ns | 2,468 ns | 8,079 ns | ~210 ns | ~223 ns |
| 10 | 10.7 ns | 3,440 ns | 9,016 ns | ~201 ns | ~203 ns |

Each additional junction adds roughly 200 ns of overhead in both train modes. This covers junction instantiation, type mapping, and Either propagation through the chain.
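The per-junction columns in the scaling table are just the slope relative to the single-junction baseline; a quick re-derivation from the measured means on this page:

```python
# (junctions, BaseTrain ns, ServiceTrain ns) from the scaling table.
rows = [(1, 1_630, 7_186), (3, 1_987, 7_622), (5, 2_468, 8_079), (10, 3_440, 9_016)]

n1, base1, svc1 = rows[0]  # single-junction baseline
for n, base, svc in rows[1:]:
    extra = n - n1  # junctions added beyond the baseline
    base_per = (base - base1) / extra
    svc_per = (svc - svc1) / extra
    print(f"{n} junctions: ~{base_per:.0f} ns/junction (base), "
          f"~{svc_per:.0f} ns/junction (service)")
```

Both modes converge on roughly 200 ns per additional junction, which is why the text quotes a single figure.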

Memory Scaling

| Junctions | BaseTrain | ServiceTrain |
|-----------|-----------|--------------|
| 1 | 3,688 B | 7,176 B |
| 3 | 4,536 B | 8,024 B |
| 5 | 5,384 B | 8,872 B |
| 10 | 7,720 B | 11,352 B |

Each additional junction allocates roughly 424 B (BaseTrain) or 464 B (ServiceTrain).
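These per-junction figures are slopes over the memory table; note that the exact value shifts slightly depending on which rows you difference (the growth is not perfectly linear). A quick cross-check using the endpoints:

```python
# (junctions, BaseTrain bytes, ServiceTrain bytes) from the memory table.
rows = [(1, 3_688, 7_176), (3, 4_536, 8_024), (5, 5_384, 8_872), (10, 7_720, 11_352)]

# Slope between the first and last rows, in bytes per extra junction.
span = rows[-1][0] - rows[0][0]
base_slope = (rows[-1][1] - rows[0][1]) / span
svc_slope = (rows[-1][2] - rows[0][2]) / span
print(f"base ~{base_slope:.0f} B/junction, service ~{svc_slope:.0f} B/junction")
```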

Where the Overhead Comes From

| Source | Approximate Cost |
|--------|------------------|
| Train base class instantiation + Either wrapping | ~1.3 μs |
| Per-junction: type resolution, `Chain<T>` dispatch, Either bind | ~200 ns/junction |
| DI scope creation (CreateScope) | ~1 μs |
| Effect runner lifecycle (initialize + save, no providers) | ~4.5 μs |
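As a sanity check, summing these components roughly reconstructs the measured single-junction totals from the scaling table (1,630 ns for BaseTrain, 7,186 ns for ServiceTrain):

```python
# Component costs from the table above, in nanoseconds.
base_fixed = 1_300      # Train instantiation + Either wrapping
per_junction = 200      # type resolution, Chain<T> dispatch, Either bind
di_scope = 1_000        # CreateScope
effect_runner = 4_500   # effect runner initialize + save, no providers

base_total = base_fixed + 1 * per_junction             # vs measured 1,630 ns
service_total = base_total + di_scope + effect_runner  # vs measured 7,186 ns
print(base_total, service_total)
```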

Guidance

Trax.Core is not designed for hot-path, sub-microsecond operations. It's designed for business train orchestration where each junction does meaningful work — database queries, API calls, file I/O, domain logic.

Use Trax.Core when:

  • Junctions perform I/O or non-trivial computation (the ~7 μs overhead is noise)
  • You value error propagation, observability, and composability over raw throughput
  • You're building trains that run at request-level granularity (tens to hundreds per second), not tight inner loops

Don't use Trax.Core for:

  • Per-element processing in large collections (use LINQ or loops)
  • Anything that needs to run millions of times per second
  • Pure computation where every nanosecond matters

Running the Benchmarks Yourself

```shell
cd tests/Trax.Core.Tests.Benchmarks/

# Run all benchmarks
dotnet run -c Release -- --filter '*'

# Run a specific suite
dotnet run -c Release -- --filter '*TrainOverhead*'
dotnet run -c Release -- --filter '*Scaling*'

# List available benchmarks
dotnet run -c Release -- --list flat
```

Results will vary by hardware. The relative ratios between serial, base, and effect trains are more meaningful than the absolute numbers.