#[repr(align(64))]
pub struct FeedStats {
pub messages_processed: u64,
pub messages_per_second: f64,
pub avg_process_latency_ns: u64,
pub max_process_latency_ns: u64,
pub p99_process_latency_ns: u64,
pub dropped_messages: u64,
pub last_update_time: u64,
pub memory_usage_bytes: usize,
/* private fields */
}
Performance statistics for feed processing in HFT applications
This structure provides comprehensive statistics for monitoring, tuning, and debugging high-frequency trading data feeds. It tracks latency, throughput, and resource usage metrics that are critical for HFT operations.
Key features:
- Zero-allocation percentile calculations using fixed-size arrays
- Efficient EWMA (Exponentially Weighted Moving Average) for smooth metrics
- Cache-line alignment for optimal CPU cache efficiency
- Detailed memory usage tracking for resource monitoring
- Thread-safe and lock-free metrics updates
§Performance Characteristics
The statistics collection is designed to have minimal impact on the critical path:
- O(1) update operations for most metrics
- O(n log n) for percentile calculations but limited to small fixed-size arrays
- Zero heap allocations during normal operation
- Minimal cache contention through 64-byte alignment
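For example, the cache-line alignment implied by the #[repr(align(64))] attribute in the declaration above can be checked directly:
use rusty_feeder::feeder::FeedStats;
// repr(align(64)) raises the struct's alignment to a full 64-byte cache line.
assert_eq!(core::mem::align_of::<FeedStats>(), 64);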
§Example Usage
use rusty_feeder::feeder::FeedStats;
// Create new statistics tracker
let mut stats = FeedStats::default();
// Update with a latency sample
stats.add_latency_sample(250); // 250 nanoseconds
// Track memory usage
stats.update_memory_usage(1024); // 1KB
// Print current statistics
println!("Avg latency: {}ns, P99: {}ns, Max: {}ns",
stats.avg_process_latency_ns,
stats.p99_process_latency_ns,
stats.max_process_latency_ns);Fields§
messages_processed: u64
Total messages processed
messages_per_second: f64
Messages processed per second (smoothed)
avg_process_latency_ns: u64
Average processing latency (nanoseconds)
max_process_latency_ns: u64
Maximum processing latency observed (nanoseconds)
p99_process_latency_ns: u64
99th percentile processing latency (nanoseconds)
dropped_messages: u64
Total dropped messages
last_update_time: u64
Last update timestamp (nanoseconds)
memory_usage_bytes: usize
Memory usage for this feed (bytes)
Implementations§
impl FeedStats
pub fn add_latency_sample(&mut self, latency_ns: u64)
Add a new latency sample and update all latency statistics
This method efficiently updates all latency-related metrics in one call:
- Exponentially Weighted Moving Average (EWMA) for stable average latency
- Maximum observed latency
- 99th percentile latency using a rolling window
Latency metrics are critical for high-frequency trading systems where microsecond or even nanosecond differences can impact trading performance.
§Parameters
latency_ns - The measured latency in nanoseconds
§Performance Characteristics
- Updating avg and max latency is O(1)
- P99 calculation is O(n log n) but is only performed when enough samples are collected
- Uses fixed-size arrays (zero heap allocation)
- The implementation uses insertion sort, which is more efficient than quicksort for small arrays of around 100 elements (see the sketch below)
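The following is a minimal sketch of how such an update could be structured, assuming a hypothetical 100-sample rolling window and an assumed EWMA weight; the stand-in struct and field names below are illustrative, not the crate's actual private fields:
// Illustrative stand-in for FeedStats' latency tracking; the names, the window
// size of 100, and the EWMA weight are assumptions made for this sketch.
struct LatencyStatsSketch {
    avg_process_latency_ns: u64,
    max_process_latency_ns: u64,
    p99_process_latency_ns: u64,
    latency_window: [u64; 100], // fixed-size window: zero heap allocation
    window_index: usize,        // next slot to overwrite
    samples: usize,             // valid samples collected so far
}

fn add_latency_sample_sketch(stats: &mut LatencyStatsSketch, latency_ns: u64) {
    // O(1): EWMA average (weight assumed: 90% history, 10% new sample)
    stats.avg_process_latency_ns =
        (stats.avg_process_latency_ns * 9 + latency_ns) / 10;
    // O(1): running maximum
    stats.max_process_latency_ns = stats.max_process_latency_ns.max(latency_ns);

    // Record the sample in the rolling window
    stats.latency_window[stats.window_index] = latency_ns;
    stats.window_index = (stats.window_index + 1) % stats.latency_window.len();
    if stats.samples < stats.latency_window.len() {
        stats.samples += 1;
        return; // defer the p99 calculation until the window has filled
    }

    // Insertion sort on a stack copy of the window (efficient for ~100 elements)
    let mut sorted = stats.latency_window;
    for i in 1..sorted.len() {
        let mut j = i;
        while j > 0 && sorted[j - 1] > sorted[j] {
            sorted.swap(j - 1, j);
            j -= 1;
        }
    }
    // 99th percentile from the sorted window
    stats.p99_process_latency_ns = sorted[sorted.len() * 99 / 100];
}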
pub const fn increment_dropped(&mut self)
Increment the count of dropped messages
This method should be called whenever a message is dropped due to errors, buffer overflow, or other exceptional conditions. Tracking dropped messages is critical for high-frequency trading systems to detect data quality issues.
§Performance Impact
This operation is O(1) and has minimal performance impact.
pub const fn increment_processed(&mut self)
Increment the count of processed messages
This method should be called for every successfully processed message. The counter is used to calculate throughput and other performance metrics.
§Performance Impact
This operation is O(1) and has minimal performance impact.
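As a usage illustration, both counters are typically driven from the feed's receive loop. The message source and handler below are placeholders for the application's own code, not part of this crate:
use rusty_feeder::feeder::FeedStats;

let mut stats = FeedStats::default();

// Placeholder message source and handler; a real application would decode and
// process feed messages here instead.
let incoming: Vec<&[u8]> = Vec::new();
let process_message = |_msg: &[u8]| -> Result<(), ()> { Ok(()) };

for msg in incoming {
    match process_message(msg) {
        // Count every successfully handled message for throughput metrics.
        Ok(()) => stats.increment_processed(),
        // Count drops from errors, buffer overflow, or other exceptional conditions.
        Err(()) => stats.increment_dropped(),
    }
}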
pub const fn update_memory_usage(&mut self, new_bytes: usize)
Update the memory usage estimate for resource monitoring
This method applies an exponentially weighted moving average (EWMA) to produce a stable estimate of memory usage over time, which is useful for detecting memory leaks and tracking resource utilization.
§Parameters
new_bytes - The latest memory usage sample in bytes
§Notes
The EWMA formula used gives 90% weight to historical data and 10% to the new sample, which provides good stability while still responding to significant changes. This approach is particularly useful for high-frequency trading applications where small variations in memory usage are expected but sustained trends are important to track.
Memory usage is tracked per feed, allowing for granular monitoring of different market data streams.
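Concretely, the weighting described above corresponds to an update of roughly the following form; this is an illustration of the stated 90%/10% weights, not the crate's exact arithmetic:
// Illustrative EWMA with 90% weight on the previous estimate and 10% on the
// new sample, per the note above; the crate's exact arithmetic may differ.
fn smoothed_memory_usage(previous_bytes: usize, new_bytes: usize) -> usize {
    (previous_bytes * 9 + new_bytes) / 10
}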