Automated Detection of Red Light Violations: A Signal-Processing Tutorial

2026-03-02

Build a compact, auditable red-light violation detector: HSV thresholding, vehicle tracking, Kalman fusion, and a simple classifier — practical steps for 2026 labs.

Why students, engineers, and instructors should build a red-light violation detector now

Automated driving systems (ADS) and driver-assist features are under intense scrutiny in 2026 — recent regulatory probes into Full Self-Driving (FSD) behavior have shown why a clear, testable signal-processing pipeline for red-light detection matters. If you are learning signal processing, building prototype safety sensors, or teaching applied classification, a small end-to-end project that fuses camera data with motion cues is a perfect, tractable exercise.

This tutorial walks you through a compact, reproducible pipeline — from raw camera frames to a decision that flags a potential red-light violation. You will learn practical thresholding, simple classification, sensor fusion, and evaluation steps that are directly applicable to homework, lab projects, and prototype systems used in safety analyses.

Executive summary (most important points first)

  • Goal: detect scenarios where a vehicle crosses an intersection or stop-line while the traffic signal is red.
  • Approach: combine simple computer vision thresholding for traffic-light color with vehicle tracking and a lightweight classifier to generate violation hypotheses.
  • Sensors: monocular camera (primary), optional radar/inductive loop/GNSS for improved reliability.
  • Algorithms: HSV thresholding, morphological filtering, Kalman filter for tracking, logistic regression / decision rule for classification.
  • Outcomes: the prototype runs in real time on edge devices; evaluation uses standard metrics (precision, recall, F1).

System overview: sensors, signals, and outputs

Sensors & data streams

Keep the prototype minimal and interpretable. You will need:

  • Monocular camera mounted to observe the intersection and approach lane(s). Frame rate >= 15 fps preferred.
  • Vehicle speed estimate: either derived from optical flow/GPS or from an auxiliary sensor (radar / loop detector).
  • Optional: timestamped vehicle presence sensor (inductive loop) to corroborate crossing events.

High-level pipeline

  1. Preprocess frames (resize, denoise).
  2. Detect and classify traffic light state (RED / YELLOW / GREEN) using color thresholding + morphological filtering.
  3. Detect vehicles and track their trajectories across frames (bounding boxes, centroids).
  4. Detect crossing of a virtual stop line and compute timing relative to red onset.
  5. Fuse signals into features and run a simple classifier or rule-based decision to flag a violation.

Step-by-step prototype walkthrough

1. Traffic-light detection using color thresholding

We favor HSV-space thresholding for robustness to illumination. Convert each frame from RGB to HSV and extract the region around the known traffic-light ROI.

Basic algorithmic steps:

  1. Crop ROI where the signal head sits (can be manual or detected with a lightweight detector).
  2. Blur ROI: apply Gaussian smoothing to reduce noise. Example: 5x5 kernel.
  3. Convert ROI to HSV and threshold for red. Note: red wraps the hue circle — combine two ranges.

Key threshold expression (conceptual):

R_presence = (H in [0, H1] OR H in [H2, 180]) AND S > S_min AND V > V_min

Typical HSV bounds (tune for your camera):

  • H1 ≈ 10, H2 ≈ 170 (OpenCV hue scale 0–180)
  • S_min ≈ 100, V_min ≈ 100

After thresholding, apply morphological opening and closing to remove speckle. Compute the binary mask area A_R. Declare the light red when A_R > A_thresh.

Rule: red_on = (A_R(t) > A_thresh) for at least T_stable ms to avoid flicker.
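The thresholding rule above can be sketched in NumPy. This is a minimal sketch, not the full pipeline: the bound names (`H1`, `S_MIN`, etc.) simply encode the typical values quoted above, and a real implementation would add `cv2.morphologyEx` opening/closing before measuring the mask area.

```python
import numpy as np

# Hypothetical bounds from the text (OpenCV hue scale 0-180); tune per camera.
H1, H2 = 10, 170
S_MIN, V_MIN = 100, 100

def red_mask(hsv):
    """Binary mask of 'red' pixels in an HSV image (H in 0-180, S/V in 0-255).

    Red wraps the hue circle, so two hue bands are OR-ed together.
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hue_red = (h <= H1) | (h >= H2)
    return hue_red & (s > S_MIN) & (v > V_MIN)

def red_on(hsv, area_thresh):
    """Declare the light red when the mask area A_R exceeds area_thresh pixels."""
    return int(red_mask(hsv).sum()) > area_thresh
```

In OpenCV you would typically obtain the HSV ROI with `cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)` and could express the same two hue bands with two `cv2.inRange` calls OR-ed together.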

2. Vehicle detection and tracking

Use an off-the-shelf detector (YOLOv5/YOLOv8, MobileNet-SSD) or a simple background-subtraction model for low-traffic scenes. For classroom prototypes, bounding-box detectors are straightforward and run on small GPUs or edge devices.

Extract vehicle centroid c(t) and bounding box width. Use a Kalman filter to smooth the centroid and estimate velocity v(t). The constant-velocity Kalman filter uses state x = [px, vx]^T for 1D motion along lane direction; equations:

Prediction: x_{k|k-1} = F x_{k-1|k-1}, P_{k|k-1} = F P_{k-1|k-1} F^T + Q

with F = [[1, dt], [0, 1]]. Update step uses measured position z_k = px_meas.

Estimate speed: v(t) ≈ vx from Kalman state. For sanity, also compute finite-difference speed: v_fd = (px(t) - px(t-1)) / dt and low-pass filter it.
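The prediction/update equations above fit in a few lines of NumPy. A minimal sketch of the 1D constant-velocity filter, with illustrative (not tuned) noise parameters `q` and `r`:

```python
import numpy as np

def make_cv_kalman(dt, q=1.0, r=4.0):
    """Constant-velocity Kalman model for state x = [px, vx]^T.

    q scales the process noise, r is the position-measurement variance;
    both are placeholders to tune against your tracker.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # we measure position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])   # discretized white-accel noise
    R = np.array([[r]])
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    """One predict + update cycle; returns the new state and covariance."""
    # Predict: x_{k|k-1} = F x, P_{k|k-1} = F P F^T + Q
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measured position z_k
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Feeding in centroid positions frame by frame, `x[1]` is the smoothed speed estimate v(t) used below.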

3. Crossing detection: stop line geometry and timing

Define a virtual stop line in image coordinates — a horizontal or slanted line aligned with the intersection stop bar. A vehicle crosses when its centroid passes this line.

Let y_stop be the image coordinate of the stop line. A crossing event occurs at time t_cross when centroid y(t_cross) <= y_stop (assuming decreasing y moves into intersection).

Measure the time delta relative to red onset: Δt = t_cross − t_red_onset. Raise a violation hypothesis if Δt > 0 and the crossing falls within a short window after red onset (for example, Δt < T_window ≈ 5 s); ground-truth rules vary by jurisdiction, so tune T_window.

4. Feature design and simple classifier

Construct a compact feature vector for each candidate event at the moment of crossing:

  • f1 = red_on (binary)
  • f2 = Δt (seconds, negative if crossed before red)
  • f3 = v_cross (vehicle speed at crossing)
  • f4 = distance_to_stop_at_red (how far from line when light turned red)
  • f5 = occlusion_flag (if detection confidence low)

Use logistic regression to map features to probability of violation. The logistic model is:

P(violation|f) = 1 / (1 + exp(−w^T f − b))

Train w and b on labeled events (see data section). As a simple operational rule, declare violation if P > 0.5 or use a higher threshold to prioritize precision.
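The logistic model above is small enough to train by hand. This toy gradient-descent trainer is a sketch for intuition only; on real labeled events you would more likely reach for `sklearn.linear_model.LogisticRegression`.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=5000):
    """Fit w, b by plain gradient descent on the log-loss (toy trainer)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # P(violation | f) per event
        g = p - y                                # log-loss gradient w.r.t. logits
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

def violation_probability(f, w, b):
    """P(violation | f) = 1 / (1 + exp(-(w^T f + b)))."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, f) + b)))
```

A usage sketch with two of the features above (f1 = red_on, f2 = Δt) on a handful of labeled events shows the model separating late red crossings from benign ones.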

5. Combining thresholds with classifier (hybrid approach)

For interpretability, combine a strict rule and the classifier:

  • Immediate flag rule: if red_on AND vehicle crossed and v_cross > v_high (e.g., 2 m/s), raise high-confidence violation.
  • Otherwise, compute logistic P and flag if P > τ (tunable).

This hybrid approach mirrors how many safety systems emphasize simple, auditable rules for high-risk conditions while using classifiers for ambiguous cases.
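A sketch of the hybrid decision as a single auditable function (names and defaults are illustrative; returning a reason string is one way to make the audit log explain *why* an event was flagged):

```python
def decide(red_on, crossed, v_cross, p_violation, v_high=2.0, tau=0.7):
    """Hybrid flag: strict rule first, classifier for ambiguous cases.

    Returns (flag, reason) so each decision is self-documenting in logs.
    """
    # Immediate high-confidence rule: red light, crossing, fast vehicle.
    if red_on and crossed and v_cross > v_high:
        return True, "rule:red+crossing+speed"
    # Otherwise defer to the learned probability with threshold tau.
    if crossed and p_violation > tau:
        return True, f"classifier:p={p_violation:.2f}>tau"
    return False, "no-flag"
```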

Evaluation: metrics, datasets, and labeling

Metrics

Use precision/recall/F1 to balance false alarms vs missed violations. For safety analysis, also report:

  • False Positive Rate (FPR): the proportion of non-violation events that are incorrectly flagged.
  • False Negative Rate (FNR): the proportion of true violations that are missed.
  • Latency: time from crossing to flag (must be near real-time for accountability).
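For a lab report, the headline metrics reduce to a few lines given counts of true positives, false positives, and false negatives per evaluation clip:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from event counts (0.0 when undefined)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```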

Data collection and labeling

Collect video from intersections with consent and appropriate privacy protections. Label events at frame-level: light state changes, vehicle IDs, crossing frames, ground-truth violation labels (by human review). Public datasets like BDD100k or Waymo Open Dataset offer vehicle and traffic light labels; synthetic datasets (CARLA, LGSVL) are convenient for rare-event augmentation.

Practical considerations for real prototypes

Time sync and timestamping

All streams must be timestamped with a common clock. Misalignment between camera and other sensors is the top source of false positives. If using smartphone GPS, beware of coarse temporal resolution; prefer NTP-synced cameras or hardware triggers.

Latency and edge deployment

For a lab prototype, a laptop/GPU is fine. In field prototypes, use a Jetson-class device or Coral TPU. Optimize by:

  • Running a tiny detection model (MobileNet / YOLO-Nano).
  • Processing only ROIs for signal-head detection.
  • Skipping frames adaptively when traffic is sparse.

Robustness to lighting and occlusion

Thresholding fails under strong sun glare, night-time, and dirt. Mitigations:

  • Use adaptive thresholds based on ambient brightness (estimate V channel histogram).
  • Use temporal voting: require red to be stable for T_stable before declaring red_on.
  • Fuse a non-visual sensor (inductive loop / radar) to confirm vehicle presence when camera confidence is low.
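The temporal-voting mitigation is a small stateful debounce; a sketch (the class name and default `t_stable` are illustrative):

```python
class DebouncedRed:
    """Temporal voting: report red_on only after the raw per-frame mask test
    has held continuously for t_stable seconds, suppressing flicker from
    glare or compression artifacts.
    """
    def __init__(self, t_stable=0.2):
        self.t_stable = t_stable
        self._since = None          # timestamp when raw red first became true

    def update(self, raw_red, t):
        """Feed one frame's raw red test at timestamp t; return debounced state."""
        if not raw_red:
            self._since = None      # any non-red frame resets the vote
            return False
        if self._since is None:
            self._since = t
        return (t - self._since) >= self.t_stable
```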

Multimodal sensor fusion

As of 2025–2026, the field has moved toward stronger safety evidence via multimodal fusion. Combining camera, radar, and V2X messages yields much stronger hypotheses about violations. A Kalman filter or factor-graph fusion (for more complex setups) can jointly estimate vehicle trajectory and signal state.

Synthetic data & domain adaptation

Simulation tools (CARLA, NVIDIA DRIVE Sim) and photorealistic synthetic datasets became mainstream by late 2025 for augmenting rare violation cases. Use domain-adaptive techniques (style transfer, fine-tuning on a small real set) to bridge sim-to-real gaps.

Federated learning & privacy-preserving evidence collection

Regulators and manufacturers increasingly prefer privacy-sensitive pipelines. In 2026, federated learning and on-device inference help collect model improvements without centralizing sensitive video. For auditing, store compact event descriptors and hashed frames rather than raw video.

Explainability and auditable rules

Regulatory interest (e.g., probes into FSD behavior) has stressed traceability and explainability. Hybrid systems (rules + interpretable classifier) are preferred for audit logs: timestamps, frames, mask overlays, and feature values allow investigators to replay decisions.

Ethics and responsible use

Prototype detectors are powerful tools for research and accountability, but they carry responsibilities. When evaluating or publishing results:

  • Comply with local privacy and data-collection laws.
  • Ensure human review of flagged incidents before public claims.
  • Provide reproducible documentation for datasets and thresholds.
"Auditable, interpretable pipelines are essential when safety and public trust are at stake." — practical design principle

Context note: NHTSA investigations into automated driving behavior in 2025 highlighted the need for testable, transparent pipelines that can reproduce why a system may have ignored a red signal. A small-scale or lab prototype built with these steps will help you understand the same failure modes regulators examine.

Actionable checklist: build this prototype in a weekend

  1. Gather tools: camera (30 fps), laptop, optional radar/GPS, OpenCV, PyTorch or TensorFlow, lightweight detector (YOLOv5/8).
  2. Collect 30–60 minutes of intersection video under varying light conditions or use CARLA for simulated footage.
  3. Annotate 50–200 crossing events with labels: red onset frames, crossing frames, violation=yes/no.
  4. Implement HSV thresholding for signal ROI; tune HSV and area thresholds until red_on detection accuracy > 90% on held-out clips.
  5. Add vehicle detector and Kalman tracker; verify centroid crossing works on test clips.
  6. Compute features and train logistic regression; validate with cross-validation and compute precision/recall.
  7. Deploy on edge and measure latency; tune thresholds for desired precision/recall tradeoff.

Minimal pseudocode (Python-like)

for frame in video_stream:
    t = frame.timestamp
    roi = crop_signal_head(frame)
    red_mask = hsv_threshold(roi)
    red_on = area(red_mask) > A_thresh and red_stable_for(T_stable)
    if red_on and red_onset_time is None:
        red_onset_time = t          # record onset for delta_t
    elif not red_on:
        red_onset_time = None       # reset when the light leaves red

    detections = vehicle_detector(frame)
    tracks = kalman_tracker.update(detections)

    for track in tracks:
        if crosses_stop_line(track.centroid, y_stop) and red_onset_time is not None:
            delta_t = t - red_onset_time
            features = [red_on, delta_t, track.v, track.conf]
            p = logistic.predict_proba(features)
            if rule_high_confidence(red_on, track.v) or p > tau:
                flag_violation(track.id, t, p)

Evaluation & reporting for classroom or lab reports

Report example table rows: true positives, false positives, false negatives, latency distribution, and per-condition breakdown (day/night, occlusion, wet pavement). For reproducibility, include model weights, threshold values, and a small representative video sample.

Final takeaways and next steps

Building a red-light violation detector is an ideal exercise to combine classical signal processing (thresholding, smoothing), estimation (Kalman filtering), and classification (logistic regression) with practical computer vision. The hybrid, auditable pipeline presented here balances interpretability and performance and aligns with 2026 trends toward multimodal fusion, synthetic augmentation, and privacy-aware on-device learning.

If you're preparing coursework, a lab project, or an audit-ready prototype, start small: get reliable red detection and crossing detection working first, then layer in fusion and learned classifiers. Keep logs and visual overlays for every flagged event — those artifacts are the core of evidence when systems are investigated.

Call to action

Ready to prototype? Download our starter code and annotated sample dataset to jumpstart your project (edge-ready scripts, detector configs, and evaluation notebooks). If you want a step-by-step guided lab or classroom assignment, request the lab pack — we include grading rubrics, dataset splits, and MATLAB/Python solutions. Build, test, and help make ADS behavior auditable and safer.
