Autonomous vehicles are moving beyond simply avoiding obstacles to understanding and following traffic regulations with human-like reasoning. This evolution addresses a critical gap in automated driving systems: the ability to make legally compliant decisions that can be justified in accident scenarios or legal proceedings.
The survey's central finding is that the most significant advances occur at the intersection of reliable perception, legal compliance, and justifiable decision-making. Researchers have developed systematic approaches that enable self-driving cars to incorporate explicit legal norms into their operational frameworks, creating systems that are both technically robust and legally defensible.
Methodologically, the research combines neural networks with symbolic logic systems through several innovative approaches. Neural-Symbolic Energy-Based Models (NeSy-EBMs) integrate domain knowledge directly into the learning process, scoring the compatibility between neural network predictions and logical constraints. For traffic sign recognition, Robust Logic-infused Deep Learning (RLDL) automatically extracts high-level features such as shape and color through Inductive Logic Programming, then incorporates these into the neural network's training process. These hybrid approaches maintain performance while significantly improving robustness against adversarial attacks.
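The compatibility scoring idea can be illustrated with a toy example. This is a minimal sketch, not the survey's actual formulation: the labels, the single hand-written shape rule, and the constraint weight `w` are all hypothetical. The energy combines the network's surprise at a label with a penalty for violating the symbolic rule, and the lowest-energy label wins.

```python
import math

def constraint_violation(label: str, shape: str) -> float:
    """Toy symbolic rule: an octagonal sign must be labeled 'stop'."""
    if shape == "octagon" and label != "stop":
        return 1.0
    return 0.0

def energy(nn_probs: dict, label: str, shape: str, w: float = 5.0) -> float:
    """Lower energy = neural prediction more compatible with the logic."""
    neural_term = -math.log(nn_probs.get(label, 1e-9))   # NN surprise at label
    symbolic_term = w * constraint_violation(label, shape)  # logic penalty
    return neural_term + symbolic_term

# The network slightly prefers 'yield', but the octagon constraint makes
# 'stop' the lower-energy (more compatible) label.
probs = {"stop": 0.40, "yield": 0.45, "speed_limit": 0.15}
best = min(probs, key=lambda lab: energy(probs, lab, shape="octagon"))
```

In a real NeSy-EBM the penalty would be differentiable and learned jointly with the network, but the principle is the same: logic reshapes the energy landscape rather than overriding the network outright.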
Results demonstrate practical improvements across multiple domains. The RLDL methodology maintains accuracy comparable to baseline convolutional neural networks on clean images while substantially outperforming them under adversarial attack. Knowledge-Refined Prediction Sets (KRPS) reduce prediction set sizes by up to 80% while increasing semantic consistency by 30% across related perception tasks. In object detection, Multi-label Object Detection Constrained Loss (MODCL) frameworks improve precision, recall, and F1-score compared to baseline methods.
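The prediction-set refinement mechanism can be sketched in a few lines. This is an illustrative toy, assuming hypothetical label names and a hand-written shape-to-sign ontology rather than KRPS's actual knowledge base: given candidate sets from two related tasks, labels inconsistent with the domain knowledge are pruned, shrinking the set while keeping it semantically coherent.

```python
# Plausible (illustrative) prediction sets for two related perception tasks.
sign_set = {"stop", "yield", "speed_limit_50"}
shape_set = {"octagon"}

# Hypothetical domain knowledge: which sign types each shape can carry.
SHAPE_ALLOWS = {
    "octagon": {"stop"},
    "triangle": {"yield"},
    "circle": {"speed_limit_50"},
}

def refine(sign_set: set, shape_set: set) -> set:
    """Keep only sign labels compatible with at least one predicted shape."""
    allowed = set().union(*(SHAPE_ALLOWS[s] for s in shape_set))
    return sign_set & allowed

refined = refine(sign_set, shape_set)  # smaller, knowledge-consistent set
```

Here the three-label set collapses to the single label consistent with the predicted octagonal shape, which is the kind of size reduction and cross-task consistency gain the reported numbers describe.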
The real-world implications extend beyond technical performance to legal accountability. Systems can now formalize traffic laws using temporal logics like Linear Temporal Logic (LTL) and Signal Temporal Logic (STL), which capture time-dependent requirements such as "if the light is red, the vehicle must eventually stop before the intersection." These formalizations enable automated checking of whether vehicle trajectories satisfy encoded legal rules. The development of "rulebooks" that prioritize legal requirements helps resolve conflicts when rules cannot be satisfied simultaneously, similar to how human drivers choose the "lesser evil" in complex situations.
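A discrete-time version of such a check can be sketched as follows. This is a simplified illustration, not a production monitor: the state fields, speed thresholds, and rule names are assumptions, and the red-light property is evaluated over a sampled trace rather than a continuous STL signal. The rulebook is modeled as an ordered list, with earlier rules taking priority.

```python
from dataclasses import dataclass

@dataclass
class State:
    t: float      # time (s)
    pos: float    # distance to stop line (m); positive = before the line
    speed: float  # m/s
    light: str    # "red" or "green"

def red_implies_stop(trace: list) -> bool:
    """G(red -> F(stopped before line)): after every red-light sample,
    some later sample must show the vehicle stopped before the stop line."""
    for i, s in enumerate(trace):
        if s.light == "red":
            if not any(x.speed < 0.1 and x.pos > 0 for x in trace[i:]):
                return False
    return True

def speed_limit(trace: list, vmax: float = 13.9) -> bool:
    """Simple invariant: speed never exceeds vmax (~50 km/h)."""
    return all(s.speed <= vmax for s in trace)

# Rulebook: earlier entries take priority when rules conflict.
RULEBOOK = [("red_implies_stop", red_implies_stop),
            ("speed_limit", speed_limit)]

# A compliant trace: the vehicle decelerates and stops 2 m before the line.
trace = [State(0, 30, 10.0, "red"),
         State(2, 12, 5.0, "red"),
         State(4, 2, 0.0, "red")]

violations = [name for name, rule in RULEBOOK if not rule(trace)]
```

An empty `violations` list is the auditable compliance record; when it is non-empty, the rulebook ordering tells the planner which violated rule mattered most, mirroring the "lesser evil" prioritization described above.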
Current limitations highlight areas needing further development. Most systems handle only subsets of traffic regulations rather than complete legal codes, and scalability remains challenging for full jurisdictional coverage. Adaptation to different driving cultures and regional variations presents ongoing difficulties, as overly strict rule-following may disrupt traffic flow while flexibility risks non-compliance. Cross-jurisdiction operations require seamless switching between different legal frameworks, and human-vehicle interaction challenges emerge when autonomous vehicles must anticipate and react to human drivers who don't always follow rules perfectly.
As autonomous vehicles approach widespread deployment, these legal reasoning capabilities become essential not just for technical operation but for public trust and regulatory acceptance. The ability to explain decisions in legal terms and maintain auditable records of compliance decisions represents a crucial step toward accountable autonomous systems that can operate safely within existing legal frameworks.
About the Author
Guilherme A.
Former dentist from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn