In an era where digital control systems power everything from smart homes to industrial robots, managing limited communication and computational resources has become a critical challenge. Most control systems now run on digital hardware, requiring careful management of when to sample sensors, update control actions, and transmit data. This work develops a self-triggered control approach for linear systems in which sensors update independently, addressing the fundamental question of how to optimize sensor utilization without compromising stability. At each sampling instant, the controller computes optimal horizons, selecting which sensor to read over the next several time steps so as to maximize the intervals between readings while maintaining system performance.
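The self-triggered idea can be sketched in a few lines. This is a minimal illustration under assumed dynamics, not the paper's actual system: the controller computes its input from the latest reading, predicts the state forward with that input held, and keeps the sensor idle for as long as a quadratic Lyapunov bound still holds. The matrices `A`, `B`, `K`, `P` and the decay rate `rho` below are illustrative placeholders.

```python
import numpy as np

# Assumed discrete-time plant and stabilizing feedback (NOT from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
K = np.array([[-1.0, -3.0]])   # assumed state-feedback gain
P = np.eye(2)                  # assumed quadratic Lyapunov matrix, V(x) = x'Px
rho = 0.99                     # required geometric decay per step

def V(x):
    return (x.T @ P @ x).item()

def self_triggered_horizon(x, h_max=20):
    """Largest h such that, holding u = Kx for h steps, the predicted
    Lyapunov value still satisfies V(x_h) <= rho**h * V(x)."""
    u = K @ x                        # input computed from the current reading
    x_pred, best = x.copy(), 1
    for h in range(1, h_max + 1):
        x_pred = A @ x_pred + B @ u  # input held between sensor readings
        if V(x_pred) <= rho**h * V(x):
            best = h                 # the sensor can safely stay idle h steps
        else:
            break
    return best

x0 = np.array([[1.0], [0.5]])
print(self_triggered_horizon(x0))
```

In the paper's setting this test is wrapped in an optimization over which sensor to read and over sequences of horizons; the sketch only shows the core trade-off of trading predicted Lyapunov decay for longer idle intervals.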
The researchers discovered that by actively scheduling sensor readings rather than sampling them periodically, they could dramatically reduce sensor utilization while guaranteeing system stability. The key finding is that this approach achieves 59-74% reductions in sensor usage compared to traditional periodic sampling, as demonstrated in simulations. For unperturbed systems, the approach ensures exponential stability, while for systems with bounded disturbances, it guarantees global uniform ultimate boundedness—meaning the system remains within acceptable bounds despite external influences. This represents a significant advancement in resource-aware control, particularly for networked systems where communication constraints are paramount.
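Ultimate boundedness can be made concrete with a one-dimensional example (again illustrative, not the paper's system): for x_{k+1} = a·x_k + w_k with |a| < 1 and |w_k| ≤ w̄, every trajectory eventually enters and stays inside the ball |x| ≤ w̄ / (1 − |a|), no matter how large the initial condition is.

```python
import numpy as np

# Scalar illustration of global uniform ultimate boundedness (assumed values).
a, w_bar = 0.8, 0.1
bound = w_bar / (1 - abs(a))   # ultimate bound: 0.5

rng = np.random.default_rng(0)
x = 10.0                       # start far outside the bound
inside = None
for k in range(200):
    x = a * x + rng.uniform(-w_bar, w_bar)   # bounded disturbance each step
    if inside is None and abs(x) <= bound:
        inside = k             # first step at which the ball is entered
print(bound, inside)
```

The same mechanism underlies the paper's guarantee for the perturbed case: the disturbance prevents convergence to zero, but the Lyapunov decrease outside a ball forces trajectories into it.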
The methodology involves two distinct implementations to address computational complexity. In the online version, the controller solves an optimization problem at each sampling instant to determine the optimal sensor selection sequence, achieving theoretical optimality but requiring higher computational resources. The offline version precomputes optimal horizons using conic partitioning of the state space, reducing online computation to a simple lookup table operation. Both approaches use a quadratic Lyapunov function to ensure stability, with the controller calculating sequences that maximize the average sampling intervals for each sensor based on the current system state. The system model considers linear time-invariant control systems with asynchronous measurements, where sensors operate on independent schedules rather than synchronized clocks.
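The offline idea can be sketched as follows. For a linear system, the admissible horizon depends only on the direction of the state, so the state space can be partitioned into cones with one precomputed horizon per cone; online, the controller just looks up the cone containing the current state. The angular-sector partition and the `dummy_horizon` placeholder below are illustrative stand-ins for the paper's construction, not its actual algorithm.

```python
import numpy as np

N_CONES = 16  # assumed number of conic sectors in a 2-D state space

def cone_index(x):
    """Online step: map a state to its cone in constant time."""
    angle = np.arctan2(x[1], x[0]) % (2 * np.pi)
    return int(angle // (2 * np.pi / N_CONES))

def build_table(horizon_fn):
    """Offline step: run the expensive horizon computation once per cone,
    evaluated at a representative unit vector on the cone's bisector."""
    table = []
    for i in range(N_CONES):
        theta = (i + 0.5) * 2 * np.pi / N_CONES
        rep = np.array([np.cos(theta), np.sin(theta)])
        table.append(horizon_fn(rep))
    return table

# Hypothetical placeholder for the optimization the paper solves offline.
dummy_horizon = lambda x: 1 + int(2 * abs(x[0]))

table = build_table(dummy_horizon)
x = np.array([1.0, 1.0])
h = table[cone_index(x)]   # online: a table lookup replaces the optimization
print(h)
```

This is exactly the complexity trade the article describes: the offline version pays the optimization cost once per cone, and the online cost collapses to an index computation and an array access.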
Simulation results validate the effectiveness of both implementations. For the perturbation-free case using the online procedure with a system described by specific matrices and initial conditions, sensor utilization was reduced by 73.58% compared to using sensors at every time step. The offline procedure achieved a 70.27% reduction under similar conditions. In the perturbed case with bounded disturbances, the online implementation reduced sensor usage by 68.94%, while the offline version achieved a 59.21% reduction. These results, detailed in Figures 6-17 of the paper, demonstrate consistent performance improvements across different scenarios. The data show that the approach maintains system stability while significantly extending the intervals between sensor readings, as evidenced by the evolution of system states and Lyapunov functions in the simulations.
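For reference, utilization figures of this kind are simply the fraction of time steps at which a sensor was not read, relative to reading it at every step. The counts below are made-up placeholders, not the paper's data:

```python
def utilization_reduction(readings, total_steps):
    """Percent reduction versus sampling at every one of total_steps."""
    return 100.0 * (1 - readings / total_steps)

# e.g. a sensor read 264 times over 1000 steps is a ~73.6% reduction
print(utilization_reduction(264, 1000))
```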
The implications of this research are substantial for real-world applications where energy efficiency and resource constraints are critical. By reducing sensor utilization by up to 74%, this approach can extend battery life in wireless sensor networks, decrease communication overhead in industrial control systems, and improve efficiency in networked robotics. The framework enables resource-aware control in systems with limited computational and communication capabilities, making it particularly valuable for Internet of Things devices, autonomous vehicles, and smart infrastructure. The ability to maintain stability with significantly fewer sensor readings means systems can operate more efficiently without sacrificing reliability or performance.
Despite these advances, the work acknowledges several limitations. The research was conducted in 2017-2018, and the literature review has not been updated to reflect subsequent developments in the field. The paper explicitly notes that three extensions merit investigation: adaptive triggering thresholds that adjust to observed disturbance patterns, optimization of sensor subsets allowing multiple sensors per update, and extension to nonlinear systems. Additionally, the computational complexity of the online implementation may be prohibitive for some real-time applications, though the offline version addresses this through precomputation. The framework currently focuses on linear systems with bounded disturbances, leaving open questions about its applicability to more complex nonlinear dynamics or unbounded perturbations.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.