A longstanding mathematical problem that has hindered progress in robotics, control systems, and machine learning has finally been solved. Researchers have developed a new generalized matrix inverse that remains consistent when variables change units (such as from imperial to metric, or from liters-per-hour to liters-per-minute), addressing a critical gap in linear algebra tools. This breakthrough completes a trilogy of generalized inverses, alongside the Drazin inverse for similarity transformations and the Moore-Penrose inverse for unitary transformations, providing a comprehensive framework for handling singular matrices in practical applications. The new inverse ensures that calculations in fields like tracking and data fusion yield reliable results regardless of measurement scales, moving beyond the arbitrary criteria often imposed by existing generalized inverses.
The key finding is a unit-consistent generalized matrix inverse, denoted A^-U, which satisfies specific algebraic properties while preserving consistency under diagonal transformations. For any nonsingular diagonal matrices D and E, the inverse obeys (DAE)^-U = E^-1 A^-U D^-1, meaning it correctly adjusts when units are changed on both sides of a matrix equation. This contrasts with the widely used Moore-Penrose inverse, which only guarantees consistency under unitary transformations like rotations, not unit changes. The paper demonstrates this with a concrete example: given matrices A and D, the Moore-Penrose inverse fails to satisfy (DAD^-1)^-P = D A^-P D^-1, while the new inverse achieves (DAD^-1)^-U = D A^-U D^-1, ensuring predictions remain valid across different measurement systems.
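This failure is easy to reproduce numerically. The sketch below uses an arbitrary singular matrix rather than the paper's example (for a nonsingular A the pseudoinverse equals the true inverse, so the discrepancy only appears in the singular case):

```python
import numpy as np

# A rank-deficient matrix: singularity is essential, since for invertible A
# the Moore-Penrose inverse reduces to the ordinary inverse and consistency holds.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# A change of units on the state variables: a non-uniform diagonal scaling.
D = np.diag([2.0, 1.0])
Dinv = np.linalg.inv(D)

lhs = np.linalg.pinv(D @ A @ Dinv)   # (D A D^-1)^-P
rhs = D @ np.linalg.pinv(A) @ Dinv   # D A^-P D^-1

print(np.allclose(lhs, rhs))         # False: Moore-Penrose is not unit-consistent
```

The two sides agree only for unitary transformations, which is exactly the gap the unit-consistent inverse closes.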
The methodology builds on scaling functions that adjust matrices to be invariant under unit changes. For an m × n matrix A, the researchers define left and right diagonal scale functions, DUL[A] and DUR[A], which transform A into a scaled matrix S = DUL[A] · A · DUR[A]. This scaling ensures that the product of the nonzero elements in each row and column is 1, making S unit-invariant. The unit-consistent inverse is then computed as A^-U = DUR[A] · S^-P · DUL[A], where S^-P is the Moore-Penrose inverse of S. The approach leverages matrix scaling theory, particularly Rothblum and Zenios's Program II, to handle matrices with zeros, and uses functions like the geometric mean of nonzero elements to achieve convergence in iterative scaling processes.
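For a matrix with no zero entries, the required scaling admits a simple shortcut: double-centering the log-magnitudes makes every row and column of log|S| sum to zero, i.e., the product of the elements in each row and column of |S| equals 1. The following NumPy sketch follows that idea; the function name `uc_pinv` and the double-centering step are illustrative assumptions, not the paper's general construction, which handles zero entries via iterative matrix scaling:

```python
import numpy as np

def uc_pinv(A):
    """Unit-consistent pseudoinverse sketch for a matrix with no zero entries.

    Scales A so that each row and column of |S| has product 1, takes the
    Moore-Penrose inverse of S, then undoes the scaling on the outside.
    """
    M = np.log(np.abs(A))
    gm = M.mean()
    # Double-centering: row/column log-means are absorbed into the scale factors.
    r = -M.mean(axis=1) + gm / 2.0      # left (row) log-scales
    c = -M.mean(axis=0) + gm / 2.0      # right (column) log-scales
    DL = np.diag(np.exp(r))             # plays the role of DUL[A]
    DR = np.diag(np.exp(c))             # plays the role of DUR[A]
    S = DL @ A @ DR                     # scaled, unit-invariant matrix
    return DR @ np.linalg.pinv(S) @ DL  # A^-U = DUR[A] . S^-P . DUL[A]

# Unit consistency: (DAE)^-U == E^-1 A^-U D^-1 for positive diagonal D and E.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])              # singular, but with no zero entries
D = np.diag([3.0, 0.5])
E = np.diag([2.0, 5.0])
lhs = uc_pinv(D @ A @ E)
rhs = np.linalg.inv(E) @ uc_pinv(A) @ np.linalg.inv(D)
print(np.allclose(lhs, rhs))            # True
```

The same scaled matrix S appears for A and for DAE, since the diagonal factors are absorbed by the centering step; only the outer scale factors change, which is what makes the result unit-consistent.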
The paper shows that the new inverse meets all required generalized inverse properties: AA^-U A = A, A^-U AA^-U = A^-U, and rank[A^-U] = rank[A]. In tests, such as with the example matrices A and D from the paper, the unit-consistent inverse correctly yields (DAD^-1)^-U = D A^-U D^-1, while the Moore-Penrose inverse does not. The paper also extends the framework to a unit-invariant alternative to the singular value decomposition (UI-SVD), where A = D · USV* · E, with D and E derived from the scaling functions. This produces unit-invariant singular values that are robust to diagonal transformations, unlike conventional singular values, which are only invariant to unitary changes. Implementations in Octave/Matlab confirm the inverse's practicality, with complexity dominated by the Moore-Penrose calculations.
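The motivation for the UI-SVD is easy to see numerically: conventional singular values, while invariant to unitary transformations, change under diagonal scaling. A quick check with arbitrary matrices (not taken from the paper):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
D = np.diag([10.0, 1.0])   # non-uniform unit change applied on the left
E = np.diag([1.0, 0.5])    # and another on the right

sv_A   = np.linalg.svd(A, compute_uv=False)
sv_DAE = np.linalg.svd(D @ A @ E, compute_uv=False)

# Unitary transforms would leave the singular values unchanged;
# diagonal (unit-change) transforms do not.
print(np.allclose(sv_A, sv_DAE))   # False
```

The UI-SVD instead extracts singular values from the scaled, unit-invariant matrix, so they survive any such diagonal rescaling.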
The implications are significant for real-world applications where unit consistency is crucial. In robotics and control systems, as noted in the paper, changes in state variable units can disrupt calculations if the Moore-Penrose inverse is used, leading to unreliable outputs. The new inverse ensures that solutions to linear models, like ŷ = A · θ̂, remain valid under unit transformations, preventing errors in parameter estimation. For machine learning, it enables unit-consistent gradient descent and deep learning optimizations, moving beyond arbitrary least-squares assumptions. In data fusion and image processing, the UI-SVD can create signatures robust to scanning variations, improving tasks like passport verification or database searches where amplitude changes occur.
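The parameter-estimation issue can be made concrete with a hypothetical rank-deficient design matrix (not from the paper): if a feature's units change, one would expect the estimate θ̂ = A^-P · y to simply rescale inversely, but the minimum-norm solution produced by the Moore-Penrose inverse does not behave that way:

```python
import numpy as np

# Rank-deficient design matrix: two perfectly correlated features.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
y = np.array([1.0, 2.0, 3.0])

# Change the units of the second feature (e.g., hours -> minutes).
E = np.diag([1.0, 60.0])

theta     = np.linalg.pinv(A) @ y       # estimate in the original units
theta_new = np.linalg.pinv(A @ E) @ y   # estimate after the unit change

# A unit-consistent estimator would give theta_new == E^-1 @ theta.
print(np.allclose(theta_new, np.linalg.inv(E) @ theta))   # False
```

The estimate silently shifts with the choice of units, which is exactly the kind of hidden unreliability the unit-consistent inverse is designed to eliminate.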
Limitations include the need for matrices with full support (no all-zero rows or columns) for certain scaling constructions, though the general solution handles zeros via matrix scaling theory. The inverse is not unique if alternative scaling functions are used, but the paper proves uniqueness under the specified construction. Additionally, the new inverse may not suit applications requiring blow-up behavior for singular elements, as enforcing algebraic consistency can suppress infinite values, potentially leading to counter-intuitive results in some practical scenarios. The paper cautions against blindly applying any generalized inverse, emphasizing that the choice should depend on preserving application-specific properties, with the new inverse tailored for unit consistency.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.