The Vergence Automation Mission
Creating the Autonomous Navigation System of the Future
We realize the boldness of our mission, especially as a start-up just coming out of stealth mode. The principals at Vergence Automation have been in the sensor, mapping, and infrastructure business for over twenty years. Not only have we been there and done that, we have worked with entities that are best of breed in their respective core areas. At Vergence Automation we will bring forward the capabilities of these strategic partners to create the autonomous navigation system of the future.
The first innovation in the future autonomous navigation technology stack is the 4D Camera. Visitors at conferences and presentations inevitably ask, “Why is it called 4D?” The answer is revealed in our roadway imagery video, which shows 3D imagery collected along a roadway, where each pixel of each image carries a depth component. The video presents the imagery by showing only the intensity values, which makes for a boring, bland video. We could add flashy effects, such as colorizing each pixel according to depth or vertical location, to make humans think the output is more than it really is.
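To make “each pixel has a depth component” concrete, here is a minimal sketch of what such a pixel record might look like, along with one of the “flashy” depth-colorizing effects mentioned above. The names, units, and layout are our own illustration, not Vergence's actual data format.

```python
from dataclasses import dataclass

@dataclass
class Pixel4D:
    # Hypothetical 4D Camera sample: image coordinates plus depth and intensity.
    row: int
    col: int
    depth_m: float    # range to the imaged surface, in meters (the "4th" dimension)
    intensity: float  # reflected-signal intensity, 0.0 to 1.0

def colorize_by_depth(pixels, max_depth_m):
    """Map each pixel's depth to a 0-255 grayscale value (a 'flashy' effect)."""
    return [min(255, int(255 * p.depth_m / max_depth_m)) for p in pixels]

frame = [Pixel4D(0, 0, 5.0, 0.8), Pixel4D(0, 1, 10.0, 0.4)]
print(colorize_by_depth(frame, max_depth_m=10.0))  # → [127, 255]
```

A real sensor would of course emit dense arrays rather than per-pixel objects; the point is only that every sample pairs an intensity with a depth.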
However, we like boring. We like the boring intensity-only output, because the 4D Camera will produce the same boring intensity values regardless of the amount of ambient scene lighting and across a variety of atmospheric conditions. Humans get bored by non-varying imagery, but machines love it. Boring is good because the industry now has access to a sensor that will deliver the same boring imagery in all conditions.
The 4D Camera produces images in what we call the Lighting-invariant Imaging Model (once again a boring name, with a boring acronym: LiIM). LiIM (pronounced like “lime,” for true acronym junkies) ensures that roadway scenes will be imaged reliably and consistently in all conditions. LiIM also signals the end of machine learning and deep learning for autonomous vehicle control. The need to train neural networks to recognize roadway features in various lighting conditions and scenarios will become a thing of the past, simplifying on-board software for future systems. In addition, HAD (highly automated driving) maps that utilize LiIM will be capable of sensing infrastructure changes and deterioration when used with the 4D Camera in an autonomous vehicle navigation system.
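The change-detection idea above can be sketched very simply: because LiIM intensities do not vary with ambient lighting, a live frame can be compared directly against the intensities stored in a HAD map, and any large deviation suggests real infrastructure change rather than a lighting artifact. The data layout and threshold here are illustrative assumptions, not Vergence's algorithm.

```python
def detect_changes(map_intensity, live_intensity, threshold=0.1):
    """Flag positions where live LiIM intensity deviates from the HAD-map value.

    With lighting-invariant imagery, a deviation beyond the threshold points to
    genuine change or deterioration (e.g., a faded lane marking), not lighting.
    The 0.1 threshold is a made-up example value.
    """
    return [abs(m - l) > threshold
            for m, l in zip(map_intensity, live_intensity)]

# The middle value drops sharply, as a faded lane marking might.
print(detect_changes([0.9, 0.5, 0.7], [0.88, 0.2, 0.71]))  # → [False, True, False]
```

The same comparison done on conventional camera imagery would require first normalizing away sun angle, shadows, and exposure; the claimed benefit of LiIM is that this normalization step disappears.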