By: Chris Jacobs, Vice President, Autonomous Transportation and Automotive Safety, ADI
Not your grandmother's technology
Radar was first used by Christian Hülsmeyer to detect ships in 1904, more than a century ago. Common applications include military radar, civil air traffic control, and, of course, police speed traps. This long history has fed a misconception that the technology is mature and that little development remains in the field. In fact, both imaging radar and cooperative radar are undergoing disruptive innovation, and what sets Analog Devices (ADI) apart is how it implements radar in automotive applications and the unique software and algorithms it brings to the task.
ADI has been active in the automotive industry for the past 25 years and its products are used in both passive and active safety applications. For the past 15 years, ADI has had a presence in the automotive radar supply chain with DSPs and data converters, most recently offering 24 GHz and 77 GHz/79 GHz radar chipsets.
“Advanced driver assistance systems are here, autonomous driving is on the horizon, and road safety is paramount,” said Chris Jacobs, vice president of Autonomous Transportation and Automotive Safety at ADI. “So my engineers and I are committed to using advanced features and technologies to achieve higher performance and autonomy, thereby saving lives. We estimate that automotive sensors based on our products save eight lives per day.”
To protect drivers, passengers, and pedestrians, a great deal of innovation is required in both hardware and software. A more efficient, optimized radar technology must be developed that delivers the high performance, functionality, and reliability of aerospace and defense systems at a size and cost suited to the private vehicle market.
“While a $250,000 high-resolution imaging radar system is nothing compared to the total price of a multi-million-dollar military tank, it is prohibitively expensive for a family car with an average price around $30,000,” said Mike Keaveney, technical director of ADI's Autonomous Transportation and Automotive Safety Group. “We're exploring how to customize, miniaturize, and harden the technology, and reduce its cost, size, weight, and power requirements, so that it can be used in every car.”
The transfer and adoption of high-cost, high-performance radar technology for military and aerospace, and its installation in automobiles, presents significant technical, aesthetic, and economic challenges. The key challenge is not only to reduce size, weight and power (SWaP), but also to increase performance while reducing cost. Radar must not only be capable of object detection, but also object classification. This requires a higher resolution of radar images than current state-of-the-art systems.
These are the goals ADI is committed to achieving in order to advance the technology, ensure safety, and bring cost-effective automotive radar to consumers:
• Increase angular resolution to the level required for highly autonomous driving without increasing size, cost, or power budgets.
• Increase the number of reflection points returned by low-reflectivity targets.
• Significantly reduce detection latency, especially for laterally moving objects, improving response time so the vehicle can take evasive action in emergencies.
• Optimize the form factor (size, weight, and power) while maintaining high performance.
• Maintain the aesthetics of the system without affecting the industrial design of the vehicle.
• Enable high-resolution radar at a price and in a form factor acceptable under mass-market automotive cost constraints.
• Keep costs within the range that price-sensitive car buyers will bear, because they are the ones ultimately paying for it all.
• Continue to comply with government-mandated advanced driver assistance system (ADAS) safety features (such as the 2022 US autonomous emergency braking requirement). Radar will no longer be an option but standard equipment, so the key is to keep driving system cost down to a price point acceptable to both consumers and OEMs, while maintaining the performance these demanding ADAS applications require.
Today’s automotive radar units are smaller than a cell phone and can detect large obstacles in the blind spots in front of, behind, or beside the vehicle. But that is not enough.
The concept of imaging radar, with a much higher level of angular resolution, is a desirable feature, especially for robotaxis. High resolution supports not only object detection (there is something ahead) but also object classification (is it a bicycle, a car, an adult, or a child?).
To achieve higher resolution, imaging radars use high-bandwidth signal processing, digital beamforming, and phased array techniques. All of this relies on substantial hardware and processing power: the antenna size scales with the desired angular resolution, and the channel count grows to populate that antenna area. “Simply throwing more expensive hardware at the problem is a ‘brute force’ route to higher resolution,” says Chris Jacobs.
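The scaling described above can be sketched with the standard aperture rule of thumb (a textbook approximation, not ADI's design math): angular resolution in radians is roughly wavelength divided by antenna aperture. The 77 GHz carrier is from this article; the helper function is illustrative.

```python
# Rule-of-thumb sketch: angular resolution (rad) ~ wavelength / aperture.
# Assumes a 77 GHz automotive radar, per the article; the helper below
# is illustrative, not ADI's actual design math.
import math

C = 3.0e8  # speed of light, m/s

def aperture_for_resolution(freq_hz: float, resolution_deg: float) -> float:
    """Approximate antenna aperture (m) needed for a given angular resolution."""
    wavelength_m = C / freq_hz
    return wavelength_m / math.radians(resolution_deg)

for res_deg in (10.0, 1.0):
    d = aperture_for_resolution(77e9, res_deg)
    print(f"{res_deg:>4.0f} deg resolution -> aperture ~ {d * 100:.1f} cm")
```

At 77 GHz, 10° resolution needs only a couple of centimeters of aperture, while 1° needs on the order of 20 cm, which is why antenna size, channel count, and cost climb so quickly with resolution.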
Today, ADI is working closely with leading OEMs and Tier 1 suppliers to develop breakthrough ways to improve radar and meet its modern challenges. The radars in today's cars are not very high-resolution; they see only a clump of reflections. Such a radar can detect that something is near the car, whether a motorcycle, a person, or a large truck, but it cannot determine what the object is. Driven by advances in detection hardware and software algorithms, radar is gaining the resolution to distinguish the properties of detected objects, bringing us one step closer to safe, fully autonomous vehicles.
Resolution Issues and the Challenge of Object Discrimination
Existing conventional automotive radars provide a horizontal angular resolution of about 10° to 20° over a large field of view.
Figure 1. Low-resolution radar and hidden objects. The angular resolution of the existing non-imaging radar is generally 10° to 20°, and it will treat 3 pedestrians as one object.
Figure 2. High-resolution imaging radar can reveal hidden objects.
The angular resolution of imaging radar is 1° to 2°, roughly 10 times finer than non-imaging radar. Each data cell collects information at 1° to 2° resolution, helping to distinguish and locate the three pedestrians.
Higher resolution has a price: more data. As resolution increases, so does the data volume, which demands more computing power. That is why advanced processing modes that handle all of this data efficiently are critical to managing large data volumes at low power consumption. Efficient central processing, or processing at the edge, will be the foundation of future radar.
Next Steps: Cooperative Radar and Communication Requirements
Mike Keaveney said: “Cooperative radar that leverages existing vehicle radar sensor hardware is the future of the automotive space. The challenge is to make separate radars operate coherently, working together to combine their detections into a single high-resolution coherent image. Once the economics of cooperative radar are achieved, there are many advantages to enjoy.”
Cooperative radar provides imaging radar performance without significantly increasing the size of the individual radar systems already in the vehicle. This is because the effective aperture is now set by the distance between two (or more) distributed radar sensors with overlapping fields of view, rather than being predetermined by the physical size of either sensor.
Figure 3. Narrow aperture of primary radar.
Primary radars are now commonly used in automobiles.
The radar signal from each emission source reflects off an object and returns to its origin. The aperture, which governs primary radar performance, is only the few-inch width of the radar unit itself.
Larger aperture for cooperative radar/SuperRADAR
SuperRADAR is ADI’s method of implementing a coherence algorithm across multiple radar sensors with overlapping fields of view.
SuperRADAR-based cooperative radar uses low-speed links for coarse timing between radar sources. Each sensor sends its data to a central processor, or possibly from one radar to another for processing at the edge sensor, which is more economical.
Chris Jacobs said: “Traditional cooperative radar systems are not easy to implement because of the need to run high-frequency links between the radars. The hardware overhead and cost of achieving this coherence are very high.”
For automotive radar, cooperative radar must become more cost-effective. “The traditional approach of adding more hardware to the car is not the solution; we have to look at the problem differently,” Jacobs said. “We can use the same hardware already in the system and combine it intelligently with algorithms to improve overall system performance. The SuperRADAR approach allows the radar system to produce a coherent superposition of multiple incoherent images.”
Figure 4. Larger aperture of cooperative radar.
How does cooperative radar work? The signal from each source bounces off an object and is captured by both radar receivers. The same target therefore gets two looks (two different views) and twice the time on target, whereas a primary radar gets only one look and 1x the time. And because the two radars work together, the effective radar aperture (which is proportional to performance) becomes the width of the front of the car, roughly the 4 feet between the two corner radars, rather than the few inches of a single radar unit.
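The aperture comparison above can be put into rough numbers with the same wavelength-over-aperture rule of thumb (an approximation only; the ~4 inch and ~4 foot apertures are taken from the article's "inches" and "about 4 feet"):

```python
# Rough lambda/D estimate of angular resolution for a single corner radar
# versus a cooperative pair spanning the front of the car. Illustrative
# numbers only: ~0.10 m for a single unit, ~1.22 m (4 ft) for the pair.
import math

C = 3.0e8  # speed of light, m/s

def angular_resolution_deg(freq_hz: float, aperture_m: float) -> float:
    """Approximate angular resolution (deg) for a given aperture."""
    return math.degrees((C / freq_hz) / aperture_m)

single = angular_resolution_deg(77e9, 0.10)  # single ~4 in radar unit
coop = angular_resolution_deg(77e9, 1.22)    # ~4 ft corner-to-corner baseline
print(f"single sensor    : ~{single:.1f} deg")
print(f"cooperative pair : ~{coop:.2f} deg")
```

Under these assumptions, the wide baseline improves angular resolution by roughly the ratio of the apertures, about an order of magnitude.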
This approach allows for cost-effective sensor designs that place sensors at multiple points around the vehicle, enabling excellent object detection and classification.
Advantages of SuperRADAR: 1 + 1 > 2
SuperRADAR not only reduces size, weight, and power consumption; it also brings more functionality to the system, delivering higher resolution with significantly less hardware and better application performance at a more reasonable cost.
More reflection points: 2x the time on the target
SuperRADAR delivers twice the performance using the same amount of hardware; alternatively, it can maintain the same performance with half the radar channels. “With SuperRADAR, we get twice the resolution of a single radar,” Chris Jacobs said. “Additional processing power may be required, but the roadmap for automotive-grade DSPs/MCUs is sufficient for those processing needs.”
SuperRADAR is, in effect, radar fusion: two separate radar views are fused, so the resulting resolution is better than either view alone. “Fusion will be the standard way of implementing ADAS in the future,” Jacobs said.
Lower Latency: Quickly Calculating Lateral Velocity Saves Lives
A key capability of a vehicle imaging system is being able to quickly calculate lateral velocity: the speed at which an object moves orthogonally (at right angles) to the vehicle's direction of travel. Yet to achieve a sufficiently low false positive rate, even a good, mostly camera-based machine learning algorithm still needs around 300 ms to detect lateral movement. For a pedestrian stepping in front of a vehicle traveling at 60 miles per hour, milliseconds can determine the severity of injury, so response time is critical.
The 300 ms delay comes from the time the system needs to perform delta vector computations across 10 consecutive video frames, the number required for reliable detection with an acceptably low false positive rate. Because of SuperRADAR's wide effective aperture and the way it coherently combines images from two or more sensors, it can accurately calculate the tangential and radial components of velocity within a single 30 ms measurement period, 10 times faster than current state-of-the-art systems. This low-latency detection beats even the roughly 100 ms reaction time of an F1 driver, let alone the average driver's reaction time!
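The stakes of that latency difference are plain arithmetic; the sketch below converts the article's 60 mph speed and 300 ms versus 30 ms latencies into distance traveled:

```python
# How far a car travels during the detection latency window.
# Figures (60 mph, 300 ms vs 30 ms) come from the article.
MPH_TO_MPS = 0.44704   # miles per hour -> meters per second
FT_PER_M = 3.28084     # meters -> feet

def distance_traveled_ft(speed_mph: float, latency_ms: float) -> float:
    """Distance (ft) covered at speed_mph during latency_ms."""
    return speed_mph * MPH_TO_MPS * (latency_ms / 1000.0) * FT_PER_M

for latency_ms in (300, 30):
    d = distance_traveled_ft(60, latency_ms)
    print(f"{latency_ms:>3} ms at 60 mph -> {d:.1f} ft")
```

At 60 mph, the car covers about 26 feet during a 300 ms detection window, versus under 3 feet at 30 ms.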
Figure 5. Today’s imaging systems have 300 ms of latency and 10 frames of images to detect orthogonal motion.
With today’s common imaging technology, if someone is crossing the road, multiple camera images are needed to establish what is moving. Each camera frame takes 30 ms, so 10 frames take 300 ms. In that time, a car traveling at 60 mph covers roughly 26 feet.
Figure 6. The SuperRADAR system has a 30 ms frame delay to detect orthogonal motion.
The two radars work together, and because the two radar sources are offset, they can triangulate a moving object: radar beam 1 first maps the person at position 1, and 30 ms later radar beam 2 maps the person at position 2. This tells the car where the person is moving.
SuperRADAR can identify an object moving across the road in one-tenth the time of conventional imaging radar.
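Once two time-stamped position fixes exist, the lateral velocity estimate itself is a simple difference over time. The numbers below (a 4 cm lateral step between 30 ms frames) are hypothetical, chosen only to illustrate the calculation:

```python
# Lateral (cross-track) speed from two position fixes taken dt_s apart.
# The 4 cm / 30 ms example values are hypothetical, for illustration.
def lateral_velocity_mps(x1_m: float, x2_m: float, dt_s: float) -> float:
    """Lateral velocity (m/s) from two lateral position fixes."""
    return (x2_m - x1_m) / dt_s

v = lateral_velocity_mps(0.00, 0.04, 0.030)
print(f"lateral velocity ~ {v:.2f} m/s")
```

That works out to about 1.3 m/s, a brisk walking pace, resolved within a single 30 ms frame interval.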
Economics of SuperRADAR
The SuperRADAR concept not only offers an effective way to reduce overall system cost, but also meets performance demands and brings greater value to the end application.
Chris Jacobs said, “What we want is the performance of imaging radar, which today is found only in expensive robotaxi applications, while removing all the expensive hardware and bringing the price down to a level that individual car owners can afford. This is exactly where SuperRADAR comes into play: it delivers twice the performance with the smallest hardware footprint, with software running on that hardware.”
The future of the car
Looking to the future of the automotive space, existing systems may need to be fundamentally rebuilt: today's car platforms are very different from the platforms of the future.
With extensive experience and expertise in these verticals, ADI is uniquely positioned to optimize the radar processing needs of tomorrow's vehicles through a combination of hardware and software products, bringing more value to the end application. These algorithms directly address the current and future total cost of ownership (TCO) challenges facing automakers.
SuperRADAR holds great potential, though it is still at an early, exploratory stage. Not only is this technology a higher-performance, more cost-effective way to advance ADAS, it will ultimately save lives.
About the Author
Chris Jacobs joined Analog Devices in 1995. During his tenure, Chris has held various design engineering, design management, and business leadership roles in the consumer electronics, communications, industrial, and automotive sectors. He is currently vice president of the Autonomous Transportation and Automotive Safety Group at Analog Devices. Chris holds a BS in Computer Engineering from Clarkson University, an MS in Electrical Engineering from Northeastern University, and an MBA from Boston University. Contact: [email protected]