All drivers have experienced some manner of driver monitoring: a spouse asking if you need a break on a long drive, a friend checking whether you're okay to drive, or a companion alerting you to a danger ahead. As helpful as companions may be for pointing out obstacles or potential threats, driver monitoring by another person in the vehicle is impractical. Yet it is in these moments of distraction that we are most inclined to make critical mistakes, which could result in an accident.
According to a European New Car Assessment Programme (Euro NCAP) 2025 roadmap report, an estimated ninety percent of annual road accidents are caused by human error. The report goes on to say that, “In general, two kinds of mistakes can be observed: violations, of which speeding and driving under the influence of alcohol or drugs are most common; and human ‘errors,’ in which the driver state – inattentiveness, fatigue, distraction – and inexperience play an important role. In an aging society, sudden medical incapacitation is also a growing cause of road crashes.”
Drivers, not equipment, are the most common failure point in vehicle-related accidents, prompting further research and Euro NCAP safety initiatives aimed at eliminating common driver errors.
Although semi-autonomous and autonomous operation functions may help reduce driver errors, recent news has shown that unpredictable road conditions and road obstructions (e.g. pedestrians on bicycles, debris) still contribute to vehicle collisions. Until autonomous driving modes of operation can overcome these unknowns, the driver is required to remain in the vehicle control loop.
A vision-based driver monitoring system (DMS) provides a significant level of feedback to advanced driver assistance systems (ADAS), electronic control units (ECUs) and autonomous driving systems to compensate for and help correct common errors introduced by drivers. An example would be a scenario where the driver is distracted and the vehicle alerts the driver or maneuvers to avoid a collision.
DMS solutions comprise one or more cameras equipped with infrared illuminators directed toward the driver, enabling quality images to be captured and processed in suboptimal lighting conditions. A TDA3 automotive processor can stream camera images to a computing unit at 20 to 60 frames per second and run vision algorithms to detect the presence of the driver and key facial markers such as eye and mouth aperture, eye gaze and head position (see Figure 1). Drowsiness, for example, can be detected by the rate at which points surrounding the eyes move up and down, or by the orientation of the head, whether upright or tilted.
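To make the eye-marker idea concrete, here is a minimal sketch of one widely used drowsiness heuristic, the eye aspect ratio (EAR), applied to the kind of eye landmarks a DMS vision algorithm might output. The landmark ordering, threshold and frame count below are illustrative assumptions, not the algorithm running on the TDA3 processor.

```python
# Illustrative only: eye-aspect-ratio drowsiness heuristic.
# Landmark layout (six points around the eye contour) and the
# threshold/frame-count values are assumptions for this sketch.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour.
    The ratio drops toward zero as the eyelid closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])  # eyelid gaps
    horizontal = dist(eye[0], eye[3])                        # eye width
    return vertical / (2.0 * horizontal)

def is_drowsy(ear_history, threshold=0.2, min_closed_frames=15):
    """Flag drowsiness only when the eyes stay closed (low EAR) for a
    sustained run of frames, so that ordinary blinks are ignored."""
    closed = 0
    for ear in ear_history:
        closed = closed + 1 if ear < threshold else 0
        if closed >= min_closed_frames:
            return True
    return False
```

At 30 frames per second, 15 consecutive low-EAR frames corresponds to roughly half a second of eye closure, which is well beyond a normal blink.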
Figure 1: Driver’s face in DMS
As the vision algorithm continuously collects and analyzes key facial markers, it outputs perceptive indicators of the driver’s state such as level of vigilance, fatigue, distraction and visual focus area. These indicators, when analyzed by ADAS ECUs, inform automatic maneuvering and braking decisions that help avoid or mitigate collisions. Additionally, autonomous driving functions can be activated or deactivated depending on the driver’s attentiveness.
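The fusion of these indicators into a graded ADAS response could be sketched as follows. The indicator names, their 0.0–1.0 ranges and the thresholds are hypothetical illustrations, not a TI interface.

```python
# Hypothetical sketch: fusing DMS driver-state indicators into a graded
# ADAS response. Indicator ranges (0.0-1.0) and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class DriverState:
    vigilance: float    # 0 = unresponsive, 1 = fully alert
    fatigue: float      # 0 = rested, 1 = exhausted
    distraction: float  # 0 = eyes on road, 1 = fully distracted

def adas_response(state: DriverState) -> str:
    """Escalate from no action to intervention as driver state degrades."""
    if state.vigilance < 0.2:
        return "intervene"        # e.g. automatic braking or lane hold
    if state.fatigue > 0.7 or state.distraction > 0.7:
        return "warn"             # audible/haptic alert to the driver
    if state.fatigue > 0.4 or state.distraction > 0.4:
        return "monitor_closely"  # tighten ADAS reaction margins
    return "normal"
```

Ordering the checks from most to least severe ensures that a single worst-case indicator, such as near-zero vigilance, always dominates the decision.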
Feedback from the DMS improves the conveniences that autonomous driving offers (such as hands-off-wheel operation) by verifying that autonomous modes are still operating within acceptable boundary conditions, such as driver alertness. If they are not, the system takes the necessary steps to alert the driver that control is being transferred back to them. These systems can also do the opposite in an emergency: transfer control to the autonomous system, which may be able to maneuver the vehicle to a safe resting location if the driver falls asleep or becomes incapacitated.
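This hand-over behavior can be pictured as a small state machine. The state names and transition rules below are assumptions chosen to mirror the description above, not an actual vehicle control specification.

```python
# Illustrative state machine for DMS-driven control hand-over.
# States and transitions are assumptions mirroring the prose above.

def next_control_state(current, driver_attentive, driver_incapacitated):
    """current: one of 'autonomous', 'handover_warning', 'manual',
    'emergency_autonomous'. Returns the next control state."""
    if driver_incapacitated:
        # Autonomous system steers the vehicle to a safe resting location.
        return "emergency_autonomous"
    if current == "autonomous" and not driver_attentive:
        # Alert the driver that control is being transferred back.
        return "handover_warning"
    if current == "handover_warning":
        # An attentive driver resumes control; otherwise fall back to
        # the autonomous system for a safe stop.
        return "manual" if driver_attentive else "emergency_autonomous"
    return current
```

In practice the warning phase would persist for several seconds with escalating alerts before any fallback, but the sketch captures the two directions of transfer the DMS enables.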
The DMS plays a pivotal role in delivering precise, real-time feedback to vehicle steering and control systems. Acting on that feedback efficiently and precisely requires a high-performance compute platform, yet the system must fit within the tight space constraints of the vehicle. Our next blog will provide some guidance on system-level requirements for developing a driver monitoring system.