Smarter In-Car Cameras Can Detect Every Dumb Thing You're Doing While Driving

Instead of just monitoring for fatigue, this advanced camera system even knows when you're on the phone or texting while behind the wheel.

“In addition to the body poses of all passengers, the occupant monitoring system developed by Fraunhofer IOSB also detects activities and associated objects.”
Image: M. Zentsch/Fraunhofer IOSB

Several automakers offer features where a camera inside a vehicle monitors the driver and sets off alerts when it detects them starting to fall asleep, but researchers at the Fraunhofer Institute have developed an even smarter in-car camera system that can figure out exactly what a driver is doing, potentially improving the safety of semi-autonomous driving features.

There isn’t a publicly available self-driving system that can handle every road situation all on its own. Unplanned interruptions like construction or accidents, and even driving through the crowded downtown of a large city, usually require the driver to take the wheel again. But that handoff can be tricky. In an ideal world, the driver of a semi-autonomous vehicle would always be paying attention to the road ahead, ready to step in at a moment’s notice and take over control. But humans will be humans, and there’s always the chance that an autonomous system will try to hand back control while the driver is distracted by a phone call or a sip of coffee, which potentially sets up a dangerous situation.


Those situations are what prompted researchers at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB) to take the capabilities of in-car cameras even further. Instead of simply providing alerts when the driver appears to be falling asleep, the new system uses AI-powered image recognition to construct a digital skeleton of the driver that looks a lot like a stick figure doodle. Although it’s a basic representation of the driver’s current pose, the digital skeleton provides enough detail for the system to interpret exactly what the driver is doing, while additional object recognition keeps tabs on the location of items like smartphones or coffee cups.
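
To get a rough sense of what that pairing could look like in practice, here’s a minimal sketch in Python. The class names, keypoint labels, and distance threshold are hypothetical assumptions for illustration, not Fraunhofer IOSB’s actual implementation: a pose skeleton is treated as a set of labeled keypoints, and an object detection as a labeled position that downstream logic can relate to those keypoints.

```python
# Illustrative sketch only: a simplified stand-in for the kind of output a
# pose-estimation + object-recognition pipeline might produce. All names and
# thresholds here are assumptions, not Fraunhofer IOSB's actual API.
from dataclasses import dataclass

@dataclass
class Keypoint:
    name: str        # e.g. "right_wrist", "nose"
    x: float         # normalized image coordinates, 0.0-1.0
    y: float
    confidence: float

@dataclass
class DetectedObject:
    label: str       # e.g. "smartphone", "coffee_cup"
    x: float         # center of the object's bounding box, normalized
    y: float
    confidence: float

# A "digital skeleton" is just the list of keypoints for one occupant,
# which downstream logic can compare against detected object positions.
def object_near_hand(skeleton: list[Keypoint], obj: DetectedObject,
                     max_dist: float = 0.08) -> bool:
    """Return True if the object sits close to either wrist keypoint."""
    for kp in skeleton:
        if kp.name in ("left_wrist", "right_wrist") and kp.confidence > 0.5:
            if ((kp.x - obj.x) ** 2 + (kp.y - obj.y) ** 2) ** 0.5 < max_dist:
                return True
    return False
```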

When the two systems are paired, a vehicle is able to determine whether a driver is paying attention to the road or distracted by other activities like texting, eating, or even interacting with other passengers. By keeping tabs on what the driver is doing, a semi-autonomous driving system can gauge how distracted they are, estimate how long it will take them to return their focus to driving, and take that into account before handing control of the vehicle back to them.
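
A handoff decision along those lines might be as simple as mapping each detected activity to a rough “time to refocus” and only returning control when there’s enough margin. The activity labels and timing values below are illustrative assumptions, not figures from the Fraunhofer IOSB system.

```python
# Hypothetical sketch: combine the detected activity with the time available
# before the driver must take over, and decide whether a handoff is safe.

# Estimated seconds a driver needs to refocus, per detected activity
# (assumed values for demonstration only).
REFOCUS_TIME_S = {
    "hands_on_wheel": 0.0,
    "drinking_coffee": 2.0,
    "talking_to_passenger": 3.0,
    "texting": 6.0,
}

def can_hand_over(activity: str, seconds_until_takeover_needed: float) -> bool:
    """Decide whether the automated system should return control now."""
    refocus = REFOCUS_TIME_S.get(activity, 8.0)  # unknown activity: assume worst
    return refocus <= seconds_until_takeover_needed

# Example: a texting driver shouldn't get control back with only 3 s of margin.
print(can_hand_over("texting", 3.0))          # False
print(can_hand_over("drinking_coffee", 3.0))  # True
```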


Making semi-autonomous driving safer isn’t the only application of this technology. Many vehicles are able to park themselves, but a driver can’t simply use a voice command like “park over there” and expect the car to find the spot all by itself. A vague voice command like that is missing a lot of important context, but paired with a camera system that can tell where a driver is looking, and potentially even where they’re pointing, those extra cues can be taken into account, making it possible to work out exactly which parking spot the driver is vaguely referring to. On top of that, the smarter camera systems could even assist seat belt detection systems that, at the moment, simply determine if the buckle is secured, not whether a driver or passenger is wearing the belt properly.
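
Resolving “park over there” could, in a very simplified form, come down to matching the driver’s pointing direction against the bearings of free parking spots. The geometry, tolerance, and spot data below are assumptions for demonstration only, not how the research system actually works.

```python
# Illustrative sketch: pick the free parking spot whose bearing (relative to
# the car) best matches the direction the driver is pointing or looking.
def resolve_pointed_spot(pointing_angle_deg: float,
                         spots: list[tuple[str, float]],
                         tolerance_deg: float = 15.0) -> str | None:
    """spots is a list of (spot_id, bearing_deg relative to the car's heading).
    Return the spot closest to the pointing direction, or None if none is
    within the angular tolerance."""
    best_id, best_diff = None, tolerance_deg
    for spot_id, bearing in spots:
        # Wrap-around angular difference in degrees.
        diff = abs((bearing - pointing_angle_deg + 180) % 360 - 180)
        if diff < best_diff:
            best_id, best_diff = spot_id, diff
    return best_id

# Example: driver points roughly 35 degrees to the right of straight ahead.
free_spots = [("A3", 40.0), ("B1", -25.0), ("C7", 95.0)]
print(resolve_pointed_spot(35.0, free_spots))  # "A3"
```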