In a nutshell: Artificial intelligence researchers have devised a method of hacking passwords by listening to users type on a keyboard. They showed their AI algorithm can learn to recognize individual keys by the sound each one makes when struck. Testing with multiple recording sources revealed the technique is highly accurate.

Durham University researchers in the UK have developed a deep-learning model that malicious actors could use to steal passwords remotely. The researchers trained the AI on the sounds of characters typed on keyboards from various distances and angles, creating a sound profile for each key. They tested the model using multiple methods, all producing accuracy results above 90 percent.
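
A first step in any pipeline like this is isolating each keypress from a longer recording so every key gets its own training samples. Below is a minimal, hypothetical Python sketch of that step using onset detection in librosa; the file name, clip length, and detector settings are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch: split a keystroke recording into per-key clips.
# Assumes "typing.wav" exists and individual keystrokes are clearly audible.
import numpy as np
import librosa

SR = 44_100                 # sample rate to resample the recording to
CLIP_LEN = int(0.33 * SR)   # ~330 ms window per keystroke (assumed)

def extract_keystrokes(path: str) -> list[np.ndarray]:
    y, sr = librosa.load(path, sr=SR, mono=True)
    # Detect keystroke onsets from the recording's energy envelope.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples", backtrack=True)
    clips = []
    for start in onsets:
        end = start + CLIP_LEN
        if end <= len(y):
            clips.append(y[start:end])  # one fixed-length clip per keypress
    return clips

if __name__ == "__main__":
    clips = extract_keystrokes("typing.wav")
    print(f"isolated {len(clips)} keystroke clips")
```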

The most accurate technique used a smartphone's microphone to "listen" to someone tapping away on a MacBook Pro. Besides topping out at 95 percent accuracy, it is also the easiest way for a hacker to log a target's keystrokes. Imagine it being used in a coffee shop, for example.

"When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95 percent, the highest accuracy seen without the use of a language model," the study reads.

The team also tested the model over the videoconferencing apps Zoom and Skype, whose use has risen dramatically in hybrid work scenarios. The AI was 93 percent accurate when monitoring Zoom calls and 92 percent with Skype.

The model records the patterns and differences of each keypress on a keyboard. For example, the lowercase 'k' keystroke sounds slightly different than the capital 'K' (shift+K). These subtle pattern differences, coupled with keystroke timing and loudness (which varies with striking force and the microphone's proximity), allow the AI to make educated guesses at the typed keys.
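
To make that concrete, here is a hedged sketch of how such a classifier could be wired up in Python: each keystroke clip is converted to a mel-spectrogram and passed to a small convolutional network that outputs a guess over possible keys. This is a simplified stand-in rather than the architecture from the paper, and the layer sizes, label set, and clip length are illustrative assumptions.

```python
# Hypothetical sketch: classify keystroke clips from their mel-spectrograms.
# The network below is a simplified stand-in, not the paper's architecture.
import torch
import torch.nn as nn
import torchaudio

SR = 44_100
N_KEYS = 36  # assumed label space: a-z plus 0-9

class KeystrokeNet(nn.Module):
    def __init__(self, n_keys: int = N_KEYS):
        super().__init__()
        # Turn raw keystroke audio into a mel-spectrogram "image".
        self.to_mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=SR, n_fft=1024, hop_length=256, n_mels=64
        )
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_keys)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, samples) raw audio of a single keystroke
        spec = self.to_mel(clip).unsqueeze(1)   # (batch, 1, mels, frames)
        spec = torch.log1p(spec)                # compress dynamic range
        feats = self.features(spec).flatten(1)  # (batch, 32)
        return self.classifier(feats)           # logits over possible keys

if __name__ == "__main__":
    model = KeystrokeNet()
    fake_clip = torch.randn(1, int(0.33 * SR))  # one ~330 ms keystroke clip
    probs = model(fake_clip).softmax(dim=-1)
    print("most likely key index:", int(probs.argmax()))
```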

The student researchers attribute the AI's precision to advancements in the quality of recording equipment over the last decade and to the growing number of microphones within earshot of computing devices in everyday settings.

The one caveat is that accuracy falls off dramatically when the model analyzes keystrokes on a keyboard it was not trained on, which makes sense: not all keyboards are made equal, and each has a unique sound profile. Of course, further training on a wide variety of keyboards and laptops could greatly improve the model's accuracy across devices over time.

Mitigation for these types of attacks is limited. The researchers mainly suggest varying your typing style; they noted that touch typing cut the model's keystroke recognition from 64 percent to 40 percent. More complicated passwords help too: the team notes that passwords with several case switches (upper and lower) tend to foul up the AI's guesswork.
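
As a toy illustration of that last point, the snippet below builds a random password that alternates case on every character; the length and alphabet are arbitrary choices, not recommendations from the study.

```python
# Hypothetical illustration: a random password with frequent case switches.
import secrets
import string

def case_switching_password(length: int = 16) -> str:
    chars = []
    upper = secrets.choice([True, False])
    for _ in range(length):
        letter = secrets.choice(string.ascii_lowercase)
        chars.append(letter.upper() if upper else letter)
        upper = not upper  # flip case on every keystroke
    return "".join(chars)

print(case_switching_password())
```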

The study has not been peer-reviewed yet, but a pre-print version titled "A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards" is up on Cornell University's arXiv for those interested in the full details.