
AI Weekly: Qualcomm’s AI research and development efforts

A sign on the Qualcomm campus is seen in San Diego, California, U.S. November 6, 2017.
Image Credit: REUTERS/Mike Blake

This week marked the start of the International Conference on Learning Representations (ICLR) 2021, an event dedicated to research in deep learning, a subfield of AI inspired by the structure of the brain. One of the world’s largest machine learning conferences, ICLR accepted 860 research papers out of thousands of submissions this year, up from 687 papers in 2020.

One of the participating researchers is Jilei Hou, VP of engineering at Qualcomm. He heads up Qualcomm’s AI Research division, which focuses on advancing AI to bring its core capabilities — including perception, reasoning, and action — to Qualcomm’s portfolio of hardware products. Together with his colleagues at the company, Hou presented new papers at ICLR in the areas of power and energy efficiency, computer vision, natural language processing, and machine learning fundamentals.

Qualcomm’s research, while in some cases preliminary, is impactful by virtue of the company’s market footprint. In the second quarter of 2020, Qualcomm accounted for 32% of global smartphone application processor revenue, according to Statista. And as of January 2017, the company had shipped more than a billion chips for the internet of things alone.

Improved efficiency

An important research direction for Qualcomm is representation learning, which would allow AI systems to learn with high data efficiency and generalize well to new data. At ICLR, Hou detailed the company’s work in unsupervised learning, where an algorithm is presented with “unknown” data for which previously defined categories or labels don’t exist. The machine learning system must teach itself to classify the data, processing the unlabeled examples to learn from their inherent structure.
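
For a concrete, if toy, picture of what unsupervised learning means in practice, the sketch below groups unlabeled points purely by their structure. The data, the clustering method, and the cluster count are illustrative assumptions for the example and have nothing to do with Qualcomm's models.

```python
# Toy illustration of unsupervised learning: group unlabeled data by its
# inherent structure. Generic sketch, not Qualcomm's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabeled data: two blobs, with no category labels attached to any point.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

# The model assigns each point to a group using only the data itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:5], clusters[-5:])
```

No labels are ever provided; the grouping emerges from the structure of the data alone, which is the point the paragraph above describes.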


Hou says his team achieved state-of-the-art performance with end-to-end learning for video compression, a key use case for Qualcomm’s mobile device customers. Beyond this, he and coauthors have explored “neural augmentation,” or the concept that classical algorithms and neural network architectures can be combined to incorporate scientific knowledge.

Neural augmentation is essentially the marriage of neural networks and symbolic AI, which involves embedding facts and behavior rules into models. Unlike neural networks, which learn mappings from inputs to outputs, symbolic AI can explicitly encode knowledge or programs. The neural networks, in turn, help identify subtle patterns that may be too complex to model explicitly.
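
A minimal sketch of the neural-augmentation idea follows, under the assumption that a hand-designed ("classical") model captures most of the behavior and a small network learns only the residual it misses. The toy task, the model names, and the PyTorch setup are illustrative, not drawn from Qualcomm's papers.

```python
# Hedged sketch of neural augmentation: a classical, hand-designed estimate
# plus a small neural network that learns only the residual correction.
import torch
import torch.nn as nn

def classical_model(x):
    # Stand-in for a hand-designed algorithm (e.g., a simple linear model).
    return 2.0 * x

class ResidualNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        # Final prediction = classical estimate + learned correction.
        return classical_model(x) + self.net(x)

# True process: the classical model plus a subtle nonlinearity to be learned.
x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = 2.0 * x + 0.3 * torch.sin(3.0 * x)

model = ResidualNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.5f}")
```

Because the network only has to learn the correction term, it can stay small, which is one intuition behind the compactness and efficiency benefits described next.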

Hou believes that neural augmentation could yield more compact neural network models and more efficient training. His team has already seen success within the areas of wireless, multimedia, and systems design.

More recently, Hou and colleagues investigated using machine learning as a design methodology for combinatorial optimization problems like vehicle traffic routing and chip design placement. They claim to have trained specialized models with unlabeled data and reinforcement learning, which deals with learning via interaction and continuous feedback. “We believe that the intersection of machine learning and combinatorial optimization will produce profound interest in the machine learning research community, as well as toward industrial impact,” Hou told VentureBeat.
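
To make the reinforcement learning angle concrete, here is a heavily simplified sketch: learnable per-node priorities for a five-point routing instance are trained with REINFORCE against negative tour length. The static priorities, the softmax sampling rule, and the tiny instance are all assumptions for the example, not the method in Qualcomm's paper.

```python
# Toy sketch of RL for a combinatorial problem: learn node priorities for a
# tiny routing instance via REINFORCE (reward = negative tour length).
import torch

coords = torch.tensor([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 1.5]])
scores = torch.zeros(len(coords), requires_grad=True)  # learnable priorities
opt = torch.optim.Adam([scores], lr=0.1)

def sample_tour(scores):
    # Sample a visiting order by repeatedly drawing from a softmax
    # over the priorities of the not-yet-visited nodes.
    remaining = list(range(len(scores)))
    tour, logp = [], 0.0
    while remaining:
        probs = torch.softmax(scores[remaining], dim=0)
        idx = torch.multinomial(probs, 1).item()
        logp = logp + torch.log(probs[idx])
        tour.append(remaining.pop(idx))
    return tour, logp

def tour_length(tour):
    pts = coords[tour + [tour[0]]]          # close the loop back to the start
    return (pts[1:] - pts[:-1]).norm(dim=1).sum()

baseline = None
for _ in range(300):
    tour, logp = sample_tour(scores)
    reward = -tour_length(tour)             # shorter tours earn higher reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    loss = -(reward - baseline).detach() * logp   # REINFORCE with a baseline
    opt.zero_grad()
    loss.backward()
    opt.step()
print("learned tour:", sample_tour(scores)[0])
```

The feedback loop is the essential ingredient: the policy proposes a solution, receives a score, and is nudged toward solutions that score better.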

Computer vision and data privacy

In the computer vision domain, several of Hou’s projects target segmentation. Object segmentation is used in tasks ranging from swapping out the background of a video chat to helping robots navigate a factory floor. But it’s considered among the hardest challenges in computer vision because it requires an AI to understand what’s in an image.
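
For readers who haven't worked with segmentation, the sketch below shows what the output looks like: a per-pixel class map from which, say, the "person" pixels can be separated from the background. It uses an off-the-shelf torchvision model with random weights purely to illustrate the shapes involved (and assumes a recent torchvision); it is not one of the models described in Qualcomm's papers.

```python
# Illustrative segmentation output: a class label for every pixel in a frame.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=21).eval()
frame = torch.rand(1, 3, 240, 320)          # one RGB video frame
with torch.no_grad():
    logits = model(frame)["out"]            # (1, 21, 240, 320): per-pixel class scores
mask = logits.argmax(dim=1)                 # (1, 240, 320): predicted class per pixel
person_pixels = (mask == 15)                # class 15 is "person" in the VOC label set
print(mask.shape, person_pixels.float().mean().item())
```

With trained weights, that boolean mask is what an app uses to cut a person out of a video-call background, frame after frame, which is why both speed and frame-to-frame consistency matter on a phone.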

A Qualcomm-authored paper details improvements in the accuracy of segmentation, and another describes the fastest video segmentation to date on Qualcomm’s Snapdragon chipsets. Hou and colleagues also created a model that improves the consistency of segmentation while allowing fine-tuning on a mobile device.

One of the ways Hou aims to attain performance gains is through neural architecture search (NAS) techniques. NAS automatically identifies top model architectures for a task by evaluating candidate models’ overall performance, dispensing with manual architecture design. In a complementary effort, Hou says Qualcomm is investing in personalization and federated learning technologies that allow neural network models to continually learn on-device while keeping data with users, in the interests of privacy.
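
As a rough illustration of the NAS idea, the sketch below runs a random-search-style loop: briefly train each candidate architecture from a tiny search space, score it on held-out data, and keep the best. The search space, task, and training budget are assumptions for the example and are far simpler than the techniques Qualcomm employs.

```python
# Toy NAS loop: evaluate candidate architectures and pick the best performer,
# rather than hand-designing a single network.
import torch
import torch.nn as nn

x_train = torch.randn(256, 8)
y_train = (x_train.sum(1, keepdim=True) > 0).float()
x_val = torch.randn(128, 8)
y_val = (x_val.sum(1, keepdim=True) > 0).float()

def build(depth, width):
    # Candidate architecture: `depth` hidden layers of size `width`.
    layers, d_in = [], 8
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.ReLU()]
        d_in = width
    layers.append(nn.Linear(d_in, 1))
    return nn.Sequential(*layers)

def evaluate(model, epochs=30):
    # Briefly train, then score the candidate on held-out data.
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x_train), y_train).backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(x_val), y_val).item()

search_space = [(d, w) for d in (1, 2, 3) for w in (8, 32)]
results = {(d, w): evaluate(build(d, w)) for d, w in search_space}
best = min(results, key=results.get)
print("best architecture (depth, width):", best)
```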

“The mission of Qualcomm AI Research is to make AI ubiquitous,” Hou said. “Qualcomm AI Research is taking a holistic approach to model efficiency research via research efforts in quantization, compression, NAS, and compilation … By creating these projects and making it easy for developers to use them, we are empowering the ecosystem to run complex AI workloads efficiently. [They’re] already helping the wider AI ecosystem and having real-world impact on a variety of industry verticals.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer
