How to Drive Competitive Advantage from Next-Gen Computing

BrandPost By Romain Groleau and Jai Bagmar
Jun 30, 2022
Artificial Intelligence | Machine Learning | Quantum Computing

Businesses and organizations can harness advances in high-performance computers and machines to handle next generation workloads.

Credit: Accenture

We’ve seen accelerated growth and maturation of digital businesses. First came those driven by cloud, mobile, and advanced security. Then came the arrival of 5G, edge, and the Internet of Things (IoT). Now, it’s the metaverse.

This rapid growth opens a whole universe of opportunities. But it also introduces a new set of challenges for the enterprise’s IT infrastructure, not least the need for more powerful tools to process workloads and data faster and more efficiently. For instance, IDC found that 84 ZB of data was created, captured, or replicated in 2021, but only around 10% of it was suitable for analysis or for artificial intelligence (AI) and machine learning (ML) models[1], and only about 44% of that usable data (roughly 3.7 ZB) was actually used[2].

What does this tell us? That businesses are failing to capture the full value of their data. This challenge will become more urgent, as IDC predicts the amount of data created will grow to 221 ZB by 2026[3].

We’ll look at some of the ways organizations can unlock value from huge data volumes. This is key to gaining a competitive advantage in the post-digital era.

High-performance computing and supercomputing

The answer to this massive data conundrum is found in high-performance computing (HPC), more colloquially known as supercomputing. As the technology matures, many companies are replacing older CPUs with newer chip architectures, such as GPUs and field-programmable gate arrays (FPGAs).

This has several advantages. For instance, GPUs are often more energy-efficient than CPUs for data-intensive workloads because their memory architecture is designed to stream data at high speed to thousands of parallel cores. This helps companies work towards meeting their sustainability goals.
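
To make the idea concrete, here is a minimal sketch of offloading a data-parallel workload from the CPU to a GPU, assuming a CUDA-capable GPU and the open-source CuPy library are available. It illustrates the general pattern rather than serving as a benchmark.

    # A minimal sketch (not from this article's sources) of offloading a
    # data-parallel workload to a GPU. Assumes a CUDA-capable GPU and CuPy.
    import numpy as np
    import cupy as cp

    # CPU baseline: element-wise math over a large array with NumPy
    x_cpu = np.random.rand(10_000_000).astype(np.float32)
    y_cpu = np.sqrt(x_cpu) * 2.0

    # GPU version: the same computation streamed through GPU memory with CuPy
    x_gpu = cp.asarray(x_cpu)          # copy the data to GPU memory
    y_gpu = cp.sqrt(x_gpu) * 2.0       # executed on the GPU's parallel cores
    result = cp.asnumpy(y_gpu)         # copy the result back to the host

    assert np.allclose(y_cpu, result, atol=1e-5)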

Meanwhile, FPGAs offer high computing power at a low cost, along with greater scalability to cope with massive data volumes. As an example, Intel FPGAs are being used to improve the throughput, response time, and energy efficiency of 5G applications, HPC, and advanced driver assistance systems (ADAS). They’re a real game-changer for edge computing.

Specialized AI and ML services

Each major cloud platform offers enterprise customers a long list of specialized AI and ML services, along with CPUs, GPUs, and FPGAs designed for HPC. Mastercard, for instance, uses ML algorithms on HPC systems to detect anomalies and identify fraud[4]. It processes 165 million transactions per hour and applies 1.9 million rules to examine each one, all within a matter of seconds.
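
As a generic illustration of the technique (not Mastercard’s actual system), the sketch below trains an unsupervised anomaly detector on transaction features with scikit-learn’s IsolationForest. The feature names, data, and thresholds are assumptions made for the example.

    # A hedged, generic sketch of ML-based transaction anomaly detection.
    # Feature names and data are illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic transaction features: [amount, hour_of_day, merchant_risk_score]
    normal = np.column_stack([
        rng.lognormal(3.5, 0.8, 10_000),   # typical purchase amounts
        rng.integers(8, 22, 10_000),       # daytime transactions
        rng.uniform(0.0, 0.3, 10_000),     # low-risk merchants
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # Score new transactions: -1 flags a likely anomaly, 1 looks normal
    new_txns = np.array([
        [40.0, 14, 0.1],       # ordinary purchase
        [9_500.0, 3, 0.9],     # large amount, 3 a.m., risky merchant
    ])
    print(model.predict(new_txns))   # e.g. [ 1 -1 ]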

If you’re looking for help, there are companies that specialize in enabling enterprises to transform and unlock competitive advantage with AI through a holistic approach spanning people, process, technology, and data science. Most enterprises, for example, lean on an HPC advisor to launch their first cloud-based HPC project and avoid unnecessary cost escalation[4].

Quantum and beyond

The single biggest watershed moment for computing will be when quantum computers solve problems that were previously considered intractable. In other words, they’ll make the impossible possible.

Computing infrastructures will move beyond data processing and problem-solving to become increasingly customized. For instance, some social platforms have designed application-specific integrated circuit (ASIC) infrastructure tailored to their own apps. Manufacturers are also co-innovating with industry leaders to develop sensors for IoT and edge scenarios. We’ve even seen the emergence of specialized bitcoin-mining hardware[4].

The Tesla Dojo supercomputer is another good example of the direction of travel from here. Having amassed vast amounts of driving data from its cars, Tesla was on the lookout for an efficient infrastructure that could handle it all. Because existing off-the-shelf chips couldn’t meet the company’s requirements, it set out to design the D1 Dojo chip, built specifically to run the computer vision neural networks that underpin Tesla’s self-driving technology[5].

In a similar fashion, enterprises can expect new technologies to combine different architectures. For instance, as quantum computing evolves, it may require an integrated hardware approach using conventional hardware such as traditional CPUs to enable qubits—or quantum bits—to be controlled, programmed, and read out.
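
As a small illustration of that classical-quantum interplay, the sketch below assumes the open-source Qiskit and qiskit-aer packages and uses an ordinary Python program running on a conventional CPU to build, program, and read out a two-qubit circuit (on a simulator here, not real quantum hardware).

    # A minimal sketch of classical code controlling and reading out qubits.
    # Assumes the open-source qiskit and qiskit-aer packages; uses a simulator.
    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    # Classical code "programs" the qubits: entangle two qubits (a Bell pair)
    circuit = QuantumCircuit(2, 2)
    circuit.h(0)                       # put qubit 0 into superposition
    circuit.cx(0, 1)                   # entangle qubit 1 with qubit 0
    circuit.measure([0, 1], [0, 1])    # read the qubits out into classical bits

    # Conventional hardware runs the job and collects the classical results
    result = AerSimulator().run(circuit, shots=1024).result()
    print(result.get_counts())         # roughly half '00' and half '11'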

Thanks to the evolution of AI chips, edge computing and endpoint devices can handle complex AI applications like streaming video analysis, industrial automation, and office automation. AI applications are compute-intensive, so general-purpose CPUs alone often aren’t powerful enough; AI chips such as GPUs, FPGAs, and ASICs are needed for inferencing, training, and a variety of specialized workloads[6].
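
In practice, frameworks make it straightforward to target whichever accelerator is present. The sketch below assumes PyTorch is installed and uses a deliberately tiny placeholder model to show inference running on a GPU when one is available, with a CPU fallback.

    # A minimal sketch of running inference on an AI accelerator when present.
    # Assumes PyTorch; the model is a tiny placeholder, not a real workload.
    import torch

    # Prefer a CUDA GPU if one is available, otherwise fall back to the CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Placeholder model standing in for a trained network
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 2),
    ).to(device).eval()

    # Run a batch of inputs through the model on the selected device
    batch = torch.randn(32, 128, device=device)
    with torch.no_grad():
        scores = model(batch)
    print(scores.shape, "computed on", device)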

The skills that make it happen

Enterprises are already facing a technology skills shortage. Many now have fewer people, as the introduction of cloud-optimized operating models has led to smaller infrastructure teams. As companies embrace next-gen tech, talent scarcity may become even more acute.

From now on, enterprises will have to invest in skill sets that are uniquely designed to handle their infrastructures. This is imperative if they’re to take advantage of the differentiation that next-gen computing can provide.

Getting started

So, how can organizations get started on the journey to unlock competitive advantage from next-gen computing? We recommend:

  • Building a knowledge graph (for managerial level and upwards) of next-gen computing technologies, the workloads they support, and the value they bring to the enterprise (a minimal sketch follows this list)
  • Ensuring close collaboration between the business and the CIO/CTO team as they build the knowledge graph and run ideation workshops, which is key to identifying the best use cases
  • Developing proofs of concept for these use cases, and moving fast to scale the ones that are most successful and impactful
  • Developing skills in the IT organization to identify and incubate next-gen technologies, work with ecosystem partners to leverage existing solutions, and co-create tailored industry-specific solutions
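
A knowledge graph can start as a simple set of typed relationships linking technologies to the workloads they support and the value they bring. The sketch below assumes the open-source networkx library; the entries are illustrative examples drawn from this article, and a real graph would be curated with the business.

    # A minimal sketch of a next-gen computing knowledge graph.
    # Assumes the open-source networkx library; entries are illustrative only.
    import networkx as nx

    kg = nx.DiGraph()

    # technology -> workload it supports
    kg.add_edge("GPU", "AI/ML training and inference", relation="supports")
    kg.add_edge("FPGA", "5G and edge processing", relation="supports")
    kg.add_edge("Quantum computing", "Previously intractable optimization", relation="supports")

    # workload -> enterprise value it brings
    kg.add_edge("AI/ML training and inference", "Fraud detection at scale", relation="enables")
    kg.add_edge("5G and edge processing", "Lower latency and better energy efficiency", relation="enables")

    # Query the graph: what value does a given technology ultimately unlock?
    for _, workload in kg.out_edges("GPU"):
        for _, value in kg.out_edges(workload):
            print(f"GPU -> {workload} -> {value}")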

————–

About the authors

Romain Groleau is a Managing Director at Accenture and the Cloud First Sales and Solution Lead for Asia Pacific and Africa. LinkedIn: https://www.linkedin.com/in/romaingroleau/ | Email: romain.groleau@accenture.com

Jai Bagmar is a Cloud Research Manager at Accenture.

LinkedIn: https://www.linkedin.com/in/jai-bagmar-7709066/ | Email: jai.bagmar@accenture.com

The authors would like to thank Accenture Research Specialist Swati Sah for her contributions to this research.


[1] © Copyright IDC. Worldwide Global DataSphere Forecast, 2022-2026: Enterprise Organizations Driving Most of the Data Growth, May 2022.
[2] © Copyright IDC. Worldwide Global DataSphere Volume of Data Analyzed and Fed into AI Forecast, 2021-2025, August 2021.
[3] © Copyright IDC. Worldwide Global DataSphere Forecast, 2022-2026: Enterprise Organizations Driving Most of the Data Growth, May 2022.
[5] Raden, N. (2021, September 28). Tesla’s Dojo supercomputer – sorting out fact from hype. Diginomica: https://diginomica.com/teslas-dojo-supercomputer-sorting-out-fact-hype
[6] © Copyright Forrester Research, Inc. Optimize Your Artificial Intelligence Infrastructure With Processing Gravity, November 10, 2021.