How is Generative AI Forcing Software Development to Evolve?

AI’s revolution is upon us. Here’s what it means for IT leaders.

September 25, 2023

Software Development In The Age Of Generative AI

Generative AI has taken the world by storm with its ability to create useful content that can make our lives easier. IT leaders need to ensure technology is used responsibly as it becomes more ingrained in the workplace, says Tim Jones of Advanced.

Since its launch in November 2022, ChatGPT has polarized nearly every sector with debates over what is right and wrong regarding artificial intelligence (AI). The latest version, GPT-4, boasts even greater capabilities, and the OpenAI product is far from the only generative AI tool people use. 

According to the World Economic Forum’s Future of Jobs Report, nearly 75% of organizations globally are likely to adopt AI in the next five years. Like it or not, AI is going to have an impact.

Will it redefine how we work, communicate, and interact with technology forever? And what risks do we face in tapping into its promise?

As the IT industry evolves and matures, we must address these concerns head-on and take a responsible, vendor-neutral approach to using generative AI. At its core, generative AI has the potential to revolutionize the way we live and work, but many are skeptical. A survey by the Pew Research Center shows that 32% of workers believe AI will hurt more than it will help in the workplace, and another 32% believe it will hurt and help equally.

To ensure we can harness AI’s full potential, we need to approach this technology thoughtfully and deeply understand the risks and benefits. This is especially important in software development, where technology evolved quickly even before AI’s coming out party. By doing so, we can help shape the future of the IT industry and ensure that generative AI becomes a force for good in our world.

Identifying Potential

We’ve already seen proof that generative AI can be a transformative asset. It can write and debug code, analyze data, and more. This technology can serve as an excellent assistant for software development apprentices, taking on some of the more menial tasks throughout the day, like testing and analysis. Although powerful, the technology shouldn’t be seen as a threat to jobs in the field.

We’ll still need human programmers, developers, and testers, but generative AI may ultimately change some roles and stand to make development teams more efficient. For example, according to Stanford University’s Artificial Intelligence Index Report, 88% of developers using GitHub’s Copilot tool reported efficiency gains, and 74% said the tool allowed them to focus on more satisfying work. The report also highlighted an experiment where developers who used the tool completed their task in 56% less time than those who did not.

Even though we don’t know what the next chapter of generative AI looks like, decision-makers expect it to be a big deal. A KPMG survey shows that 78% of U.S. executives are convinced that generative AI will unleash a significant and even transformative impact on driving innovation. Brace yourself for the rise of a new influential role in the corporate world: Chief AI Officer. The emerging CAIO position is set to spearhead the future of organizations by taking the reins on how they harness the power of this rapidly evolving technology. The effect of generative AI is so profound that it will even make its mark on the highest decision-making levels, redefining how businesses operate.


Understanding Risks and Implications

It’s tempting to dive in head first and use generative AI as much as possible. Unfortunately, it’s not that easy. Innovations are happening so quickly that there is real concern about privacy and safety issues. An open letter calling for a six-month pause on AI experimentation has gathered more than 27,000 signatures.

Some of AI’s leaders, including “godfather of AI” Geoffrey Hinton and OpenAI CEO Sam Altman, warn that AI poses a “risk of extinction … on par with pandemics and nuclear wars.”

While this is a bold statement for AI creators to make, and there’s no putting this genie back in the bottle, we need to account for the risks and implications this technology brings as we move forward. Inaccuracies, which AI developers call “hallucinations,” have plagued these models and given skeptics a reason to fear misleading, potentially harmful content, even if created unintentionally.

Data privacy has been another big worry as developers input proprietary code, which the AI’s large language model may use as training data in the future. Samsung developers were recently caught using ChatGPT to debug code after the company lifted a ban on the tool.

OpenAI introduced the ability for users to disable chat history, thereby keeping their data from being used as training fodder. This is all moving quickly, and as these issues arise, platforms like OpenAI will need to provide solutions to a Whac-a-Mole of ethical considerations that users will continue to have.


Taking a Responsible Approach

OpenAI gets the majority of the headlines, but other platforms like Google’s Bard and Microsoft’s Bing Chat (which uses ChatGPT) are big players, too. In other media, Midjourney produces images, Wonder Studio generates video, and Nvidia’s Magic3D creates 3D renderings. With so many companies and platforms in the landscape, we need broad oversight of these new technologies.

Altman recently went to Capitol Hill, asking legislators to start taking action. His hearing was unlike those of most tech leaders, who are typically on the defensive as lawmakers grill them. Altman recognizes that regulation is the best way to ensure AI is developed and used responsibly, regardless of which vendor produces it.

Successful regulation would be enforced ubiquitously and include firm penalties for non-compliance. It should also be flexible enough to adapt as technology and risk evolve. Regulation should include accountability by the vendor for the content it generates, ethical guidelines for using such systems, and human oversight to ensure quality standards are met. It should also require rigorous testing before deployment to ensure user safety is a priority.

So much work has already been done behind the scenes. Years of innovation went into producing these revolutionary models, and there’s plenty more to do to ensure generative AI is used with good intentions. Industry experts, policymakers, and IT leaders must collaborate to truly embrace this life-changing technology and realize its promise instead of avoiding it and hoping it is a mere fad.

How is generative AI poised to transform industries, and why is responsible adoption crucial? Let us know on Facebook, X, and LinkedIn. We’d love to hear from you!

Tim Jones

Managing Director, Application Modernization, Advanced

Tim is Managing Director of Application Modernization at Advanced. In his role, he helps organizations maximize their investment in critical legacy applications through transformation to modern operating environments, ensuring they remain competitive and ready to take advantage of new and emerging technologies. With 20+ years of IT experience, Tim has a strong track record in business growth and in developing high-performing teams positioned for success.