
GPT-3 comes to the enterprise with Microsoft’s Azure OpenAI Service

Azure OpenAI Service uses GPT-3 to convert transcripts of live television commentary during a women’s basketball game into short game summaries.
Image Credit: Microsoft

During its Ignite conference this week, Microsoft unveiled the Azure OpenAI Service, a new offering designed to give enterprises access to OpenAI’s GPT-3 language model and its derivatives along with security, compliance, governance, and other business-focused features. Initially invite-only as a part of Azure Cognitive Services, the service will allow access to OpenAI’s API through the Azure platform for use cases like language translation, code generation, and text autocompletion.

According to Microsoft corporate VP for Azure AI Eric Boyd, companies can leverage the Azure OpenAI Service for marketing purposes, like helping teams brainstorm ideas for social media posts or blogs. They could also use it to summarize common complaints in customer service logs or to assist developers with coding by minimizing the need to stop and search for examples.
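
To make the summarization use case concrete, here is a minimal sketch of what such a call might look like using the openai Python client's Completion interface pointed at an Azure resource. The endpoint URL, API version string, and deployment name are illustrative placeholders, not values drawn from the article.

```python
import openai

# Assumed Azure configuration for the openai client (all values are placeholders).
openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # hypothetical Azure resource endpoint
openai.api_version = "2021-11-01-preview"                  # assumed preview API version
openai.api_key = "AZURE_OPENAI_KEY"                        # key issued through Azure, not openai.com

complaints = [
    "The mobile app logs me out every time I switch networks.",
    "Shipping took three weeks and support never replied to my ticket.",
]

# Build a plain-text instruction followed by excerpts from the complaint log.
prompt = (
    "Summarize the common themes in these customer complaints:\n\n"
    + "\n".join(f"- {c}" for c in complaints)
    + "\n\nSummary:"
)

# `engine` names the model deployment inside the Azure resource (hypothetical here).
response = openai.Completion.create(
    engine="complaint-summarizer",
    prompt=prompt,
    max_tokens=60,
    temperature=0.2,
)
print(response.choices[0].text.strip())
```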

“We are just in the beginning stages of figuring out what the power and potential of GPT-3 is, which is what makes it so interesting,” he added in a statement. “Now we are taking what OpenAI has released and making it available with all the enterprise promises that businesses need to move into production.”

Large language models

Built by OpenAI, GPT-3 and its fine-tuned derivatives, like Codex, can be customized to handle applications that require a deep understanding of language, from converting natural language into software code to summarizing large amounts of text and generating answers to questions. People have used it to automatically write emails and articles, compose poetry and recipes, create website layouts, and generate code for deep learning in a dozen programming languages.


GPT-3 has been publicly available since 2020 through the OpenAI API; OpenAI has said that GPT-3 is now being used in more than 300 different apps by “tens of thousands” of developers and producing 4.5 billion words per day. But according to Microsoft corporate VP of AI platform John Montgomery, who recently spoke with VentureBeat, the Azure OpenAI Service enables companies to deploy GPT-3 in a way that complies with the laws, regulations, and technical requirements (for example, scaling capacity, private networking, and access management) unique to their business or industry.

“When you’re operating a national company, sometimes, your data can’t [be used] in a particular geographic region, for example. The Azure OpenAI Service can basically put the model in the region that you need for you,” Montgomery said. “For [our business customers,] it comes down to questions like, ‘How do you handle our security requirements?’ and ‘How do you handle things like virtual networks?’ Some of them need all of their API endpoints to be centrally managed or use customer-supplied keys for encryption … What the Azure OpenAI Service does is it folds all of these Azure backplane capabilities [for] large enterprise customers [into a] true production deployment to open the GPT-3 technology.”

Montgomery also points out that the Azure OpenAI Service makes billing more convenient by charging for model usage under a single Azure bill, versus separately under the OpenAI API. “That makes it a bit simpler for customers to pay and consume,” he said. “Because at this point, it’s one Azure bill.”

Enterprises are indeed increasing their investments in natural language processing (NLP), the subfield of linguistics, computer science, and AI concerned with how algorithms analyze large amounts of language. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third (33%) said that their spending climbed by more than 30%.

Customization and safety

As with the OpenAI API, the Azure OpenAI Service will allow customers to tune GPT-3 to meet specific business needs using examples from their own data. It’ll also provide “direct access” to GPT-3 in a format designed to be intuitive for developers to use, yet robust enough for data scientists to work with the model as they wish, Boyd says.

“It really is a new paradigm where this very large model is now itself the platform. So companies can just use it and give it a couple of examples and get the results they need without needing a whole data science team and thousands of GPUs and all the resources to train the model,” he said. “I think that’s why we see the huge amount of interest around businesses wanting to use GPT-3 — it’s both very powerful and very simple.”
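
A rough sketch of the “couple of examples” pattern Boyd describes: the examples are written directly into the prompt and the model continues the pattern, so no training run is needed. The ticket text, category labels, and deployment name below are invented for illustration, and the client configuration is the same assumed Azure setup as in the earlier sketch.

```python
import openai

# Same assumed Azure configuration as in the earlier sketch (placeholder values).
openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"
openai.api_version = "2021-11-01-preview"
openai.api_key = "AZURE_OPENAI_KEY"

# Two worked examples inside the prompt are the only "training" the model gets.
few_shot_prompt = """Classify each support ticket as Billing, Technical, or Shipping.

Ticket: I was charged twice for my subscription this month.
Category: Billing

Ticket: The app crashes whenever I open the settings page.
Category: Technical

Ticket: My order arrived two weeks late and the box was crushed.
Category:"""

response = openai.Completion.create(
    engine="gpt3-deployment",  # hypothetical deployment name
    prompt=few_shot_prompt,
    max_tokens=3,
    temperature=0,
    stop="\n",
)
# The model is expected to continue the pattern, e.g. "Shipping".
print(response.choices[0].text.strip())
```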

Of course, it’s well-established that models like GPT-3 are far from technically perfect. GPT-3 was trained on more than 600GB of text from the web, a portion of which came from communities with pervasive gender, race, physical, and religious prejudices. Studies show that it, like other large language models, amplifies the biases in data on which it was trained.

In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claimed that GPT-3 can generate “informational” and “influential” text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of bias from some of the most popular open source models, such as Google’s BERT and XLNet and Facebook’s RoBERTa.

Even fine-tuned models struggle to shed prejudice and other potentially harmful characteristics. For example, Codex can be prompted to generate racist and otherwise objectionable outputs as executable code. When writing code comments with the prompt “Islam,” Codex outputs the words “terrorist” and “violent” at a greater rate than with other religious groups.

More recent research suggests that language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English” to ensure the models work better for them, or discourage minority speakers from engaging with the models at all.

OpenAI claims to have developed techniques to mitigate bias and toxicity in GPT-3 and its derivatives, including code review, documentation, user interface design, content controls, and toxicity filters. And Microsoft says it will only make the Azure OpenAI Service available to companies that plan to implement “well-defined” use cases that incorporate its principles and strategies for responsible AI.

Beyond this, Microsoft will deliver safety monitoring and analysis to identify possible cases of abuse or misuse as well as new tools to filter and moderate content. Customers will be able to customize those filters according to their business needs, Boyd says, while receiving guidance from Microsoft on using the Azure OpenAI Service “successfully and fairly.”

“This is a really critical area for AI generally and with GPT-3 pushing the boundaries of what’s possible with AI, we need to make sure we’re right there on the forefront to make sure we are using it responsibly,” Boyd said. “We expect to learn with our customers, and we expect the responsible AI areas to be places where we learn what things need more polish.”

OpenAI and Microsoft

OpenAI’s deepening partnership with Microsoft reflects the economic realities that the company faces. It’s an open secret that AI is a capital-intensive field — in 2019, OpenAI restructured as a capped-profit company to secure additional funding while remaining under the control of its nonprofit parent, having previously been a 501(c)(3) organization. And in July, OpenAI disbanded its robotics team after years of research into machines that can learn to perform tasks like solving a Rubik’s Cube.

Roughly two years ago, Microsoft announced it would invest $1 billion in San Francisco-based OpenAI to jointly develop new technologies for Microsoft’s Azure cloud platform. In exchange, OpenAI agreed to license some of its intellectual property to Microsoft, which the company would then package and sell to partners, and to train and run AI models on Azure as OpenAI worked to develop next-generation computing hardware.

In the months that followed, OpenAI released a Microsoft Azure-powered API — OpenAI API — that allows developers to explore GPT-3’s capabilities. In May 2020, during its Build developer conference, Microsoft unveiled what it calls the AI Supercomputer, an Azure-hosted machine co-designed by OpenAI that contains over 285,000 processor cores and 10,000 graphics cards. And toward the end of 2020, Microsoft announced that it would exclusively license GPT-3 to develop and deliver AI solutions for customers, as well as create new products that harness the power of natural language generation, like Codex.

Earlier this year, Microsoft announced that GPT-3 would be integrated “deeply” with Power Apps, its low-code app development platform — specifically for formula generation. The AI-powered features will allow a user building an ecommerce app, for example, to describe a programming goal using conversational language like “find products where the name starts with ‘kids.’” More recently, Microsoft-owned GitHub launched a feature called Copilot that’s powered by OpenAI’s Codex code generation model, which GitHub says is now being used to write as much as 30% of new code on its network.
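
The general natural-language-to-code pattern behind features like these can be sketched as a prompt to a Codex-style deployment. The snippet below is purely illustrative and is not how the Power Apps or Copilot integrations are actually wired up; the deployment name and endpoint are assumptions carried over from the earlier sketches.

```python
import openai

# Same assumed Azure configuration as in the earlier sketches (placeholder values).
openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"
openai.api_version = "2021-11-01-preview"
openai.api_key = "AZURE_OPENAI_KEY"

# A plain-English programming goal, phrased as code comments for a
# Codex-style model to complete with executable code.
prompt = (
    "# Python\n"
    "# products is a list of dicts, each with a 'name' key.\n"
    "# Find the products where the name starts with 'kids'.\n"
)

response = openai.Completion.create(
    engine="codex-deployment",  # hypothetical Codex-style deployment name
    prompt=prompt,
    max_tokens=64,
    temperature=0,
)
print(response.choices[0].text)  # the model's suggested code completion
```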

Certainly, the big winners in the NLP boom are cloud service providers like Azure. According to the John Snow Labs survey, 83% of companies already use NLP APIs from Google Cloud, Amazon Web Services, Azure, and IBM in addition to open source libraries. That is a sizeable chunk of change, considering that the global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026. In 2019, IBM generated $303.8 million in revenue from its AI software platforms alone.
