Benjamin Wolf delivered a keynote about the stock exchange operator's DevSecOps journey as it migrated to the cloud and sought greater efficiency.

Joao-Pierre S. Ruth, Senior Editor

November 15, 2022

4 Min Read

The importance of DevOps and the benefits of automating DevSecOps were at the epicenter of a keynote during last Thursday’s All Day DevOps conference. Benjamin Wolf, CTO of capital access platforms at Nasdaq, spoke on the “Journey to Auto-DevSecOps at Nasdaq” for the online event, which was hosted by Sonatype.

Wolf said he asks himself and his teams why DevOps is important when thinking about what projects to do. Explaining something like DevOps and why it is important can be hard, though, he said, because it is built on decades of advanced infrastructure. “It can actually be pretty complicated to answer what can seem a pretty obvious thing to us,” Wolf said.

He told the audience that Nasdaq functions as a global tech company, which includes running its stock market in New York City. Wolf said Nasdaq serves as a platform that provides liquidity to markets and also has an entire part of its business dedicated to fighting financial crime. He said his role includes focusing on data, analytics, and generating new insights to create transparency.

He summed up the value of DevOps as the development and delivery of solutions to users while also managing and operating complex infrastructure, which can be a challenge. That makes efficiency, reliability, and safety essential to DevOps, Wolf said.

The DevOps Path at Nasdaq

Nasdaq’s development and operations journey has included its share of pivots. Some 10 years ago, Nasdaq had not yet moved to the cloud, he said, and operated on manually configured, static servers in data centers. Nasdaq did look for ways to automate the process, including in-place deployments on existing infrastructure, to chase efficiency and take some of the burden off the development team. “This was an incredibly powerful first step for this organization,” Wolf said.

At that time, things were automated so product managers and owners could choose the parts of the software they wanted to deploy. Things ran well enough, he said, but the next evolution brought thinking about cloud migration and debates over how to do it.

Cloud and Infrastructure as Code

With its cloud migration, Nasdaq chased scalability, elasticity, cost efficiency, and reliability, Wolf said. The debate became about whether to move everything to the cloud and then work on infrastructure as code, or to work on infrastructure as code in the data center and then move to the cloud. “We made the decision to do them both at the same time,” he said. “One of the best decisions that we have ever made. Once you experience 100% infrastructure as code and immutability, you will ask yourself how you ever did without it.”

Wolf said that by turning all infrastructure into code, his teams were able to create and test the cloud migration thousands of times. After getting it right, they still did some practice runs before the full cutover, which went flawlessly, he said. “We never would have been able to accomplish that with a complex infrastructure system with millions of configurations and hundreds of thousands of deployed assets.”
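
Wolf did not detail specific tooling in the keynote, but the underlying pattern is to declare every resource in version-controlled code so an environment can be torn down, rebuilt, and rehearsed as many times as a migration requires. A minimal sketch of that idea using Pulumi's Python SDK follows; the tool choice, resource, and tags are illustrative assumptions, not details of Nasdaq's actual stack.

# Minimal infrastructure-as-code sketch (illustrative; not Nasdaq's stack).
import pulumi
import pulumi_aws as aws

# Declaring the resource in code means the same definition can be reviewed,
# destroyed, and recreated as often as a migration rehearsal requires.
artifacts = aws.s3.Bucket(
    "migration-rehearsal-artifacts",
    tags={"env": "staging", "owner": "platform-team"},
)

# Exported outputs make the deployed state visible to other stacks and tooling.
pulumi.export("bucket_name", artifacts.id)

In a setup like this, the rehearsals Wolf described amount to repeatedly destroying and recreating the stack until the cutover plan is proven.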

There can be downsides to DevOps methodology, he said, particularly if workloads become skewed while operating under the new paradigm. For example, DevOps staff might get flooded with development issues when they are trying to operate as a software delivery team, Wolf said. “Developers became dependent on the DevOps team, who became the bottleneck,” he said. “This was an issue we had to solve.”

Changes that Nasdaq implemented next included further moves on infrastructure as code and other shifts. Striving for greater efficiency, they also moved to a distributed DevOps model, Wolf said. Developers had been struggling without the visibility to empower them, he said, unable to see the logs and monitors they needed. Distributed DevOps solved such observability problems, Wolf said, spanning metrics, logs, errors, and application performance monitoring tests. Combined with development teams certified in cloud and able to control their own destiny, the shift produced a roughly 50% improvement in deployments per capita, he said.
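
Wolf did not name an observability stack, but the kind of team-owned instrumentation he described can be sketched with the OpenTelemetry Python SDK; the meter, counter, and attribute names below are assumptions for illustration.

# Sketch of team-owned metrics instrumentation (names are illustrative).
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export metrics periodically; a real setup would point at a backend the
# development team can actually see, rather than the console.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("deployment-service")
deployments = meter.create_counter(
    "deployments_total", description="Completed deployments, recorded per team"
)

# Each team records its own deployments, giving it direct visibility instead
# of routing every question through a central DevOps group.
deployments.add(1, {"team": "market-data", "result": "success"})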

Even with those gains, new cracks emerged three years later, Wolf said, so Nasdaq pivoted again. The trouble was that, though teams were productive, a lot of divergence in authoring emerged, along with flaws in disaster recovery testing.

Going Faster, Getting More Automated

The latest evolution at Nasdaq introduced automated DevSecOps pipelines to improve productivity and address that divergence, Wolf said. The pipelines were standardized to look for marker files in applications, he said, to narrow variability, add code scanning for vulnerabilities, and layer in other forms of monitoring. “There’s just too much surface area to deploy this stuff and then hope our infosec team can come over the top later on and monitor and scan things,” Wolf said. “The world is too dangerous; it’s getting more dangerous.”
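
Wolf did not share implementation details beyond the marker-file idea, but a standardized pipeline step built on it might look like the following Python sketch; the marker file name and scanner command are hypothetical.

# Hypothetical pipeline step keyed off a marker file (illustrative only).
import pathlib
import subprocess
import sys

repo = pathlib.Path(".")

# The application team opts in by committing a marker file; the shared
# pipeline detects it and switches on the security stages automatically.
if (repo / ".autodevsecops").exists():
    # "security-scanner" stands in for whatever vulnerability scanner the
    # pipeline standardizes on; it is not a real CLI name.
    result = subprocess.run(["security-scanner", "--fail-on", "critical"], cwd=repo)
    if result.returncode != 0:
        sys.exit("Vulnerability scan failed; blocking the deploy stage.")
else:
    print("No marker file found; skipping the automated DevSecOps stages.")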


About the Author(s)

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.


