Is It Time for a New Enterprise Architecture for Databases?

Why a new enterprise architecture for databases may be necessary.

December 19, 2022

For today’s enterprises, databases have become synonymous with data silos. Even database vendors seem to admit as much. The typical client-server architecture forms a shield around the database stack: connectors, tools, and utilities create a dependency on proprietary software that is hard to break. Mike Waas, founder and CEO of Datometry, discusses why it may be time to rethink enterprise database architecture.

Although databases have long been a hotbed of research, the adjacent area of data access has seen little to no innovation in decades. On the contrary, over the past 40 years, the prevailing architecture has only made vendor lock-in stronger.

As always with vendor lock-in, customers ultimately pay the price. To the best of our knowledge, analyst firms have yet to quantify the impact of this kind of vendor lock-in in terms of opportunity cost, maintenance, or even price gouging.

In this article, let’s look at the biggest operational challenges IT leaders must overcome in the next decade and the requirements those challenges spell out for their database architecture.


Migrating to More Cost-effective Systems

With the economy heading into a recession, IT is under pressure to replace many of its expensive legacy systems with more cost-effective cloud databases. Hence, a wave of database migrations will probably wash over the industry in the next few years. 

Analysts have long agreed that moving from one vendor to another is fraught with extreme risk. Most migrations fail. But even migrations that reach completion carry the risk of corrupting existing business logic in the process and, of course, the risk of considerable cost and schedule overruns.

The root cause is the vendor lock-in of these systems. Simply put, applications work only with the database for which they were originally designed. Moving them to a different database requires rewriting code. Depending on the business logic, the changes required can be quite extensive.

Naturally, the larger the changes, the higher the probability that things will go wrong. Redesigns take way longer than planned, and rewrites introduce defects. Worse, many of these defects can go unnoticed for months or even years, thus putting the business at risk of working with incorrect data.

We posit that the situation calls for an abstraction of databases that goes beyond the long-standing and insufficient attempts at standardizing query languages or supporting basic access APIs such as ODBC. This abstraction needs to be able to “speak” the legacy SQL of existing applications instead of imposing a new language on them.
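To make that idea concrete, here is a deliberately minimal sketch in Python of what one translation step might look like. The Teradata-to-PostgreSQL pairing and the regex-based rules are illustrative assumptions only; an actual abstraction layer would rely on a full SQL parser and a far richer rule set.

```python
import re

# Hypothetical rewrite rules mapping a few Teradata-isms to PostgreSQL
# syntax. Deliberately tiny and for illustration only.
REWRITE_RULES = [
    (re.compile(r"\bSEL\b", re.IGNORECASE), "SELECT"),  # Teradata keyword abbreviation
    (re.compile(r"\bDEL\b", re.IGNORECASE), "DELETE"),  # Teradata keyword abbreviation
    (re.compile(r"\bMOD\b", re.IGNORECASE), "%"),       # Teradata modulo operator
]

def translate(legacy_sql: str) -> str:
    """Return a PostgreSQL-dialect rendering of a legacy statement."""
    sql = legacy_sql
    for pattern, replacement in REWRITE_RULES:
        sql = pattern.sub(replacement, sql)
    return sql

# The application keeps emitting its original SQL, unchanged:
print(translate("SEL order_id, amount MOD 100 FROM orders"))
# -> SELECT order_id, amount % 100 FROM orders
```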

Consolidating Redundant Database Technology

Another area that IT leaders lose sleep over is the ever-growing fleet of databases the business expects them to support. However, because of vendor lock-in, deprecating older systems is just as hard as adopting new technology.

Adding insult to injury, IT finds itself in a situation where it needs to support competing technologies, either because the systems were acquired in the course of M&A activity or because different business units made independent yet redundant technology decisions.

In IT’s defense, bringing on competing technology is often necessary to avoid handing any one vendor a dominant position. In other words, adopting redundant technology is sometimes viewed as an insurance policy that lets IT rein in its vendors.

Again, an abstraction of all underlying databases is urgently needed. It would allow IT to consolidate the ever-increasing sprawl of databases. Moreover, it would liberate IT to make technology choices more effectively because they would no longer be held back by existing business applications. Not to mention, such an abstraction would protect the business and strengthen its position in negotiations, too.

Democratizing Access to Data – All Data

There is not a single enterprise customer who doesn’t want every one of their business units to have full access to all data, subject to well-defined security controls, of course. Yet, enterprises today end up implementing kludgy data transfer mechanisms between different data silos to make the data accessible to different business users.

The only other alternative seems to be capitulating to a single vendor and rebuilding the entire data management landscape on a single product or product suite. For the reasons above, this just doesn’t seem to be a good idea, and it is certainly one decision makers will come to regret over time.

Interestingly, this situation is effectively a combination of the migration and consolidation scenarios described above. What’s needed is an abstraction that makes applications interoperable with any number of different databases while allowing IT to replace and consolidate the fleet of databases. If done right, it will do so without imposing downtime on business users and can establish the kind of democratization of data that has been elusive for so long.
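As a rough sketch of that combination, the following Python snippet shows one logical endpoint fanning out to multiple physical databases. The schema names and the route-by-schema rule are hypothetical, and `translate` refers to the dialect translator sketched earlier.

```python
class DatabaseFleet:
    """One logical endpoint, many physical databases behind it."""

    def __init__(self):
        self._backends = {}  # schema name -> live DB-API connection

    def attach(self, schema, connection):
        self._backends[schema] = connection

    def reroute(self, schema, new_connection):
        # Consolidation: repoint a schema to a new system without any
        # change, or downtime, on the application side.
        self._backends[schema] = new_connection

    def execute(self, schema, legacy_sql):
        # Route the query to the backend that currently owns the schema.
        # A real layer would pick a per-backend translator; here we reuse
        # the single `translate` function from the earlier sketch.
        cursor = self._backends[schema].cursor()
        cursor.execute(translate(legacy_sql))
        return cursor.fetchall()
```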

The Road Ahead

We believe the scenarios above make a compelling case for developing strong abstractions that cleanly separate databases from client applications. It all stands or falls with the abstraction’s ability to act as a run-time that sits between the application and the database, accepts legacy SQL, and translates it on the fly into the SQL of the new destination system.
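At the driver level, such a run-time could look like the following sketch. Here, psycopg2 merely stands in for any destination driver, and `translate` is again the hypothetical dialect translator from above; commercial implementations typically interpose at the wire-protocol level, so not even the driver has to change.

```python
import psycopg2  # stand-in for any destination database driver

class VirtualizingCursor:
    """Wraps a destination cursor so the application keeps speaking legacy SQL."""

    def __init__(self, real_cursor, translate):
        self._cursor = real_cursor
        self._translate = translate

    def execute(self, legacy_sql, params=None):
        # Translation happens on the fly, invisibly to the application.
        return self._cursor.execute(self._translate(legacy_sql), params)

    def fetchall(self):
        return self._cursor.fetchall()

# Usage: application code is unchanged; only the connection wiring differs.
conn = psycopg2.connect("dbname=warehouse")         # the new destination system
cur = VirtualizingCursor(conn.cursor(), translate)  # interpose the abstraction
cur.execute("SEL order_id FROM orders")             # legacy SQL still works
```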

We believe this is technically quite feasible. How did we arrive at that assessment? Well, all areas of IT have been redefined in the past 20 years by abstractions like the one outlined above. In other areas, we simply call this kind of abstraction virtualization. What started with server virtualization quickly carried over to storage and networking. Ultimately, virtualization made cloud computing possible; put differently, cloud computing was enabled by abstractions originally introduced for portability and migration.

Until recently, databases resisted this industry trend, but several vendors are already developing products in this space. The technical challenges are extensive but quite manageable. Better still, early implementations have been successful at seamlessly moving entire workloads from on-prem systems to the cloud.

Is it time for a new enterprise architecture for databases? Just like with server virtualization, there will be initial hesitation. And just like then, some programmers will gripe that any extra layer is suboptimal. However, a proper run-time abstraction of the database would be such a fundamental boost to productivity, and even more so to economics, that moving toward the fully virtualized database may one day seem as inevitable as server virtualization.

Are you considering a different enterprise architecture for databases? Share with us on Facebook, Twitter, and LinkedIn.


Mike Waas
Mike Waas is the founder and CEO of Datometry. Mike has held senior engineering positions at Microsoft, Amazon, EMC, and Pivotal and is the architect of Greenplum’s ORCA query optimizer. He has authored over 40 peer-reviewed scientific publications in various areas of database research and holds over 50 patents.