APIs to Ease the Way: Charting a New Course in Cross-System Navigation

Here’s how businesses can use data searches to navigate unfamiliar territory without hurting productivity.

Last Updated: November 30, 2022

Archives are where data goes to die – and that’s why professionals who need to search through that data must navigate unfamiliar technologies, dramatically slowing the entire process. Tibi Popp, co-founder and CTO at Archive360, discusses why it’s time for a new approach that respects business users’ specific needs.  

With our relentless focus on technology advances, we sometimes forget that it’s really about something else: the data. Each new application or other innovation delivers value only when it helps collate the data, store the data, secure the data and, most importantly, give us access to it when we need it – and, in this day and age, keep the wrong people from accessing it.

But if that’s the goal, we’ve got a long way to go. 

Come at this from the professional user’s point of view. There’s the data scientist looking to aggregate diverse pieces of information from various sources in various formats. There’s the line-of-business executive contemplating new target markets and extracting every nugget of information that might inform decision-making. There’s the legal counsel tasked with identifying every bit of data in-house (or in the cloud) to ensure compliance; for eDiscovery, that means collating all the data involved in litigation, with more coming in all the time. Data analysts, meanwhile, need to track down data locked in a custom database managed by a single user or buried in spreadsheets shuttled back and forth over email.

Jumping Between Technologies

The processes guiding these searches are complex, cumbersome and costly. In fact – and most IT professionals will testify to this – the critical tasks of search, extraction and collation, all while ensuring compliance, usually require help from the IT department. That’s because the work means careening between different technologies and formats, each of which serves a different purpose.

It’s almost as if, despite all the hype, the data isn’t seen as dynamic, playing a vital role in current business. When it comes to data repositories or archives, the situation is even more dire: The assumption is that archives are the places where data goes to die. 

In reality, it should be the opposite. Consider one specific example: Data Subject Access Request (DSAR), a prominent aspect of data privacy laws. This provision gives consumers the right to ask companies what information they have on that individual and (with variations) the right to access, correct and/or delete that information, all within a short timeframe. 

It sounds routine, but it’s estimated that complying with each request can cost a company up to $1,400. The relevant information almost never resides within a single repository; it can be found inside a plethora of systems that represent different touchpoints with a customer. Sales, CRM, order information, shipping, finance, logistics, phone support and more—each generates and retains particular fragments of information and no more. 
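To make that fan-out concrete, here is a minimal Python sketch of what fulfilling a single DSAR across several touchpoint systems might look like. The endpoint URLs, query parameters and response handling are hypothetical placeholders for illustration, not any particular vendor’s API.

```python
import requests

# Hypothetical internal endpoints: each touchpoint system exposes
# its own search API, in its own format (placeholder URLs).
TOUCHPOINT_APIS = {
    "crm":      "https://crm.example.internal/api/contacts/search",
    "orders":   "https://orders.example.internal/api/orders/search",
    "shipping": "https://shipping.example.internal/api/shipments/search",
    "support":  "https://support.example.internal/api/tickets/search",
}

def fulfill_dsar(subject_email: str) -> dict:
    """Collect every fragment of data held on one subject.

    Each system is queried separately because the data is (often
    deliberately) kept apart; the DSAR response is the one moment
    it all has to come together.
    """
    report = {}
    for system, url in TOUCHPOINT_APIS.items():
        try:
            resp = requests.get(url, params={"email": subject_email}, timeout=30)
            resp.raise_for_status()
            report[system] = resp.json()  # each system returns its own schema
        except requests.RequestException as exc:
            # A DSAR answer must be complete, so failures need follow-up;
            # that chasing is part of why each request is so expensive.
            report[system] = {"error": str(exc)}
    return report

if __name__ == "__main__":
    print(fulfill_dsar("jane.doe@example.com"))
```

Even this toy version hints at the real cost: every new touchpoint system adds another integration, another schema and another failure mode to chase before the clock on the request runs out.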

Again, each of these technologies is crucial but distinct. They don’t really need to come together; in fact, industry guidelines and privacy controls sometimes require that they be kept apart. Specialists in particular fields – auditing, data science, lines of business – can seek common threads, but in most instances, there isn’t a need to aggregate data on a particular subject… until that’s exactly the need.

Consider healthcare: The average facility generates medical data across a wide variety of specialized disciplines, from MRIs and X-rays to CT scans and more. Government regulations tightly restrict access to this data until authorized medical professionals need a complete portrait of a patient. That’s when the appropriate parties have to access disparate systems and aggregate all the relevant data.

And all this is even before we get into the sheer volumes and disparate types of data involved. For very good reasons, the companies offering vital services – such as eDiscovery or internal/external audits – are not usually in the business of archiving and managing vast amounts of information. Organizations that need specific information to offer those services want solutions that are cloud-neutral and fully configurable, avoid vendor lock-in by storing customer data in its native format and enable customer-controlled on-premises and cloud encryption. They also have stringent security requirements. 
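As a small illustration of the customer-controlled encryption requirement, the sketch below encrypts a document client-side, with a key the customer holds, before it ever reaches the archive. It uses Python’s cryptography package; the upload step is a hypothetical placeholder, and nothing here reflects a specific product’s interface.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

# The customer generates and keeps this key; the archive provider
# never sees it, so the stored data is opaque to the vendor.
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

def archive_document(path: Path) -> bytes:
    """Encrypt a document in its native format before upload.

    The file is not converted or re-encoded: avoiding vendor
    lock-in means the bytes that come back out are exactly the
    bytes that went in, protected by the customer-held key.
    """
    ciphertext = cipher.encrypt(path.read_bytes())
    # upload_to_archive(ciphertext)  # hypothetical: send to any cloud store
    return ciphertext

def retrieve_document(ciphertext: bytes) -> bytes:
    # Only the key holder can turn the archived blob back into data.
    return cipher.decrypt(ciphertext)
```

Because the key stays with the customer, the encrypted blob can live in any cloud store, which is what keeps the storage layer cloud-neutral and interchangeable.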


Challenges of Tailoring Data Technologies to Organizational Needs

Meanwhile, the organizations best equipped to store and manage petabytes of data – such as information archiving providers – have technologies to suit their particular needs. Connecting these two worlds has always been a major challenge. Among other reasons: 

  • There’s the time required to reach out to multiple data sources to find and gather the necessary data so that it can be imported into the eDiscovery, audit or data analytics applications.
  • There’s the quality of the data gathered: No executive wants to weed through irrelevant information; the more control they have over the collation process, the more productive they can be.
  • There’s the volume: With the ever-increasing number of communication channels and data sources, there’s more data than ever before, and manual processes simply cannot keep up.

Even tech-savvy professionals want a seamless connection between what they need and where (and how) to find it. In the ideal scenario, they stay within the applications that best suit their goals while searching databases far outside their domain. Unfortunately, the ideal is usually far from the reality.

It’s not easy for the developers involved—they’re working with conflicting priorities, seeking the right balance between security, compliance, access and company-specific navigation. Also, keeping things easy on the front end requires a huge effort on the back end. 

Still, it can be done. Developer programs can leverage APIs to give (mostly) non-IT users immediate access to the data they need without IT’s help; to automate the process using tools already in the enterprise (indexing, classification, topic modeling, AI, machine learning, natural language processing and more) to identify and extract relevant data quickly; and to process petabytes of data at speed. Perhaps best of all, users don’t have to jump between applications – they can stay in the ones they’re accustomed to.
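As a rough sketch of what that looks like from the business user’s side, the snippet below submits an eDiscovery-style search to an archive’s API, with the indexing, classification and NLP-based relevance filtering happening server-side. The endpoint, parameters and response fields are illustrative assumptions, not a specific product’s interface.

```python
import requests

ARCHIVE_API = "https://archive.example.internal/api/v1/search"  # placeholder

def find_relevant_records(matter_id: str, query: str, token: str) -> list[dict]:
    """Run one search from inside the user's own application.

    The archive does the heavy lifting (indexing, classification,
    NLP-based relevance scoring), so the caller gets back only
    records worth reviewing instead of raw exports to weed through.
    """
    payload = {
        "query": query,                      # e.g. "supplier contract dispute"
        "filters": {
            "classification": "responsive",  # assumed server-side ML label
            "matter_id": matter_id,
        },
        "max_results": 500,
    }
    resp = requests.post(
        ARCHIVE_API,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},  # scoped, auditable access
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["results"]
```

The point is less the specific call than the shape of the workflow: one authorized request from the tool the professional already uses, with the petabyte-scale filtering done where the data lives.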

The advantages are simple but far-reaching. Business professionals in every field gain authorized entry to a different world with massive volumes of relevant data – and the whole process is seamless, secure and compliant. 

So here’s the challenge to the industry. Issues such as scalability, security and specific functionality have become excuses for bad navigation. What (more) can we do to change that equation? 

How are you balancing scalability and security to ensure functional data navigation? Tell us on Facebook, Twitter, and LinkedIn.



Tibi Popp

Co-founder and CTO, Archive360

Archive360 co-founder and CTO Tibi Popp has built a stellar track record in leveraging advances in enterprise technology to solve critical business problems and gain a competitive market advantage. He currently leads technology development at Archive360, which offers Intelligent Enterprise Information Management and Data Migration.