Beyond microservices: Software architecture driven by machine learning

It's not a question of if; it's a question of when and how AI and machine learning will change our programming and software development paradigms.

Today's coding models are based on data storage, business logic, services, UX, and presentation. A full-stack developer elects to build a three-tier web architecture using an MVC framework. An IoT application calls for an event-driven architecture, with services processing events and broadcasting state changes.
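To make the event-driven style concrete, here is a minimal sketch in Python. The in-process EventBus, event names, and handlers are hypothetical stand-ins; a production IoT system would use a broker such as MQTT or Kafka rather than an in-memory dictionary.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    name: str      # e.g. "sensor.temperature"
    payload: dict

# Hypothetical in-process event bus; a real IoT system would use
# a broker such as MQTT or Kafka.
class EventBus:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Event], None]]] = {}

    def subscribe(self, name: str, handler: Callable[[Event], None]) -> None:
        self._handlers.setdefault(name, []).append(handler)

    def publish(self, event: Event) -> None:
        for handler in self._handlers.get(event.name, []):
            handler(event)

bus = EventBus()

# A service processes a sensor event and broadcasts a state change.
def on_temperature(event: Event) -> None:
    if event.payload["celsius"] > 30:
        bus.publish(Event("state.overheated", {"device": event.payload["device"]}))

bus.subscribe("sensor.temperature", on_temperature)
bus.subscribe("state.overheated", lambda e: print("alert:", e.payload))
bus.publish(Event("sensor.temperature", {"device": "d1", "celsius": 35}))
```

The key property is that the temperature service never calls the alerting service directly; it only broadcasts a state change that any number of subscribers can react to.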

These two architecture paradigms converge in microservice architectures, where user interfaces are just one type of interaction node fulfilling high-level functions by interfacing with many services.

Architecture today driven by data, rules, and experience


Our architectures today largely reflect two dominant interaction models. The primary one is driven by users navigating the physical and digital world through devices like mobile phones, voice assistants, wearables, and occasionally a laptop. The other facilitates machine-to-machine interactions and is dominated by APIs, services, buses, data streaming, and choices between edge and cloud infrastructure.

Both these models typically interface with a centralized intelligence that runs explicitly defined business rules and machine learning models on larger, faster, and more complex data sets.

To code today means identifying the business rules, defining and understanding the underlying data model, crafting reusable services, and designing user experiences.

When machine learning drives the experience


The next generation of coding paradigms will look a lot different. When machine learning reaches a critical mass in capability, skill, and connectivity, our coding models will reflect this intelligence. It may look something like this:

  • Programmers fit a business need into an existing machine learning model. They select from a catalog of available algorithms with defined input, processing, and decision attributes.
  • One or more machine learning algorithms, running in parallel or in sequence, define multiple result vectors: results of interest for a downstream person or system to process.
  • The machine learning algorithms then select the destination and format for downstream interactions. In other words, machine learning decides to whom and how the result vector should be communicated. If the destination is a person, it decides who that person is and selects the user interface and experience (see the sketch after this list).

Meanwhile, a second group of machine learning engineers will iterate to improve the accuracy of existing models and develop new ones.
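Here is a minimal sketch of that flow in Python, under loud assumptions: the CatalogModel class, the churn_score model, and the route function are all hypothetical illustrations, and the routing rule is hard-coded where the paradigm above would have a model learn it.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical catalog entry: an algorithm with defined input,
# processing, and decision attributes.
@dataclass
class CatalogModel:
    name: str
    input_fields: list[str]
    run: Callable[[dict], dict]  # produces a "result vector"

# Toy stand-in for a trained model.
def churn_score(record: dict) -> dict:
    score = 0.9 if record["support_tickets"] > 3 else 0.2
    return {"kind": "churn_risk", "score": score, "customer": record["id"]}

CATALOG = {
    "churn": CatalogModel("churn", ["id", "support_tickets"], churn_score),
}

# Machine-driven routing: decide the destination and format for a
# result vector. Hard-coded here; in the paradigm above, a model
# would learn this decision.
def route(result: dict) -> tuple[str, str]:
    if result["score"] > 0.5:
        return ("account_manager", "mobile_alert")  # a person, plus a selected UX
    return ("crm_service", "json_event")            # a downstream system

record = {"id": "c42", "support_tickets": 5}
result = CATALOG["churn"].run(record)
destination, fmt = route(result)
print(result, "->", destination, "as", fmt)
```

The programmer's job in this sketch is only to map the business need onto a catalog entry; producing the result vector and choosing its audience and format are left to the models.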

This changes our thinking considerably. Machines select the user experience; it's no longer prescriptive and defined up front by agile product owners and UX designers. Microservices evolve into micromodels that are orchestrated to support higher-level business functions. Orchestration is also machine-driven, so that events seem to self-select how they are manifested.
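To make "micromodels with machine-driven orchestration" slightly more concrete, here is a small hypothetical sketch; the policy function is a hand-written stand-in for what would, in this paradigm, be a learned orchestrator.

```python
# Hypothetical micromodels: small, single-purpose learned functions.
def summarize(state: dict) -> dict:
    return {**state, "summary": f"order {state['order_id']} delayed"}

def notify_customer(state: dict) -> dict:
    return {**state, "notified": True}

def escalate(state: dict) -> dict:
    return {**state, "escalated": True}

MICROMODELS = {"summarize": summarize, "notify": notify_customer, "escalate": escalate}

# Stand-in for a learned orchestration policy: given the current
# state, choose the next micromodel to run, or stop.
def policy(state: dict):
    if "summary" not in state:
        return "summarize"
    if not state.get("notified"):
        return "notify"
    if state.get("severity", 0) > 2 and not state.get("escalated"):
        return "escalate"
    return None  # nothing left to do

state = {"order_id": 17, "severity": 3}
while (step := policy(state)) is not None:
    state = MICROMODELS[step](state)
print(state)
```

No developer wires summarize to notify to escalate here; the policy selects the path event by event, which is what makes the orchestration feel self-selecting.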

We're going to have to think a lot differently in this paradigm!

