Microservices Architectures and Magic Clouds: Evolving Cloud-native Java Virtual Machines

Advances in cloud-native JVMs now make running Java in Kubernetes easier and faster than ever.

November 8, 2022

Ironically, the very thing that makes Java applications lightning-fast – the just-in-time (JIT) compiler – is also the biggest obstacle for Java in modern cloud environments. But advances in cloud-native JVMs now make running Java in Kubernetes easier and faster than ever, says Simon Ritter, Java champion and deputy CTO at Azul.

Many companies run one or more Java applications in the cloud. While there is always a flavor of the month, Java continues to grow and evolve, increasingly into service landscapes built on microservice architectures.

Running Java in the cloud, specifically in resource managers such as Kubernetes (k8s), comes with its own set of complications. High resource usage and ineffective cost management directly result from the inability to quickly scale up and down based on the current load profile.

Mitigating these issues usually requires intensive engineering effort. But as we will see, a cloud-native JVM can help scale resources to improve performance and save costs.

The “Hidden” Cost of Java in the Cloud

The biggest culprit behind Java's cloud issues is also the biggest strength of Java and every other JVM-based language – the just-in-time (JIT) compiler.

Java is not compiled directly into native machine code; the JIT compiler handles that step at runtime. Here’s how it works. The Java compiler produces an intermediate, binary bytecode. The JVM interprets this bytecode operation by operation, collecting runtime information such as performance profiles and execution statistics. Using this profiling data, the JIT compiler compiles the hot parts of the bytecode to native machine code optimized for the platform it is running on.
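
To see this in action (an illustration added here, not from the original article), any HotSpot-based JVM can log JIT activity with the standard -XX:+PrintCompilation flag. The class below is a made-up demo:

    // HotLoop.java - watch the JIT compile a hot method.
    // Run with: java -XX:+PrintCompilation HotLoop
    // After enough iterations, sum(int) shows up in the log,
    // first at a lower compilation tier and later fully optimized.
    public class HotLoop {

        // Becomes "hot" after many invocations.
        static long sum(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            long result = 0;
            // Cross the JIT's invocation thresholds.
            for (int i = 0; i < 1_000_000; i++) {
                result += sum(100);
            }
            System.out.println("result = " + result);
        }
    }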

What sounds like an overcomplicated design is actually the primary reason Java services and applications are fast. They use the current runtime environment to the fullest extent, and new JVM versions bring new optimizations that the same application can use without recompilation.

However, this compilation process is also the biggest obstacle in modern environments like Kubernetes. k8s loves to scale the number of service instances down when the load profile allows it and up when it has to. The ephemeral nature of service instances helps keep application environments from getting bloated. But because of the slow JVM warmup detailed above, it takes time before a newly started instance is ready to serve requests at full speed.

Sure, there are workaround tricks. Some companies feed real-world data to a new process to artificially warm up the performance profile and JIT compiler, but these shortcuts have issues. They may warm up the JVM with a profile that is obsolete or inappropriate for the coming workload. There must be a better way.
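
As a rough sketch of what such a warmup trick looks like (the handleRequest method and the sample payloads below are hypothetical placeholders, not from any real product):

    import java.util.List;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch of an artificial pre-start warmup phase.
    public class WarmupRunner {

        private static final AtomicBoolean READY = new AtomicBoolean(false);

        // Stand-in for the service's real request handling.
        static String handleRequest(String payload) {
            return payload.toUpperCase();
        }

        // Replay recorded sample traffic so the JIT compiles the hot
        // paths before real users arrive. If the samples no longer
        // match production traffic, the resulting profile is obsolete -
        // exactly the weakness described above.
        static void warmUp(List<String> sampleRequests, int rounds) {
            for (int i = 0; i < rounds; i++) {
                for (String request : sampleRequests) {
                    handleRequest(request);
                }
            }
            READY.set(true); // only now report readiness, e.g., to a k8s probe
        }

        public static void main(String[] args) {
            warmUp(List.of("order/1", "order/2", "cart/checkout"), 10_000);
            System.out.println("ready = " + READY.get());
        }
    }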


Just-in-Time or Ahead-of-Time

One alternative to just-in-time compilation is so-called ahead-of-time (AOT) compilation. It resembles the model of natively compiled programming languages, such as C/C++ or Go: the AOT compiler takes the Java bytecode and compiles it into a native executable for a selected target environment (for example, x64/Linux).
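
The article names no specific tool, but as one concrete example, GraalVM’s native-image utility works this way. A minimal sketch, assuming a GraalVM installation:

    // Hello.java - compiled ahead of time into a native executable:
    //
    //   javac Hello.java
    //   native-image Hello
    //   ./hello        (starts in milliseconds - no JVM warmup)
    //
    // The trade-off discussed below applies: the binary is fixed to
    // the target platform chosen at build time.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello from an AOT-compiled binary");
        }
    }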

A second approach was the jaotc command, a static AOT compiler that produced native code, in the form of a shared library, for the Java methods in specified Java class files. The JVM could load these AOT libraries and use their native code when the corresponding Java methods were called. jaotc has since been dropped from OpenJDK.
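
For historical context, the jaotc workflow from JEP 295 looked roughly like this (a sketch; it no longer works on current JDKs):

    // HelloWorld.java - the removed jaotc shared-library workflow,
    // usable on the JDK versions that shipped the tool:
    //   javac HelloWorld.java
    //   jaotc --output libHelloWorld.so HelloWorld.class
    //   java -XX:AOTLibrary=./libHelloWorld.so HelloWorld
    // The JVM loads the shared library and uses its native code
    // whenever the corresponding Java methods are called.
    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello from an AOT library");
        }
    }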

While AOT approaches shorten JVM warmup, they erode the biggest benefit of any JVM language – dynamic compilation with optimizations specific to the current CPU platform. The more general issue with AOT is that it must compile against a generic performance profile in which everything is equally important, rather than a profile that reflects the application’s actual runtime behavior. That profile simply doesn’t exist at the point of native compilation.

But a new approach takes Java from a pre-cloud architecture to a cloud-native technology.

Optimizing Resource Usage of Java

New Java SE-compliant JVM implementations include features built around Java in the age of resource-managed environments, such as k8s.

Some JIT compilers operate as a standalone service, deployed alongside the application in Kubernetes to provide JIT compilation to multiple deployed JVMs. Because the compiled native code is shared between multiple instances of the same service, compilation happens only once, and performance profiles of the actual running application are available right away to newly started instances of microservice-based services. This is a huge improvement in resource efficiency and cost compared to running a compiler in each JVM. Full speed right away.

This enables cost-efficiency actions such as quick scale-downs, which can save real hard currency. In addition, deployments can be simplified, mitigating the need for tricks like pre-start processes to warm up the JVM before allowing it to serve requests.

The ability to quickly spin up new containers or remove them is also critical for the agility to meet sudden load demands. Online retailers can see huge spikes during the holiday season, followed by steep drops. JIT compilation enables them to meet the demand – and create revenue – without using excess resources at other times.

Last but not least, it saves costs in terms of resource utilization. The cheapest form of compilation is the one that doesn’t happen – cheap not only in time but also in CPU and memory usage. Offloading compilation substantially reduces the memory and CPU consumption of Java processes, and with it the necessary resource sizing, again saving real money. At a time when business leaders are becoming more cognizant of cloud costs, shared JIT compilation can help mitigate those costs without forcing companies to look at options like repatriating cloud workloads back on-premises.

If the JIT compiler runs inside the JVM, each container must be provisioned with enough resources to handle JIT compilation during warmup. This leads to overprovisioning, as those resources are only needed briefly, if at all. A cloud-native compiler reduces provisioning to what the application actually requires, saving costs without compromising fast warmup.
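
To make that in-JVM footprint concrete (an added illustration): HotSpot’s compiler threads and code cache consume the container’s CPU and memory, and standard flags can bound them, trading some peak throughput for a smaller, more predictable footprint:

    import java.lang.management.ManagementFactory;

    // JitFootprint.java - shows how long the in-process JIT has spent
    // compiling. Standard HotSpot flags can cap its resource usage:
    //   java -XX:CICompilerCount=2 -XX:ReservedCodeCacheSize=64m JitFootprint
    public class JitFootprint {
        public static void main(String[] args) {
            var jit = ManagementFactory.getCompilationMXBean();
            System.out.println(jit.getName() + " compile time: "
                    + jit.getTotalCompilationTime() + " ms");
        }
    }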

Java Loves Kubernetes

Advancements in cloud-native JVMs make running Java in Kubernetes easier, faster, more efficient, and less expensive than ever before.

The right JVM resolves a lot of the complexity:

  • by decreasing the required number of service instances
  • by decreasing the allocated resources (CPU / memory)
  • by decreasing deployment complexity
  • and more…

The cost efficiency and cost-cutting effects of a cloud-native JVM can be measured straight away. We all love cheap and simple things, and cloud-native JVMs deliver.

How have you benefited from a cloud-native JVM? Share your experience with us on Facebook, Twitter, and LinkedIn.



Simon Ritter

Java Champion and deputy CTO, Azul

Simon Ritter is the Deputy CTO of Azul, where he helps people understand Java and Azul’s JVM products. He represents Azul on the JCP Executive Committee as well as the JSR Expert Groups for Java SE 9 and later. Prior to Azul, Simon joined Sun Microsystems in 1996 and spent time working in both Java development and consultancy. He has been presenting Java technologies to developers since 1999 focusing on the core Java platform as well as client and embedded applications.