
Virtual Thread support reference

This guide explains how to benefit from Java 21+ virtual threads in a Quarkus application.

What are virtual threads?

Terminology

OS thread

A "thread-like" data structure managed by the Operating System.

Platform thread

Until Java 19, every instance of the Thread class was a platform thread, a wrapper around an OS thread. Creating a platform thread creates an OS thread, and blocking a platform thread blocks an OS thread.

Virtual thread

Lightweight, JVM-managed threads. They extend the Thread class but are not tied to one specific OS thread. Thus, scheduling virtual threads is the responsibility of the JVM.

Carrier thread

A platform thread used to execute a virtual thread is called a carrier thread. It isn’t a class distinct from Thread or VirtualThread but rather a functional denomination.

Differences between virtual threads and platform threads

We will give a brief overview of the topic here; please refer to JEP 425 for more information.

Virtual threads are a feature available since Java 19 (Java 21 is the first LTS version including virtual threads), aiming at providing a cheap alternative to platform threads for I/O-bound workloads.

Until now, platform threads were the concurrency unit of the JVM. They are a wrapper over OS structures. Creating a Java platform thread creates a "thread-like" structure in your operating system.

Virtual threads, on the other hand, are managed by the JVM. To be executed, they need to be mounted on a platform thread (which acts as a carrier to that virtual thread). As such, they have been designed to offer the following characteristics:

Lightweight

Virtual threads occupy less space than platform threads in memory. Hence, it becomes possible to use more virtual threads than platform threads simultaneously without blowing up the memory. By default, platform threads are created with a stack of about 1 MB, whereas a virtual thread's stack is "pay-as-you-go." You can find these numbers and other motivations for virtual threads in this presentation given by the lead developer of Project Loom (the project that added virtual thread support to the JVM).

Cheap to create

Creating a platform thread in Java takes time. Currently, techniques such as pooling, where threads are created once and then reused, are strongly encouraged to minimize the time lost in starting them (as well as limiting the maximum number of threads to keep memory consumption low). Virtual threads, by contrast, are meant to be disposable entities that we create when we need them; pooling them or reusing them for different tasks is discouraged.
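For illustration (a minimal sketch, not specific to Quarkus), this is the intended pattern: one virtual thread per task, created on demand and never pooled.

void demo() throws InterruptedException {
    // One virtual thread per task: create it, run it, let it go.
    Thread vt = Thread.ofVirtual()
            .name("task-", 0) // optional name with a numeric suffix
            .start(() -> System.out.println("Hello from " + Thread.currentThread()));
    vt.join();

    // Shorthand factory for fire-and-forget tasks:
    Thread.startVirtualThread(() -> System.out.println("Another disposable task"));
}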

Cheap to block

When performing blocking I/O, the underlying OS thread wrapped by the Java platform thread is put in a wait queue, and a context switch occurs to load a new thread context onto the CPU core. This operation takes time. Since the JVM manages virtual threads, no underlying OS thread is blocked when they perform a blocking operation. Their state is stored in the heap, and another virtual thread is executed on the same Java platform (carrier) thread.
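To illustrate (a hedged sketch following the pattern popularized by JEP 425), thousands of virtual threads can block at the same time without holding thousands of OS threads:

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

void manyBlockingTasks() {
    // Submit 10,000 tasks that each block for one second.
    // Each task gets its own virtual thread; no OS thread is blocked while it sleeps.
    try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
        IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1));
                    return i;
                }));
    } // close() waits for all submitted tasks to complete
}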

The Continuation Dance

As mentioned above, the JVM schedules the virtual threads. These virtual threads are mounted on carrier threads. The scheduling comes with a pinch of magic. When the virtual thread attempts to use blocking I/O, the JVM transforms this call into a non-blocking one, unmounts the virtual thread, and mounts another virtual thread on the carrier thread. When the I/O completes, the waiting virtual thread becomes eligible again and will be re-mounted on a carrier thread to continue its execution. For the user, all this dance is invisible. Your synchronous code is executed asynchronously.

Note that the virtual thread may not be re-mounted on the same carrier thread.

Virtual threads are useful for I/O-bound workloads only

We now know we can create more virtual threads than platform threads. One could be tempted to use virtual threads to perform long computations (CPU-bound workloads). It is useless and even counterproductive. A CPU-bound workload does not benefit from quickly swapping threads while they wait for I/O to complete; it benefits from keeping a thread attached to a CPU core to compute. With tens of CPU cores, having thousands of threads is worse than useless: virtual threads will not enhance the performance of CPU-bound workloads. Even worse, when running a CPU-bound workload on a virtual thread, the virtual thread monopolizes the carrier thread on which it is mounted. It either reduces the chance for other virtual threads to run or forces the creation of new carrier threads, leading to high memory usage.

Run code on virtual threads using @RunOnVirtualThread

In Quarkus, support for virtual threads is implemented using the @RunOnVirtualThread annotation. This section briefly overviews the rationale and how to use it. There are dedicated guides for the extensions supporting that annotation.

Why not run everything on virtual threads?

As mentioned above, not everything can run safely on virtual threads. The risk of monopolization can lead to high memory usage. Also, there are situations where the virtual thread cannot be unmounted from the carrier thread. This is called pinning. Finally, some libraries use ThreadLocal to store and reuse objects. Using virtual threads with these libraries leads to massive allocation, as the intentionally pooled objects are instantiated for every (disposable and generally short-lived) virtual thread.

As of today, it is not possible to use virtual threads in a carefree manner. Following such a laissez-faire approach could quickly lead to memory and resource starvation issues. Thus, Quarkus uses an explicit model until the aforementioned issues disappear (as the Java ecosystem matures). It is also why virtual thread support is found mostly in the reactive extensions and rarely in the classic ones: we need to know when to dispatch on a virtual thread.

It is essential to understand that these issues are not Quarkus limitations or bugs but are due to the current state of the Java ecosystem which needs to evolve to become virtual thread friendly.

Monopolization cases

Monopolization has been explained in the Virtual threads are useful for I/O-bound workloads only section. When running long computations, the JVM cannot unmount the virtual thread and switch to another one until the virtual thread terminates. Indeed, the current scheduler does not support preempting tasks.

This monopolization can lead to the creation of new carrier threads to execute other virtual threads. Creating carrier threads results in creating platform threads. So, there is a memory cost associated with this creation.

Suppose you run in a constrained environment, such as containers. In that case, monopolization can quickly become a concern, as the high memory usage can lead to out-of-memory issues and container termination. The memory usage may be higher than with regular worker threads because of the inherent cost of the scheduling and virtual threads.
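For illustration only (a hypothetical endpoint, not taken from this guide), the following is the kind of CPU-bound code that monopolizes its carrier thread and is better kept on a worker thread:

@GET
@Path("/prime-count")
@RunOnVirtualThread // counter-example: the scheduler gets no chance to unmount this virtual thread
public long primeCount() {
    long count = 0;
    for (long n = 2; n < 5_000_000; n++) {
        boolean prime = true;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) { prime = false; break; }
        }
        if (prime) count++;
    }
    return count; // the carrier thread was held for the whole computation
}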

Pinning cases

The promise of "cheap blocking" might not always hold: a virtual thread might pin its carrier on certain occasions. The platform thread is blocked in this situation, precisely as it would have been in a typical blocking scenario.

According to JEP 425, this can happen in two situations:

  • when a virtual thread performs a blocking operation inside a synchronized block or method

  • when it executes a blocking operation inside a native method or a foreign function

It can be reasonably easy to avoid these situations in your own code, but verifying every dependency you use is hard. For example, while experimenting with virtual threads, we realized that versions older than 42.6.0 of the PostgreSQL JDBC driver result in frequent pinning. Most JDBC drivers still pin the carrier thread. Even worse, many libraries require code changes to avoid pinning.
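As an illustration (a hedged sketch with hypothetical names, not a Quarkus API), the first situation can often be avoided by replacing synchronized with a java.util.concurrent lock, which lets the virtual thread unmount while it waits:

import java.util.concurrent.locks.ReentrantLock;

class ConnectionHolder {

    private final ReentrantLock lock = new ReentrantLock();

    // synchronized void send(byte[] payload) { blockingIo(payload); } // would pin the carrier thread

    void send(byte[] payload) {
        lock.lock(); // a ReentrantLock parks the virtual thread instead of pinning the carrier
        try {
            blockingIo(payload); // hypothetical blocking I/O call
        } finally {
            lock.unlock();
        }
    }

    private void blockingIo(byte[] payload) {
        // ...
    }
}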

For more information, see When Quarkus meets Virtual Threads.

This information about pinning cases applies to PostgreSQL JDBC driver 42.5.4 and earlier. For PostgreSQL JDBC driver 42.6.0 and later, virtually all synchronized methods have been replaced by reentrant locks. For more information, see the Notable Changes for PostgreSQL JDBC driver 42.6.0.

The pooling case

Some libraries are using ThreadLocal as an object pooling mechanism. Extremely popular libraries like Jackson and Netty assume that the application uses a limited number of threads, which are recycled (using a thread pool) to run multiple (unrelated but sequential) tasks.

This pattern has multiple advantages, such as:

  • Allocation benefit: heavy objects are allocated only once per thread, and because the number of these threads is meant to be limited, memory usage stays bounded.

  • Thread safety: only one thread can access the object stored in the ThreadLocal, preventing concurrent access.

However, this pattern is counterproductive when using virtual threads. Virtual threads are not pooled and are generally short-lived. So, instead of a few threads, we now have many of them. For each of them, the (often large and expensive) object stored in the ThreadLocal is created and never reused, as the virtual thread is not pooled (and won't be used to run another task once the execution completes). This problem leads to high memory usage. Unfortunately, solving it requires sophisticated code changes in the libraries themselves.
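For illustration (a minimal sketch, not taken from any specific library), the pattern typically looks like this:

// Pre-virtual-thread pattern: one heavy, reusable object per (pooled) worker thread.
static final ThreadLocal<byte[]> BUFFER =
        ThreadLocal.withInitial(() -> new byte[1024 * 1024]); // ~1 MB allocated per thread

void handle() {
    // Cheap on a pooled worker thread that runs many tasks,
    // but allocated anew for every short-lived virtual thread.
    byte[] buffer = BUFFER.get();
    // ... use buffer ...
}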

Use @RunOnVirtualThread with Quarkus REST (formerly RESTEasy Reactive)

This section shows a brief example of using the @RunOnVirtualThread annotation. It also explains the various development and execution models offered by Quarkus.

The @RunOnVirtualThread annotation instructs Quarkus to invoke the annotated method on a new virtual thread instead of the current one. Quarkus handles the creation of the virtual thread and the offloading.

Since virtual threads are disposable entities, the fundamental idea of @RunOnVirtualThread is to offload the execution of an endpoint handler onto a new virtual thread instead of running it on an event-loop or worker thread (in the case of Quarkus REST).

To do so, it suffices to add the @RunOnVirtualThread annotation to the endpoint. If the Java Virtual Machine used to run the application provides virtual thread support (so Java 21 or later versions), then the endpoint execution is offloaded to a virtual thread. It will then be possible to perform blocking operations without blocking the platform thread upon which the virtual thread is mounted.

In the case of Quarkus REST, this annotation can only be used on endpoints annotated with @Blocking or considered blocking because of their signature. You can visit Execution model, blocking, non-blocking for more information.

Get started with virtual threads with Quarkus REST

Add the following dependency to your build file:

pom.xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest</artifactId>
</dependency>
build.gradle
implementation("io.quarkus:quarkus-rest")

Then, you also need to make sure that you are using Java 21+. This can be enforced in your pom.xml file with the following:

pom.xml
<properties>
    <maven.compiler.source>21</maven.compiler.source>
    <maven.compiler.target>21</maven.compiler.target>
</properties>

Three development and execution models

The example below shows the differences between three endpoints, all of them querying a fortune from the database and then returning it to the client.

  • the first one uses the traditional blocking style; it is considered blocking due to its signature.

  • the second one uses Mutiny; it is considered non-blocking due to its signature.

  • the third one uses Mutiny but in a synchronous way; since it does not return a "reactive type", it is considered blocking, and the @RunOnVirtualThread annotation can be used.

package org.acme.rest;

import org.acme.fortune.model.Fortune;
import org.acme.fortune.repository.FortuneRepository;
import io.smallrye.common.annotation.RunOnVirtualThread;
import io.smallrye.mutiny.Uni;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import java.util.List;
import java.util.Random;


@Path("")
public class FortuneResource {

    @Inject FortuneRepository repository;

    @GET
    @Path("/blocking")
    public Fortune blocking() {
        // Runs on a worker (platform) thread
        var list = repository.findAllBlocking();
        return pickOne(list);
    }

    @GET
    @Path("/reactive")
    public Uni<Fortune> reactive() {
        // Runs on the event loop
        return repository.findAllAsync()
                .map(this::pickOne);
    }

    @GET
    @Path("/virtual")
    @RunOnVirtualThread
    public Fortune virtualThread() {
        // Runs on a virtual thread
        var list = repository.findAllAsyncAndAwait();
        return pickOne(list);
    }

    // Picks a random fortune from the list.
    private Fortune pickOne(List<Fortune> list) {
        return list.get(new Random().nextInt(list.size()));
    }

}

The following table summarizes the options:

Model                              | Example of signature             | Pros                                     | Cons
Synchronous code on worker thread  | Fortune blocking()               | Simple code                              | Uses worker threads (limits concurrency)
Reactive code on event loop        | Uni<Fortune> reactive()          | High concurrency and low resource usage  | More complex code
Synchronous code on virtual thread | @RunOnVirtualThread Fortune vt() | Simple code                              | Risk of pinning, monopolization, and inefficient object pooling

Note that all three models can be used in a single application.

Use virtual thread friendly clients

As mentioned in the Why not run everything on virtual threads? section, the Java ecosystem is not entirely ready for virtual threads. So, you need to be careful, especially when using libraries doing I/O.

Fortunately, Quarkus provides a massive ecosystem that is ready to be used with virtual threads. Mutiny, the reactive programming library used in Quarkus, and the Vert.x Mutiny bindings provide the ability to write blocking code (so, no fear, no learning curve) that does not pin the carrier thread.

As a result:

  1. Quarkus extensions providing blocking APIs on top of reactive APIs can be used in virtual threads. This includes the REST Client, the Redis client, the mailer…​

  2. APIs returning Uni can be used directly using uni.await().atMost(…​). It blocks the virtual thread, without blocking the carrier thread, and also improves the resilience of your application with easy (non-blocking) timeout support, as shown in the sketch after this list.

  3. If you use a Vert.x client using the Mutiny bindings, use the andAwait() methods, which block until you get the result without pinning the carrier thread. This includes all the reactive SQL drivers.
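For example (a hedged sketch reusing the hypothetical FortuneRepository and pickOne helper from the earlier snippet, plus a java.time.Duration import):

@GET
@Path("/virtual-with-timeout")
@RunOnVirtualThread
public Fortune virtualWithTimeout() {
    // await() blocks only the virtual thread; atMost() adds a timeout.
    var list = repository.findAllAsync()
            .await().atMost(Duration.ofSeconds(2));
    return pickOne(list);
}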

Detect pinned threads in tests

We recommend using the following configuration when running tests in applications using virtual threads. It does not fail the tests, but it dumps stack traces if the code pins the carrier thread:

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>${surefire-plugin.version}</version>
  <configuration>
      <systemPropertyVariables>
        <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
        <maven.home>${maven.home}</maven.home>
      </systemPropertyVariables>
      <argLine>-Djdk.tracePinnedThreads</argLine>
  </configuration>
</plugin>

Run application using virtual threads

java -jar target/quarkus-app/quarkus-run.jar
Prior to Java 21, virtual threads were a preview feature; you need to start your application with the --enable-preview flag.
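For example (assuming the same runner JAR as above):

java --enable-preview -jar target/quarkus-app/quarkus-run.jar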

Build containers for application using virtual threads

When running your application in JVM mode (so not compiled into a native executable; for native, check the dedicated section), you can follow the containerization guide to build a container.

In this section, we use JIB to build the container. Refer to the containerization guide to learn more about the alternatives.

To containerize your Quarkus application that uses @RunOnVirtualThread, add the following properties to your application.properties:

quarkus.container-image.build=true
quarkus.container-image.group=<your-group-name>
quarkus.container-image.name=<your-container-name>
quarkus.jib.base-jvm-image=registry.access.redhat.com/ubi8/openjdk-21-runtime (1)
quarkus.jib.platforms=linux/amd64,linux/arm64 (2)
1 Make sure you use a base image supporting virtual threads. Here we use an image providing Java 21. Quarkus picks an image providing Java 21+ automatically if you do not set one.
2 Select the target architecture. You can select more than one to build multi-arch images.

Then, build your container as you would do usually. For example, if you are using Maven, run:

mvn package

Compiling a Quarkus application using virtual threads into a native executable

Using a local GraalVM installation

To compile a Quarkus application leveraging @RunOnVirtualThread into a native executable, make sure to use a GraalVM / Mandrel native-image that supports virtual threads, that is, one providing at least Java 21.

Build the native executable as indicated on the native compilation guide. For example, with Maven, run:

mvn package -Dnative

Using an in-container build

An in-container build produces Linux 64-bit executables by using a native-image compiler running in a container. It avoids having to install native-image on your machine, and also allows configuring the GraalVM version you need. Note that, to use in-container build, you must have Docker or Podman installed on your machine.

Then, add to your application.properties file:

# In-container build to get a linux 64 executable
quarkus.native.container-build=true (1)
1 Enables the in-container build
From ARM/64 to AMD/64

If you are using a Mac M1 or M2 (with an ARM64 CPU), be aware that the native executable you get from an in-container build will be a Linux executable, but built for your host (ARM64) architecture. You can use emulation to force the architecture when using Docker with the following property:

quarkus.native.container-runtime-options=--platform=linux/amd64

Be aware that it increases the compilation time…​ a lot (>10 minutes).

Containerize native applications using virtual threads

To build a container running a Quarkus application using virtual threads compiled into a native executable, you must make sure you have a Linux/AMD64 executable (or ARM64 if you are targeting ARM machines).

Make sure your application.properties contains the configuration explained in the native compilation section.

Then, build your container as you would do usually. For example, if you are using Maven, run:

mvn package -Dnative
If you ever want to build a native container image and already have an existing native image, you can set -Dquarkus.native.reuse-existing=true and the native image build will not be re-run.

Use the duplicated context in virtual threads

Methods annotated with @RunOnVirtualThread inherit the original duplicated context (see the duplicated context reference guide for details). So, the data written in the duplicated context (and the request scope, as the request scope is stored in the duplicated context) by filters and interceptors is available during the method execution (even if the filters and interceptors are not run on the virtual thread).

However, thread locals are not propagated.
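For illustration (a hedged sketch using a hypothetical @RequestScoped bean populated by a request filter):

@Path("/context")
public class ContextResource {

    @Inject
    RequestData requestData; // hypothetical @RequestScoped bean filled in by a filter

    @GET
    @RunOnVirtualThread
    public String currentUser() {
        // The request scope lives in the duplicated context, so its content is visible here,
        // even though the filter did not run on this virtual thread.
        return requestData.getUserId();
    }
}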

Virtual thread names

Virtual threads are created without a thread name by default, which is not practical for identifying the execution for debugging and logging purposes. Quarkus-managed virtual threads are named and prefixed with quarkus-virtual-thread-. You can customize this prefix, or disable the naming altogether by configuring an empty value:

quarkus.virtual-threads.name-prefix=
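Or set a custom prefix (acme-vt- is just an illustrative value):

quarkus.virtual-threads.name-prefix=acme-vt-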

Inject the virtual thread executor

In order to run tasks on virtual threads, Quarkus manages an internal ThreadPerTaskExecutor. In rare instances where you need to access this executor directly, you can inject it using the @VirtualThreads CDI qualifier:

Injecting the Virtual Thread ExecutorService is experimental and may change in future versions.
package org.acme;

import org.acme.fortune.repository.FortuneRepository;

import java.util.concurrent.ExecutorService;

import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;

import io.quarkus.logging.Log;
import io.quarkus.runtime.StartupEvent;
import io.quarkus.virtual.threads.VirtualThreads;

public class MyApplication {

    @Inject
    FortuneRepository repository;

    @Inject
    @VirtualThreads
    ExecutorService vThreads;

    void onEvent(@Observes StartupEvent event) {
        vThreads.execute(this::findAll);
    }

    @Transactional
    void findAll() {
        Log.info(repository.findAllBlocking());
    }

}

Testing virtual thread applications

As mentioned above, virtual threads have a few limitations that can drastically affect your application performance and memory usage. The junit5-virtual-threads extension provides a way to detect pinned carrier threads while running your tests. Thus, you can eliminate one of the most prominent limitations or be aware of the problem.

To enable this detection:

  • 1) Add the junit5-virtual-threads dependency to your project:

<dependency>
    <groupId>io.quarkus.junit5</groupId>
    <artifactId>junit5-virtual-threads</artifactId>
    <scope>test</scope>
</dependency>
  • 2) In your test case, add the io.quarkus.test.junit5.virtual.VirtualThreadUnit and io.quarkus.test.junit5.virtual.ShouldNotPin annotations:

@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@VirtualThreadUnit // Use the extension
@ShouldNotPin // Detect pinned carrier thread
class TodoResourceTest {
    // ...
}

When you run your test (remember to use Java 21+), Quarkus detects pinned carrier threads. When it happens, the test fails.

The @ShouldNotPin annotation can also be used directly on methods.

The junit5-virtual-threads extension also provides a @ShouldPin annotation for cases where pinning is unavoidable. The following snippet demonstrates the @ShouldPin annotation usage.

@VirtualThreadUnit // Use the extension
public class LoomUnitExampleTest {

    CodeUnderTest codeUnderTest = new CodeUnderTest();

    @Test
    @ShouldNotPin
    public void testThatShouldNotPin() {
        // ...
    }

    @Test
    @ShouldPin(atMost = 1)
    public void testThatShouldPinAtMostOnce() {
        codeUnderTest.pin();
    }

}
