Towards faster builds
When we initially released Quarkus, the industry was very much focused on microservices, and that was our primary target. However, Quarkus is also perfectly suited for large monoliths, whether you are migrating existing applications to a more modern runtime or building new ones from scratch.
Quarkus has always been able to handle large applications, but we have recently made significant improvements in this area, particularly when it comes to build times. In this post, we will walk through some of these improvements.
How the story started
Once upon a time, I came across an article comparing Spring Boot and Quarkus for building large monoliths (it’s in French, apologies to non-French speakers, but the results are largely self-explanatory).
The article compared a Spring Boot application and a Quarkus application implementing the same functionality and even included a generator to create both applications. The generated applications are simple: a few entities, some REST services, and typical CRUD endpoints. What made it really interesting was that you could easily scale up the number of entities and services to see how both frameworks handled larger applications.
I truly value this kind of feedback: it not only helps us identify areas for improvement, but also provides a reproducible way to explore them.
As expected, Quarkus excelled in memory consumption and startup time. But the build time was noticeably higher than Spring Boot’s. Again, this wasn’t a surprise. After all, Quarkus shifts more work to build time. But the difference was still significant enough to warrant a closer look.
Long story short: we investigated and we improved. A lot.
Thanks
First, I would like to thank the author of the original article, SpaceFox, for writing it and for providing such a useful generator.
As is often the case in the Quarkus world, I was not alone on this journey. I would therefore like to thank everyone who contributed through code, discussions, reviews, insights, and feedback. In alphabetical order: Tamás Cservenák, Sanne Grinovero, Martin Kouba, David Lloyd, Alexey Loubyansky, Matej Novotny, Yoann Rodière, and Ladislav Thon (and if I missed anyone, please let me know!).
This covers the core Quarkus work, but, as discussed later, several improvements were also made to the SmallRye OpenAPI project. For those contributions, I would like to thank Mike Edgar and Martin Panzer.
Our journey to faster builds
This kind of journey naturally comes with its fair share of profiling and staring at flame graphs. It’s not just about spotting the hotspots, but also about evaluating whether your changes actually move the needle (in the right direction, hopefully!). In the Java world, we’ve been lucky to have a tool as powerful as Async Profiler by our side.
As with any optimization effort, the key lies in choosing your battles wisely and carefully weighing the inevitable trade-offs.
Code optimizations
A lot of effort was invested in optimizing various parts of the build process: reducing memory allocations and optimizing algorithms and data structures were a big part of it.
This work resulted in numerous pull requests across Quarkus, Jandex, and even ByteBuddy.
Parallelization
Our build process is already massively parallelized, but we identified a few areas where this was not the case, for example, the generation of Hibernate ORM proxies. We fixed that.
Another area for improvement was the creation of large JAR archives, which is inherently slow because it involves reading and compressing a significant number of resources, making it both I/O- and CPU-intensive. Until now, we were building JARs using a ZipFileSystem, adding resources one by one in a single thread. We have since switched to using the parallel compression support provided by Commons Compress.
This change required some fairly extensive refactoring of the JAR assembly code, but it was definitely worth it.
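To give a feel for the approach, here is a simplified sketch (not the actual Quarkus code; the class and method names are made up for illustration) of parallel archive creation with Commons Compress: ParallelScatterZipCreator deflates entries on a thread pool and stitches the archive together at the end.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;

import org.apache.commons.compress.archivers.zip.ParallelScatterZipCreator;
import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream;
import org.apache.commons.compress.parallel.InputStreamSupplier;

// Simplified sketch: compress all files under a directory into an archive,
// deflating entries in parallel (no manifest handling, ordering, etc.).
public class ParallelJarSketch {

    public static void writeArchive(Path sourceDir, Path targetJar) throws Exception {
        // Compression tasks are submitted to an internal thread pool.
        ParallelScatterZipCreator creator = new ParallelScatterZipCreator();

        try (var files = Files.walk(sourceDir)) {
            files.filter(Files::isRegularFile).forEach(file -> {
                ZipArchiveEntry entry = new ZipArchiveEntry(
                        sourceDir.relativize(file).toString().replace('\\', '/'));
                entry.setMethod(ZipEntry.DEFLATED); // the compression method must be set up front
                InputStreamSupplier supplier = () -> {
                    try {
                        return Files.newInputStream(file);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                };
                creator.addArchiveEntry(entry, supplier);
            });
        }

        try (ZipArchiveOutputStream out = new ZipArchiveOutputStream(targetJar.toFile())) {
            // Waits for all compression tasks and writes the final archive.
            creator.writeTo(out);
        }
    }
}

The real JAR assembly code is considerably more involved (manifest, additional metadata, the custom Quarkus packagings), which is why the refactoring was not a small one.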
Doing less
Note: all the other optimizations equally benefit Gradle builds, but the following change is specific to Maven, as it relates to the Maven default lifecycle.
With all these optimizations, the time spent in the various goals of the quarkus-maven-plugin was cut in half compared to our reference version, 3.25.1.
That’s great… but building our sample application still took around two minutes, which is more than we would like.
It’s time to step back and look at the bigger picture.
For Quarkus applications, we build our own JARs for two main reasons:
- We need to include additional resources and metadata.
- We introduced custom JAR packagings designed to improve startup time.
This is the process we optimized by leveraging the parallel compression support provided by Commons Compress.
However, when using Maven, a Quarkus application is still a traditional jar Maven project.
It follows the standard lifecycle for the jar packaging, which means Maven will also build a conventional JAR using the maven-jar-plugin.
Taking a step back, we actually don’t need this JAR in 99% of cases, so we should avoid building it (while still keeping the flexibility to do so when absolutely necessary).
In Quarkus 3.31, we introduced the quarkus packaging, which comes with its own lifecycle.
This packaging is intended to be used only for the Quarkus application module itself.
It automatically binds the goals of the quarkus-maven-plugin, resulting in less boilerplate in your pom.xml (and no changes required when new goals are added).
More importantly, it binds neither the maven-jar-plugin nor the maven-install-plugin executions, which leads to significantly faster builds for large applications and benefits applications of all sizes.
For newly generated applications, this new quarkus packaging will be the default.
Once 3.31 is released, you will also be able to switch your existing applications to the new packaging by applying the following changes:
diff --git a/pom.xml b/pom.xml
index 98660b8..3c60220 100644
--- a/pom.xml
+++ b/pom.xml
@@ -5,6 +5,7 @@
<groupId>fr.spacefox.perftests.quarkus</groupId>
<artifactId>perftests-quarkus</artifactId>
<version>1.0.0-SNAPSHOT</version>
+ <packaging>quarkus</packaging> (1)
<properties>
<compiler-plugin.version>3.14.0</compiler-plugin.version>
@@ -66,16 +67,6 @@
<artifactId>quarkus-maven-plugin</artifactId>
<version>${quarkus.platform.version}</version>
<extensions>true</extensions> (2)
- <executions> (3)
- <execution>
- <goals>
- <goal>build</goal>
- <goal>generate-code</goal>
- <goal>generate-code-tests</goal>
- <goal>native-image-agent</goal>
- </goals>
- </execution>
- </executions>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
(1) Use the quarkus packaging instead of the default jar packaging.
(2) This is important and has been present in the generated projects for quite some time. Add it if not already there.
(3) Drop the goals; they will be handled automatically and we don’t want to run them twice.
To give you an idea of the impact, in our sample large application, this change alone reduced the build time from two minutes down to 37 seconds.
Follow-ups
SmallRye OpenAPI
In November 2025, I presented part of this work during the first Quarkus community call (see here for more information about our community calls).
Following this presentation, Martin Panzer, one of our regular community contributors, opened an issue in the SmallRye OpenAPI project highlighting how slow the build could be for large applications. He provided a solid reproducer, which enabled Mike Edgar to implement several improvements that significantly reduced the contribution of SmallRye OpenAPI to the overall build time.
These changes will directly benefit Quarkus build times, but since SmallRye OpenAPI is also used by other runtimes, they will positively impact the broader Java ecosystem as well.
The takeaway is simple: when you notice something odd, report it; we might be able to improve it.
The infamous ClassTooLargeException
In Quarkus, we generate a significant amount of bytecode, and for large applications you need to be careful, as scale can cause our generated bytecode to hit certain limits (for example, method size or class size limits).
The original French article mentioned hitting such a limit at a particular scale. To be fair, that scale was already quite large, but it’s still not ideal to run into an arbitrary limit just because one class happens to push you over the edge.
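To make the limit concrete: ClassTooLargeException is the error ASM’s ClassWriter raises when a generated class needs more than 65,535 constant pool entries. The following contrived sketch (purely illustrative, not actual Quarkus-generated code) trips it with a single, very wide class.

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.Opcodes;

// Contrived illustration: each field adds entries to the constant pool, so a
// sufficiently wide class exceeds the 65,535-entry limit.
public class TooLargeSketch {
    public static void main(String[] args) {
        ClassWriter cw = new ClassWriter(0);
        cw.visit(Opcodes.V17, Opcodes.ACC_PUBLIC, "com/example/Huge", null, "java/lang/Object", null);
        for (int i = 0; i < 70_000; i++) {
            cw.visitField(Opcodes.ACC_PUBLIC, "field" + i, "I", null, null).visitEnd();
        }
        cw.visitEnd();
        cw.toByteArray(); // throws org.objectweb.asm.ClassTooLargeException
    }
}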
We recently alleviated this limitation, and Quarkus can now handle much larger applications. This improvement will also be available in Quarkus 3.31.
Java 25
Recently, we have rewritten most of our bytecode generation to use Gizmo 2, which is built on top of the Class-File API. This work is still ongoing, but key components such as ArC, our CDI implementation, are already relying on it.
To preserve compatibility with Java 17 and 21, we currently use a backport of the Class-File API, but the Class-File API still relies on some classes from the underlying JDK. Several of these classes have seen significant optimizations in Java 25, and as a result, our bytecode generation performance improves noticeably when running on Java 25.
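If you have never seen the Class-File API, here is a minimal sketch (plain java.lang.classfile as it ships in recent JDKs, not Gizmo 2; the generated class name is made up) that builds a trivial class with an empty no-arg constructor and writes it to disk.

import java.lang.classfile.ClassFile;
import java.lang.constant.ClassDesc;
import java.lang.constant.ConstantDescs;
import java.lang.constant.MethodTypeDesc;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch: generate com.example.Generated with an empty public constructor.
public class ClassFileSketch {
    public static void main(String[] args) throws Exception {
        MethodTypeDesc noArgVoid = MethodTypeDesc.of(ConstantDescs.CD_void);
        byte[] bytes = ClassFile.of().build(ClassDesc.of("com.example.Generated"), classBuilder ->
                classBuilder.withMethodBody("<init>", noArgVoid, ClassFile.ACC_PUBLIC, code -> code
                        .aload(0)                                                    // push 'this'
                        .invokespecial(ConstantDescs.CD_Object, "<init>", noArgVoid) // call Object.<init>()
                        .return_()));
        Files.write(Path.of("Generated.class"), bytes);
    }
}

Gizmo 2 layers a higher-level, more convenient API on top of this kind of code, which is why JDK-level optimizations to the underlying classes show up directly in our build times.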
As Quarkus 3.31 will provide full support for Java 25, we recommend using it to build your applications and take advantage of these performance improvements.
Conclusion
Quarkus has always pushed the boundaries of developer experience: we introduced Dev Mode, pioneered the concept of Dev Services, and much more.
But sometimes, improving developer experience means going back to the basics: build times.
That’s exactly what we focused on here, and we hope you’ll enjoy building your Quarkus applications faster (and greener!).
And if you spot additional opportunities to improve our build process, don’t hesitate to open an issue: we’re always happy to hear new ideas.
Come Join Us
We value your feedback a lot, so please report bugs and ask for improvements… Let’s build something great together!
If you are a Quarkus user or just curious, don’t be shy and join our welcoming community:
- provide feedback on GitHub;
- craft some code and push a PR;
- discuss with us on Zulip and on the mailing list;
- ask your questions on Stack Overflow.