Micronaut Functions in GraalVM Native Images deployed to AWS Lambda

Learn how to deploy a GraalVM Image of a Micronaut Function to AWS Lambda

Authors: Will Buck

Micronaut Version: 1.2.6

1 Introduction

AWS Lambda is a dynamically scaled and billed-per-execution compute service. Instances of Lambdas are added and removed dynamically.

When a new instance handles its first request, the response time increases; this is called a cold start. After that request is processed, the instance stays alive for a while (approximately 10 minutes) to be reused for subsequent requests.

In the guide Micronaut Functions deployed in AWS Lambda, we used the Java runtime. Cold starts may cause some requests to take more than 5 seconds.

To eliminate cold starts, in this guide we combine several technologies:

In Lambda proxy integration, when a client submits an API request, API Gateway passes the raw request as-is to the integrated Lambda function.

This request data includes the request headers, query string parameters, URL path variables, payload, and API configuration data. The configuration data can include current deployment stage name, stage variables, user identity, or authorization context (if any).
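For illustration, an abridged proxy event might look like the following; the field values here are illustrative, and the real event also carries multi-value headers and a much fuller requestContext:

```json
{
  "resource": "/{proxy+}",
  "path": "/find-primes-below/10",
  "httpMethod": "GET",
  "headers": { "Accept": "application/json" },
  "queryStringParameters": null,
  "pathParameters": { "proxy": "find-primes-below/10" },
  "stageVariables": null,
  "requestContext": {
    "stage": "demo",
    "identity": { "sourceIp": "203.0.113.10" }
  },
  "body": null,
  "isBase64Encoded": false
}
```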

A runtime is a program that runs a Lambda function’s handler method when the function is invoked. You can include a runtime in your function’s deployment package in the form of an executable file named bootstrap.
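To make that concrete, here is a sketch of how such a bootstrap file can be produced for a native-image deployment. The binary name func and the single exec line are assumptions for illustration, not the exact file the Micronaut feature generates:

```shell
# Write a minimal bootstrap file for a Lambda custom runtime (sketch).
# The zip uploaded to Lambda must contain this executable at its root;
# 'func' is the assumed name of the native-image binary alongside it.
cat > bootstrap <<'EOF'
#!/bin/sh
set -eu
# Launch the native binary; the Micronaut custom runtime inside it
# polls the Lambda Runtime API for invocation events.
exec ./func
EOF
chmod +x bootstrap
```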

GraalVM Native Image allows you to compile Java code ahead of time into a standalone executable, called a native image. This executable includes the application, its libraries, and the JDK; it does not run on the Java VM, but includes necessary components like memory management and thread scheduling from a different virtual machine. The resulting program has faster startup time and lower runtime memory overhead compared to running on a Java VM.

1.1 What you'll need to get started

To complete this guide, you will need the following:

  • Some time on your hands

  • A decent text editor or IDE

  • JDK 1.8 or greater installed with JAVA_HOME configured appropriately

You will also want Micronaut installed; with SDKMAN!, run:

sdk install micronaut

Optionally, you may also want:

  • The AWS SAM CLI tool, to test deploying the Lambda architecture locally (so there are no surprises in AWS proper!)

  • The AWS CLI tool, which can be used to deploy your entire architecture from the command line

1.2 Solution

We recommend that you follow the instructions in the next sections and create the app step by step. However, you can go right to the completed example.


Then, cd into the complete folder, which you will find in the root of the downloaded/cloned project.

2 Writing the Application

Using Lambda Proxy Integration in API Gateway allows you to code a serverless application just as you would a traditional app, with controllers to handle HTTP requests.

Use the Micronaut feature aws-api-gateway-graal to create an app ready to be deployed to AWS Lambda with a custom runtime:

mn create-app example.micronaut.complete --features aws-api-gateway-graal

The aws-api-gateway-graal feature includes the micronaut-function-aws-custom-runtime dependency, which provides the MicronautLambdaRuntime class, an implementation you can use to execute a custom runtime as described in the AWS documentation, Publishing a custom runtime.

dependencies {
    implementation("io.micronaut.aws:micronaut-function-aws-custom-runtime") {
        exclude group: "com.fasterxml.jackson.module", module: "jackson-module-afterburner"
    }
}

Moreover, the aws-api-gateway-graal feature includes the micronaut-function-aws-api-proxy dependency, which adds Micronaut support for the AWS Serverless Java Container project.

dependencies {
    implementation("io.micronaut.aws:micronaut-function-aws-api-proxy") {
        exclude group: "com.fasterxml.jackson.module", module: "jackson-module-afterburner"
    }
}

Additionally, we have moved the micronaut-http-server-netty and micronaut-http-client dependencies to the test classpath; it is important to remove unused dependencies from the runtime classpath:

dependencies {
    testImplementation "io.micronaut:micronaut-http-client"
    testImplementation "io.micronaut:micronaut-http-server-netty"
}

We’ll be keeping it simple and creating a controller to find all the prime numbers less than N, using the Sieve of Eratosthenes.

2.1 PrimeFinderService

Just as any other Micronaut app, we’ll want to isolate the "business logic" of how we compute the prime numbers to a service.

As we mentioned previously, an efficient algorithm for calculating the primes below a given number N is the Sieve of Eratosthenes, which steps up from 2 to N and crosses out all the multiples of each prime it finds, so that subsequent passes only consider the numbers left over.
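Before looking at the O(n) variant, the classic sieve itself is short enough to sketch in plain Java; the class and method names here are ours, for illustration only, and are not part of the project:

```java
import java.util.ArrayList;
import java.util.List;

public class ClassicSieve {
    // Classic Sieve of Eratosthenes: mark multiples of each prime as composite.
    static List<Integer> primesBelow(int n) {
        boolean[] composite = new boolean[Math.max(n, 2)];
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i < n; i++) {
            if (!composite[i]) {
                primes.add(i);
                // Cross off every multiple of i, starting at i * i
                // (smaller multiples were already crossed off by smaller primes)
                for (long j = (long) i * i; j < n; j += i) {
                    composite[(int) j] = true;
                }
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesBelow(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```

This runs in O(n log log n); the adaptation below trades extra memory (a smallest-prime-factor table) for a strictly linear pass.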

See below for an adaptation of this O(n) Java implementation of the algorithm.

package example.micronaut;

import javax.inject.Singleton;
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

@Singleton
public class PrimeFinderService {
    // Credit to https://www.geeksforgeeks.org/sieve-eratosthenes-0n-time-complexity/
    // for this clever O(n) implementation of the Sieve
    public final int MAX_SIZE = 1000001;
    // isPrime[i] is true if i is prime
    // SPF[i] stores the smallest prime factor of i
    // (e.g. the smallest prime factor of both 8 and 16 is 2,
    // so SPF[8] = 2 and SPF[16] = 2)
    private Vector<Boolean> isPrime = new Vector<>(MAX_SIZE);
    private Vector<Integer> SPF = new Vector<>(MAX_SIZE);

    public PrimeFinderService() {
        long startTime = System.currentTimeMillis();

        // Init the isPrime and SPF vectors
        for (int i = 0; i < MAX_SIZE; i++) {
            isPrime.add(true);
            SPF.add(0);
        }
        // 0 and 1 are not prime
        isPrime.set(0, false);
        isPrime.set(1, false);

        long endTime = System.currentTimeMillis();
        System.out.println("Total execution time: " + (endTime - startTime) + "ms");
    }

    public List<Integer> findPrimesLessThan(int n) {
        // Fill in the rest of the entries
        List<Integer> prime = new ArrayList<>();
        for (int i = 2; i < n; i++) {
            if (isPrime.get(i)) {
                // i is prime, so put it into the prime[] list
                prime.add(i);

                // A prime number is its own smallest prime factor
                SPF.set(i, i);
            }

            // Mark all multiples i * prime[j] as not prime by setting
            // isPrime[i * prime[j]] = false, and record prime[j] as the
            // smallest prime factor of i * prime[j]
            // (e.g. for i = 5, j = 0, prime[j] = 2: i * prime[j] = 10,
            // whose smallest prime factor is 2, i.e. prime[j]).
            // This inner loop runs only once per composite number.
            for (int j = 0;
                 j < prime.size() &&
                         i * prime.get(j) < n && prime.get(j) <= SPF.get(i);
                 j++) {
                isPrime.set(i * prime.get(j), false);

                // Record the smallest prime factor of i * prime[j]
                SPF.set(i * prime.get(j), prime.get(j));
            }
        }
        return prime;
    }
}

2.2 PrimeFinderController

So unlike the previous guide, this lambda operates just like a standard Micronaut app.

Let’s add a small wrapper class for our responses.

package example.micronaut;

import io.micronaut.core.annotation.Introspected;

import java.util.List;

@Introspected
public class PrimeFinderResponse {

    private String message;
    private List<Integer> primes;

    public PrimeFinderResponse() {
    }

    public List<Integer> getPrimes() {
        return primes;
    }

    public void setPrimes(List<Integer> primes) {
        this.primes = primes;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
Then, we’ll replace the ExampleController.java with a PrimeFinderController that utilizes our service from the previous step.

package example.micronaut;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Controller("/") (1)
public class PrimeFinderController {
    private static final Logger LOG = LoggerFactory.getLogger(PrimeFinderController.class); (2)

    private final PrimeFinderService primeFinderService;

    public PrimeFinderController(PrimeFinderService primeFinderService) { (3)
        this.primeFinderService = primeFinderService;
    }

    @Get("/find-primes-below/{number}")
    public PrimeFinderResponse findPrimesBelow(int number) {
        PrimeFinderResponse resp = new PrimeFinderResponse();
        if (number >= primeFinderService.MAX_SIZE) {
            if (LOG.isInfoEnabled()) {
                LOG.info("This number is too big, you can't possibly want to know all the primes below a number this big.");
            }
            resp.setMessage("This service only returns lists for numbers below " + primeFinderService.MAX_SIZE);
            return resp;
        }
        if (LOG.isDebugEnabled()) {
            LOG.debug("Computing all the primes smaller than {} ...", number);
        }
        resp.setPrimes(primeFinderService.findPrimesLessThan(number));
        return resp;
    }
}
1 Note this is just like a regular Micronaut controller, using the @Controller annotation
2 Be sure to add a LOG so that you will be able to see log output in CloudWatch
3 We want to use constructor-based injection to get our PrimeFinderService in the controller

We’ll also want to modify the logback.xml to set the DEBUG level for our example.micronaut package.


<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%cyan(%d{HH:mm:ss.SSS}) %gray([%thread]) %highlight(%-5level) %magenta(%logger{36}) - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT" />
    </root>
    <logger name="example.micronaut" level="DEBUG"/> <!-- <1> -->
</configuration>
1 This is the line we need to add, that sets the log level to DEBUG for our package

You can test your serverless app as you would normally test your app:

package example.micronaut;

import io.micronaut.http.HttpRequest;
import io.micronaut.http.client.HttpClient;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.runtime.server.EmbeddedServer;
import io.micronaut.test.annotation.MicronautTest;
import org.junit.jupiter.api.Test;

import javax.inject.Inject;

import java.util.Collections;

import static org.junit.jupiter.api.Assertions.assertEquals;

@MicronautTest
public class PrimeFinderControllerTest {

    @Inject
    EmbeddedServer server;

    @Inject
    @Client("/")
    HttpClient client;

    @Test
    void testPrimesBelow3() {
        PrimeFinderResponse response = client.toBlocking()
                .retrieve(HttpRequest.GET("/find-primes-below/3"), PrimeFinderResponse.class);
        assertEquals(Collections.singletonList(2), response.getPrimes());
    }
}

3 Running Lambda Locally via SAM

This step is optional. To follow along, you’ll need the AWS SAM CLI tool installed locally.

The Serverless Application Model, or SAM, is a framework for defining an application using a serverless architecture.

It is also a CLI tool provided by Amazon to mock the AWS environment locally.

This is going to allow us to run our Micronaut app as a GraalVM native image, within an AWS Lambda custom runtime, all on our own machine.

The Micronaut team has provided a simple shell script in the aws-api-gateway-graal feature called sam-local.sh, which builds the app into a GraalVM native image via Docker and packages it, together with the bootstrap file required by AWS Lambda custom runtimes, into a simple zip.

It then executes the SAM CLI tool with the (also included) sam.yaml file, which describes the CloudFormation architecture of our serverless application (in this case, a single endpoint, run via Lambda and triggered via API Gateway).

docker build . -t prime-finder (1)
mkdir -p build
docker run --rm --entrypoint cat prime-finder  /home/application/function.zip > build/function.zip (2)

sam local start-api -t sam.yaml -p 3000 (3)
1 We build the application zip in a Docker container (using the project's Dockerfile)
2 This extracts the function.zip file built by the Docker container to the build/ directory, for use by SAM local
3 This is the command to run our infrastructure locally

Try running this yourself, via

./sam-local.sh

And then (after some time; building the native image in Docker and orchestrating the SAM environment can take a couple of minutes), try curl-ing your endpoint (it should start up on port 3000 within the SAM Docker container, forwarded to your machine):

curl localhost:3000/find-primes-below/999

4 Deploying to AWS Lambda

Now it’s time to try things out in AWS!

We’ve included a deploy.sh file that takes care of the entire Lambda deployment, if you’d prefer that to the web-based AWS Console.

First we need to build the zip file that comprises our AWS Lambda custom runtime (if you followed along with the previous section, you can skip this as sam-local.sh already did it for you).

Building the function.zip deployable
docker build . -t prime-finder
mkdir -p build
docker run --rm --entrypoint cat prime-finder  /home/application/function.zip > build/function.zip

Now we’ll want to navigate to https://console.aws.amazon.com/lambda/home and click Create function

lambda 1 create fn

We’ll be prompted for some inputs: fill out the "Function name" as "prime-finder" and, under "Runtime", choose "Custom runtime → Provide your own bootstrap".

lambda 2 create inputs
lambda 3 create custom runtime detail

Now under "Permissions" at the bottom, we’ll need to create a new lambda execution role. AWS can take care of this for you, just select the first radio button!

lambda 4 create role

You should be brought to a new page to finish configuring the details of your new "prime-finder" lambda.

Next we need to upload the function.zip file that docker placed in our build/ directory using the "Code entry type" dropdown.

lambda 5 add code zip

Finally, we have to click "Save" up in the top right, and wait for the zip to finish uploading, which may take 20-30 seconds.

lambda 6 wait for save

And we have our lambda! Clicking "Test" on this page isn’t going to do much of anything, as we still need the second piece of the puzzle.

5 Connecting our Lambda to API Gateway

Let’s get our Lambda an API Gateway trigger!

Navigate to https://console.aws.amazon.com/apigateway/home and click on the "Get Started" button in the middle of the screen. If you’ve created an API Gateway in the past, you’ll instead need to click "Create API" in the top right.

You’ll be presented with an Example API, but we want to create our own. Select the "New API" radio and give the API a name, then click "Create API"

gateway 1 create inputs

Now that we have our API, we need to create a catch-all "proxy" resource for our Lambda (we want our Micronaut app to handle routing).

Use the "Actions" dropdown and select "Create Resource"

gateway 2 create resource

Checking the "Configure as proxy resource" checkbox will automatically fill in the name and path, then we just need to click "Create Resource"

gateway 3 proxy

Next we need to connect this API to our lambda.

"Lambda Function Proxy" should already be selected, so just start typing our lambda’s name in the in the "Lambda Function" autocomplete box and select it once it pops up. Then click "Save"

gateway 4 connect lambda

At this point, we can use the AWS console to test out triggering our API, using the "Test" button on the left hand side of the resource visualization.

gateway 5 test the api

As our app is expecting GET requests at /find-primes-below/{number}, that’s how we’ll fill in the test details.

gateway 6 test details

You should receive a 200 response with the list of primes.

gateway 7 test success
If you get an error, check the "Path" input box to ensure it matches the path of your controller method, and check the Lambda config to ensure the function zip uploaded successfully.

We’re almost to the finish line now! We’ve confirmed that everything works properly, but our API is not yet publicly accessible.

To make that happen, we need to "Deploy" our API. From the "Actions" dropdown, select "Deploy API"

gateway 8 deploy api

APIs can be deployed in "Stages". Commonly you might have a "test" and "prod" stage where changes would naturally propagate, but for the purposes of this guide we’ll just create one new stage called "demo".

gateway 9 deploy details

Click "Deploy", and we should get the URL for our newly deployed API!

gateway 10 deploy success url display

Now we can use any API client we like to test our new url.

gateway 11 curl

Congratulations! You’ve got yourself your first Micronaut GraalVM API with AWS!

6 Next Steps

Read more about Serverless Functions inside Micronaut.

Read more about Micronaut Lambda support within Micronaut.

Learn more about Micronaut's GraalVM integration.

7 Help with Micronaut

OCI sponsored the creation of this guide. OCI offers several Micronaut services:

Free consultation

The OCI Micronaut Team includes Micronaut co-founders Jeff Scott Brown and Graeme Rocher. Check out our Micronaut courses and learn from the engineers who developed, matured, and maintain Micronaut.

Micronaut OCI Team