AWS CDK — Where Imperative Meets Declarative
Hello, Java, my old friend, I’ve come to build with you again…
This is Part 1 of a series. Subsequent parts can be found here:
Part 2 — AWS CDK for EKS — Kubernetes Manifest Handling
Part 3 — AWS CDK for EKS — Handling Helm Charts

I started writing Java in 1997, with version 1.1.3. I was building Lotus Notes apps back then, and Java made it easier to customize Notes than the C API we had been using. Over the next two decades I continued to work with Java and other technologies (enterprise web apps, Spring, containers, clouds, DevOps, etc.). However, the need and opportunity for Java development changed as my roles changed, and I was writing less and less Java as I got deeper into cloud computing and Kubernetes.
AWS Cloud Development Kit
In 2020, I found Java again when I started using the AWS Cloud Development Kit (CDK). With the AWS CDK, I found myself writing Java apps again, and enjoying it, as I did when I was working with the AWS Java SDK and even Spring Data. For me, Java was exciting again. I dusted off my Java (and Maven) coding skills to use the AWS CDK. I could have used Python, TypeScript, JavaScript, or even C#, but my deep Java background made the choice easy.
Polyglotism with jsii
The AWS CDK is built on top of TypeScript and jsii. With jsii, developers create type-annotated bundles from TypeScript modules, which can be used to auto-generate idiomatic packages in a variety of target languages. The generated types proxy calls to an embedded JavaScript VM, effectively allowing jsii modules to be “written once and run everywhere.”
Developer Enablement
The AWS CDK enables developers to use “familiar languages” to write applications that create AWS cloud resources. The resources are actually created via AWS CloudFormation templates, and the AWS CDK apps are used to synthesize (synth), deploy, and destroy these stacks. So, one could argue that the AWS CDK uses an imperative approach to creating the artifacts (AWS CloudFormation templates) that are used for a declarative approach to deploy resources in AWS as stacks. In my opinion, this enables developers to move faster, using tools already within their reach, while still “coloring inside the lines” with declarative tools like AWS CloudFormation.
The AWS CDK leverages the same AWS IAM permissions through which any other AWS account access is controlled, so again, developers building in AWS are using familiar tools. AWS resources are applied to accounts via CloudFormation change sets, which make it easier for builders to review and apply CloudFormation changes to new and existing stacks.
Getting Started With AWS CDK
To install the AWS CDK on my Macbook Pro, I ran:
npm install -g aws-cdk
Then I checked the version of the CDK with:
cdk --version
1.91.0 (build 0f728ce)
I created a CDK project, in an empty directory, with the following command:
cdk init app --language java
Applying project template app for java

# Welcome to your CDK Java project!

This is a blank project for Java development with CDK.

The `cdk.json` file tells the CDK Toolkit how to execute your app.

It is a [Maven](https://maven.apache.org/) based project, so you can open this project with any Maven compatible Java IDE to build and run tests.

## Useful commands

* `mvn package` compile and run tests
* `cdk ls` list all stacks in the app
* `cdk synth` emits the synthesized CloudFormation template
* `cdk deploy` deploy this stack to your default AWS account/region
* `cdk diff` compare deployed stack with current state
* `cdk docs` open CDK documentation

Enjoy!

Initializing a new git repository...
Executing 'mvn package'
✅ All done!
The project below was created:

Except for adding Maven dependencies to the pom.xml file (seen below) for the AWS CDK features I used, I kept the project unchanged.
...
<!-- AWS Cloud Development Kit -->
<dependency>
    <groupId>software.amazon.awscdk</groupId>
    <artifactId>core</artifactId>
    <version>${cdk.version}</version>
</dependency>
<dependency>
    <groupId>software.amazon.awscdk</groupId>
    <artifactId>s3</artifactId>
    <version>${cdk.version}</version>
</dependency>
<dependency>
    <groupId>software.amazon.awscdk</groupId>
    <artifactId>iam</artifactId>
    <version>${cdk.version}</version>
</dependency>
<dependency>
    <groupId>software.amazon.awscdk</groupId>
    <artifactId>eks</artifactId>
    <version>${cdk.version}</version>
</dependency>
<dependency>
    <groupId>software.amazon.awscdk</groupId>
    <artifactId>ec2</artifactId>
    <version>${cdk.version}</version>
</dependency>
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <version>${junit.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>${junit.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.18.1</version>
    <scope>test</scope>
</dependency>
...
The NewCdkProjectApp.java file refers to the NewCdkProjectStack object that is instantiated to synthesize the CloudFormation template.
package com.myorg;

import software.amazon.awscdk.core.App;

public class NewCdkProjectApp {
    public static void main(final String[] args) {
        App app = new App();
        new NewCdkProjectStack(app, "NewCdkProjectStack");
        app.synth();
    }
}
The NewCdkProjectStack.java file is where the Java code that defines the AWS resources in the CloudFormation stack is written. This stack creates an Amazon S3 bucket and an AWS IAM role with one attached policy.
package com.myorg;

import software.amazon.awscdk.core.Construct;
import software.amazon.awscdk.core.RemovalPolicy;
import software.amazon.awscdk.core.Stack;
import software.amazon.awscdk.core.StackProps;
import software.amazon.awscdk.services.iam.IManagedPolicy;
import software.amazon.awscdk.services.iam.ManagedPolicy;
import software.amazon.awscdk.services.iam.Role;
import software.amazon.awscdk.services.iam.ServicePrincipal;
import software.amazon.awscdk.services.s3.BlockPublicAccess;
import software.amazon.awscdk.services.s3.Bucket;
import software.amazon.awscdk.services.s3.BucketEncryption;

import java.util.ArrayList;
import java.util.List;

public class NewCdkProjectStack extends Stack {
    public NewCdkProjectStack(final Construct scope, final String id) {
        this(scope, id, null);
    }

    public NewCdkProjectStack(final Construct scope, final String id, final StackProps props) {
        super(scope, id, props);

        // The code that defines your stack goes here
        Bucket.Builder.create(this, "MyFirstBucket")
                .versioned(true)
                .bucketName("cdk-unique-bucket-name")
                .encryption(BucketEncryption.S3_MANAGED)
                .blockPublicAccess(BlockPublicAccess.BLOCK_ALL)
                .removalPolicy(RemovalPolicy.DESTROY)
                .build();

        List<IManagedPolicy> policies = new ArrayList<>();
        policies.add(ManagedPolicy
                .fromManagedPolicyArn(this, "admin",
                        "arn:aws:iam::aws:policy/AdministratorAccess"));

        Role.Builder.create(this, "SCLauncher")
                .roleName("SCLauncher")
                .description("Role to provision project in ServiceCatalog, used by Service Catalog service")
                .managedPolicies(policies)
                .assumedBy(ServicePrincipal.Builder.create("servicecatalog.amazonaws.com").build())
                .build();
    }
}
As seen in the Java code example, the AWS CDK uses the builder pattern to create Java objects in a directed manner, which helps remove variability from the creation process. The AWS CDK also exposes a fluent API (using Java method cascading) that helps developers move quickly through object configuration. In my opinion, fluent interfaces help developers better understand the object graph they are creating.
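As a plain-Java illustration of that pattern (the BucketSpec class below is invented for this sketch and is not part of the AWS CDK), a minimal builder with a fluent interface looks like this:

```java
// Minimal, hypothetical builder sketch -- not AWS CDK code. It mirrors the
// shape of Bucket.Builder.create(...).versioned(true)...build().
class BucketSpec {
    private final String name;
    private final boolean versioned;

    private BucketSpec(Builder b) {
        this.name = b.name;
        this.versioned = b.versioned;
    }

    public String getName() { return name; }
    public boolean isVersioned() { return versioned; }

    public static Builder create(String name) {
        return new Builder(name);
    }

    public static final class Builder {
        private final String name;
        private boolean versioned = false;

        private Builder(String name) { this.name = name; }

        // Each setter returns the builder itself, enabling method cascading.
        public Builder versioned(boolean versioned) {
            this.versioned = versioned;
            return this;
        }

        // build() is the single exit point, so the object is created in a
        // directed, fully configured state.
        public BucketSpec build() { return new BucketSpec(this); }
    }
}
```

Because the builder is the only way to construct the object, required values (here, the name) are captured up front and optional ones read as a sentence at the call site.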
I could run `mvn compile` to verify that my Java code compiles, but it is not necessary. To view the output CloudFormation template, I ran:
cdk synth
Resources:
  MyFirstBucketB8884501:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      BucketName: cdk-unique-bucket-name
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
    Metadata:
      aws:cdk:path: NewCdkProjectStack/MyFirstBucket/Resource
  SCLauncher54301679:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: servicecatalog.amazonaws.com
        Version: "2012-10-17"
      Description: Role to provision project in ServiceCatalog, used by Service Catalog service
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AdministratorAccess
      RoleName: SCLauncher
    Metadata:
      aws:cdk:path: NewCdkProjectStack/SCLauncher/Resource
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Modules: aws-cdk=1.91.0,@aws-cdk/aws-events=1.91.0,@aws-cdk/aws-iam=1.91.0,@aws-cdk/aws-kms=1.91.0,@aws-cdk/aws-s3=1.91.0,@aws-cdk/cloud-assembly-schema=1.91.0,@aws-cdk/core=1.91.0,@aws-cdk/cx-api=1.91.0,@aws-cdk/region-info=1.91.0,jsii-runtime=Java/15.0.1
    Metadata:
      aws:cdk:path: NewCdkProjectStack/CDKMetadata/Default
    Condition: CDKMetadataAvailable
Conditions:
  CDKMetadataAvailable:
    Fn::Or:
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-northeast-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-northeast-2
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-south-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-southeast-1
          - Fn::Equals:
              - Ref: AWS::Region
              - ap-southeast-2
          - Fn::Equals:
              - Ref: AWS::Region
              - ca-central-1
          - Fn::Equals:
              - Ref: AWS::Region
              - cn-north-1
          - Fn::Equals:
              - Ref: AWS::Region
              - cn-northwest-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-central-1
      - Fn::Or:
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-north-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-1
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-2
          - Fn::Equals:
              - Ref: AWS::Region
              - eu-west-3
          - Fn::Equals:
              - Ref: AWS::Region
              - me-south-1
          - Fn::Equals:
              - Ref: AWS::Region
              - sa-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-east-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-east-2
          - Fn::Equals:
              - Ref: AWS::Region
              - us-west-1
          - Fn::Equals:
              - Ref: AWS::Region
              - us-west-2
If I am happy with the template, I then run:
cdk deploy
The deployment starts with a warning about AWS IAM resources that will be created:

Choosing “y” continues the deployment, and the change sets are created and applied.

The newly created stack can be seen in the AWS CloudFormation console:

If I want to change the stack, for example to add another policy to the SCLauncher role, I would modify the Java stack source file.
policies.add(ManagedPolicy
        .fromManagedPolicyArn(this, "power",
                "arn:aws:iam::aws:policy/PowerUserAccess"));
Then run the deployment command:
cdk deploy


The AWS CDK makes it easy to make the change in code and apply it, declaratively, as a CloudFormation change set.
Using the AWS CDK to Build Amazon EKS Clusters
Before the AWS CDK, I built all of my EKS clusters with the eksctl CLI. eksctl is very capable and makes it easy to manage cluster configurations; I still use it, and most likely will continue to for certain use cases. Building clusters with the AWS CDK allows me to plug into the Java toolchain and DevOps solutions for Java.
To get started, I need to bootstrap the account/region in which I will be building clusters. The AWS CDK code that I will be running uses CDK Assets.
cdk bootstrap aws://123456789012/us-east-1
 ⏳  Bootstrapping environment aws://123456789012/us-east-1...
CDKToolkit: creating CloudFormation changeset...
[██████████████████████████████████████████████████████████] (3/3)
If I try to run code that uses assets without the account/region being “bootstrapped”, an error is thrown, like so:
EksStack failed: Error: This stack uses assets, so the toolkit stack must be deployed to the environment (Run "cdk bootstrap aws://123456789012/us-east-1")
The assets are stored in an S3 bucket. Once the account/region has been bootstrapped, I can run my code to build my EKS cluster and its associated VPC networking, and populate the cluster with workloads. Below is my Java code for the EksStack.
package io.jimmyray.aws.cdk;

import io.jimmyray.aws.cdk.manifests.Yamls;
import io.jimmyray.utils.Config;
import io.jimmyray.utils.Strings;
import io.jimmyray.utils.WebRetriever;
import io.jimmyray.utils.YamlParser;
import org.jetbrains.annotations.NotNull;
import software.amazon.awscdk.core.Construct;
import software.amazon.awscdk.core.Stack;
import software.amazon.awscdk.core.StackProps;
import software.amazon.awscdk.services.ec2.*;
import software.amazon.awscdk.services.eks.*;
import software.amazon.awscdk.services.iam.*;
import software.amazon.awscdk.services.kms.IKey;
import software.amazon.awscdk.services.kms.Key;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class EksStack extends Stack {
    public EksStack(final Construct scope, final String id) {
        this(scope, id, null);
    }

    public EksStack(final Construct scope, final String id, final StackProps props) {
        super(scope, id, props);

        // Get properties object
        final Properties properties = Config.properties;

        /*
         * VPC Subnet Configs
         */
        List<SubnetConfiguration> subnets = new ArrayList<>();
        subnets.add(SubnetConfiguration.builder()
                .subnetType(SubnetType.PUBLIC)
                .name("public")
                .cidrMask(Strings.getPropertyInt("subnet.bits", properties, Constants.SUBNET_BITS.getIntValue()))
                .reserved(false)
                .build());
        subnets.add(SubnetConfiguration.builder()
                .subnetType(SubnetType.PRIVATE)
                .name("private")
                .cidrMask(Strings.getPropertyInt("subnet.bits", properties, Constants.SUBNET_BITS.getIntValue()))
                .reserved(false)
                .build());

        /*
         * VPC
         */
        IVpc vpc = Vpc.Builder.create(this, Strings.getPropertyString("vpc.id",
                        properties,
                        Constants.VPC_ID.getValue()))
                .cidr(Strings.getPropertyString("vpc.cidr",
                        properties,
                        Constants.VPC_CIDR.getValue()))
                .defaultInstanceTenancy(DefaultInstanceTenancy.DEFAULT)
                .enableDnsHostnames(true)
                .enableDnsSupport(true)
                .subnetConfiguration(subnets)
                .maxAzs(3)
                .natGateways(3)
                .natGatewayProvider(NatProvider.gateway())
                .natGatewaySubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
                .build();

        /*
         * EKS Cluster
         */
        IKey secretsKey = Key.fromKeyArn(this, "EksSecretsKey", Strings.getPropertyString("eks.secrets.key.arn",
                properties,
                Constants.EKS_SECRETS_KEY.getValue()));

        /*
         * Use existing master admin role
         */
        @NotNull IRole admin = Role.fromRoleArn(this, "admin", Strings.getPropertyString("iam.account.admin.role.arn",
                properties, ""));

        String eksId = Strings.getPropertyString("eks.id",
                properties,
                Constants.EKS_ID.getValue());

        Cluster cluster = Cluster.Builder.create(this, eksId)
                .clusterName(eksId)
                .defaultCapacity(Strings.getPropertyInt("eks.default.capacity", properties, Constants.EKS_DEFAULT_CAPACITY.getIntValue()))
                .endpointAccess(EndpointAccess.PUBLIC_AND_PRIVATE)
                .mastersRole(admin)
                .version(KubernetesVersion.V1_19)
                .secretsEncryptionKey(secretsKey)
                .vpc(vpc)
                .build();

        // Gather policies for node role
        List<IManagedPolicy> policies = new ArrayList<>();
        policies.add(ManagedPolicy.fromManagedPolicyArn(this, "node-policy",
                Strings.getPropertyString("iam.policy.arn.eks.node", properties, Constants.NOT_FOUND.getValue())));
        policies.add(ManagedPolicy.fromManagedPolicyArn(this, "cni-policy",
                Strings.getPropertyString("iam.policy.arn.eks.cni", properties, Constants.NOT_FOUND.getValue())));
        policies.add(ManagedPolicy.fromManagedPolicyArn(this, "registry-policy",
                Strings.getPropertyString("iam.policy.arn.ecr.read", properties, Constants.NOT_FOUND.getValue())));
        policies.add(ManagedPolicy.fromManagedPolicyArn(this, "autoscaler-policy",
                Strings.getPropertyString("iam.policy.arn.eks.node.autoscaler", properties, Constants.NOT_FOUND.getValue())));
        policies.add(ManagedPolicy.fromManagedPolicyArn(this, "ssm-policy",
                Strings.getPropertyString("iam.policy.arn.ssm.core", properties, Constants.NOT_FOUND.getValue())));
        policies.add(ManagedPolicy.fromManagedPolicyArn(this, "kms-policy",
                Strings.getPropertyString("iam.policy.arn.kms.ssm.use", properties, Constants.NOT_FOUND.getValue())));

        Role nodeRole = Role.Builder.create(this, "eks-nodes-role")
                .roleName("EksNodes")
                .managedPolicies(policies)
                .assumedBy(new ServicePrincipal(Strings.getPropertyString("ec2.service.name", properties, "")))
                .build();

        Nodegroup.Builder.create(this, "ng1")
                .cluster(cluster)
                //.releaseVersion(KubernetesVersion.V1_19.getVersion())
                .amiType(NodegroupAmiType.AL2_X86_64)
                .capacityType(CapacityType.ON_DEMAND)
                .desiredSize(3)
                .maxSize(5)
                .minSize(3)
                .diskSize(100)
                .remoteAccess(NodegroupRemoteAccess.builder().sshKeyName(Strings.getPropertyString("ssh.key.name",
                        properties, "")).build())
                .nodegroupName("ng1")
                .instanceTypes(List.of(new InstanceType(Strings.getPropertyString("eks.instance.type",
                        properties,
                        Constants.EKS_INSTANCE_TYPE.getValue()))))
                .subnets(SubnetSelection.builder().subnets(cluster.getVpc().getPrivateSubnets()).build())
                .nodeRole(nodeRole)
                .build();

        /*
         * Multiple k8s manifests, with dependencies, should be in the same KubernetesManifest object
         */
        KubernetesManifest.Builder.create(this, "read-only")
                .cluster(cluster)
                .manifest((List<? extends Map<String, ? extends Object>>) List.of(YamlParser.parse(Yamls.namespace),
                        YamlParser.parse(Yamls.deployment), YamlParser.parse(Yamls.service)))
                .overwrite(true)
                .build();

        /*
         * Parse multiple docs in same string
         */
        String yamlFile = null;

        /*
         * Try to get the YAML from GitHub
         */
        try {
            yamlFile = WebRetriever.getRaw(Strings.getPropertyString("ssm.agent.installer.url", properties, ""));
        } catch (IOException e) {
            e.printStackTrace();
        }

        if (yamlFile == null) yamlFile = Yamls.ssmAgent;

        if (null != yamlFile && !yamlFile.isBlank()) {
            Iterable<Object> manifestYamls = YamlParser.parseMulti(yamlFile);
            List<Map<String, ? extends Object>> manifestList = new ArrayList<>();

            for (Object doc : manifestYamls) {
                manifestList.add((Map<String, ? extends Object>) doc);
            }

            KubernetesManifest.Builder.create(this, "ssm-agent")
                    .cluster(cluster)
                    .manifest(manifestList)
                    .overwrite(true)
                    .build();
        }
    }
}
Running this stack creates the VPC and subnets, the EKS cluster with its associated IAM resources and security groups, and the read-only app resources in the cluster. Using the manifest method and a list of maps, I supplied the Kubernetes namespace, deployment, and service manifests that are later applied by the kubectl CLI running in AWS Lambda.
KubernetesManifest.Builder.create(this, "read-only")
        .cluster(cluster)
        .manifest((List<? extends Map<String, ? extends Object>>) List.of(YamlParser.parse(Yamls.namespace),
                YamlParser.parse(Yamls.deployment), YamlParser.parse(Yamls.service)))
        .overwrite(true)
        .build();
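The YamlParser helpers used in the stack are the author's own utilities. As a rough, dependency-free sketch of what splitting a multi-document YAML string involves (a real implementation would use a YAML library such as SnakeYAML and its loadAll method), one could split on the document separator:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, hypothetical stand-in for a multi-document YAML splitter.
// It only splits on "---" separator lines; it does not parse YAML itself.
class MultiDocSplitter {
    public static List<String> split(String yaml) {
        List<String> docs = new ArrayList<>();
        // "(?m)^---\s*$" matches a separator line in multiline mode.
        for (String part : yaml.split("(?m)^---\\s*$")) {
            String trimmed = part.strip();
            if (!trimmed.isEmpty()) {
                docs.add(trimmed);
            }
        }
        return docs;
    }
}
```

Each resulting document string would then be parsed into a map and handed to the KubernetesManifest builder, as the stack does with parseMulti.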
If I had an existing VPC into which I wanted to build the EKS stack, I would change the code to look up the existing VPC and reference it.
@NotNull IVpc vpc = Vpc.fromLookup(this, "vpcLookup", VpcLookupOptions.builder()
        .vpcName(<VPC_NAME>)
        .build());
...
Cluster cluster = Cluster.Builder.create(this, Constants.EKS_ID.getValue())
        .clusterName(Constants.EKS_ID.getValue())
        ...
        .vpc(vpc)
        .build();
...
Once the stacks are complete:

The output of the stacks is the AWS CLI commands to configure kube-config:
aws eks update-kubeconfig --name cdk-eks --region us-east-1 --role-arn arn:aws:iam::123456789012:role/EksClusterAdminRole
aws eks get-token --cluster-name cdk-eks --region us-east-1 --role-arn arn:aws:iam::123456789012:role/EksClusterAdminRole
I was able to provision an EKS cluster, provision a node group into private subnets, and create the namespace, deployment, and service resources in the cluster after provisioning. The service resource also created a load balancer in the AWS account, connected to the LoadBalancer service I provisioned.
k -n read-only get svc read-only -oyaml | yq d - metadata.managedFields
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"read-only","aws.cdk.eks/prune-c8157df28ab1a464bab539b75e7483fab124b22805":"","env":"dev","owner":"jimmy"},"name":"read-only","namespace":"read-only"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080}],"selector":{"app":"read-only"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-05-12T21:38:29Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: read-only
    aws.cdk.eks/prune-c8157df28ab1a464bab539b75e7483fab124b22805: ""
    env: dev
    owner: jimmy
  name: read-only
  namespace: read-only
  resourceVersion: "2349"
  selfLink: /api/v1/namespaces/read-only/services/read-only
  uid: ac2caea7-ac5a-4e4d-9547-ebcb7c098abc
spec:
  clusterIP: 172.20.101.178
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 32372
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: read-only
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - hostname: aac2caea7ac5a4e4d9547ebcb7c098ab-2030281970.us-east-2.elb.amazonaws.com
The AWS CloudFormation stacks created are listed below.

The utility of the AWS CDK is that it handles much of the heavy lifting of creating CloudFormation resources in the background. For example, the EksStack class applies the Kubernetes resources after the cluster is provisioned. This is made possible by the AWS CDK assets that were installed to S3 earlier; from these assets, AWS Lambda functions are created to handle the kubectl, helm, and other commands that might be employed.

Shifting Left with AWS CDK Synthesize
Being able to review and detect issues as far upstream (or left) as possible is a foundational characteristic of DevOps. Because the AWS CDK leverages common programming languages and, via the cdk synth command, outputs the target CloudFormation templates to be applied, continuous-integration (CI) pipelines can evaluate the output before it is applied to the target AWS account. This provides an evaluation and control point in the pipeline that can prevent unwanted changes from reaching AWS accounts.
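As a hedged sketch of such a control point (the class name and forbidden-pattern list below are invented for illustration), a CI step could scan the text emitted by cdk synth before deployment:

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy policy gate for a CI pipeline: flag synthesized template content the
// team has disallowed. Real pipelines would typically run purpose-built
// scanners such as cfn-nag or cfn-guard against the cdk synth output; this
// sketch only does substring checks.
class TemplateGate {
    private static final List<String> FORBIDDEN = List.of(
            "arn:aws:iam::aws:policy/AdministratorAccess", // overly broad IAM policy
            "BlockPublicAcls: false"                       // public S3 access
    );

    // Returns the forbidden patterns found in the template text.
    public static List<String> violations(String template) {
        return FORBIDDEN.stream()
                .filter(template::contains)
                .collect(Collectors.toList());
    }
}
```

A real gate would parse the template as YAML/JSON and inspect resource properties rather than matching substrings, but the control point is the same: fail the pipeline before cdk deploy runs.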
A New Builder Toolset for Builders
Builders using AWS have freedom in their choices of tools as well as services, but there has mostly been a tradeoff between tools that are more imperative in nature, like the AWS SDKs, and declarative tools like CloudFormation. The AWS CDK has emerged as a tool that bridges the gap between imperative and declarative approaches to cloud resource provisioning, while handling some of the configuration and heavy lifting in the background. The AWS CDK enables developers to build in the cloud using tools (like Java) with which they are already familiar. This openness to multiple languages improves adoption and lowers the barrier to entry for cloud developers, a.k.a. builders.
The example code for this post is available in this GitHub repo.