Recap
In Part 1 of this blog series we described how to create a custom VPC including security groups and subnets. It was the first step towards our goal of implementing a new architectural setup with AWS CDK using TypeScript for one of our clients. If you are just beginning to use AWS CDK and want to know how to get started, we recommend you start with Part 1. This blog post is part two of our six-part blog series:
- How to create a custom VPC
- How to create S3 buckets
- How to create an RDS instance
- How to create Lambdas
- How to create a step function
- CDK lessons learned
We will focus on creating the S3 Bucket from our target architecture and connecting it to our previously created VPC. Since that VPC only contains an isolated subnet at this point, we will show how to implement a gateway endpoint from our VPC to S3. From this point onward, we assume you have completed everything discussed in Part 1, that everything compiles and that you have successfully deployed the VPC to your AWS account.
The S3 Bucket
With a similar approach as we used when creating our VPC, let’s lay out the foundation for our S3 Bucket setup by adapting the file ./bin/sample_cdk.ts to include a new stack called S3Stack:
//sample_cdk.ts
import 'source-map-support/register';
import cdk = require('@aws-cdk/core');
import {VpcStack} from "../lib/vpc-stack";
import {S3Stack} from "../lib/s3-stack";
const app = new cdk.App();
const vpcStack = new VpcStack(app, 'VpcStack');
new S3Stack(app, 'S3Stack', {vpc: vpcStack.vpc});
app.synth();
and creating the file ./lib/s3-stack.ts containing the following code:
//s3-stack.ts
//imports for everything we will add to this stack in the course of this post
import {App, Duration, RemovalPolicy, Stack, StackProps} from "@aws-cdk/core";
import {GatewayVpcEndpointAwsService, Vpc} from '@aws-cdk/aws-ec2';
import {BlockPublicAccess, Bucket, BucketEncryption, StorageClass} from '@aws-cdk/aws-s3';
interface S3StackProps extends StackProps {
    vpc: Vpc;
}

export class S3Stack extends Stack {
    readonly sampleBucket: Bucket;

    constructor(scope: App, id: string, props: S3StackProps) {
        super(scope, id, props);
        //Place your resource definitions here:
    }
}
(To install the S3 package, run the command npm i @aws-cdk/aws-s3)
You might have already noticed the change in the constructor of the stack. To add the gateway endpoint from our custom VPC to the S3 Bucket, we need access to the VPC itself, which is why we pass it in via the S3StackProps interface. Alternatively, you could define the gateway endpoint inside the file vpc-stack.ts, which would allow you to keep the default constructor and leave out the interface S3StackProps.
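For reference, passing vpcStack.vpc around like this only works if the VpcStack from Part 1 exposes its VPC as a public field. A minimal sketch of what that looks like is shown below; the construct id 'SampleVpc' is just a placeholder, and your names from Part 1 may differ:
//vpc-stack.ts (excerpt, sketch only)
import {App, Stack, StackProps} from "@aws-cdk/core";
import {Vpc} from '@aws-cdk/aws-ec2';
export class VpcStack extends Stack {
    //exposing the VPC as a public readonly field lets other stacks, like our S3Stack, receive it via their props
    readonly vpc: Vpc;

    constructor(scope: App, id: string, props?: StackProps) {
        super(scope, id, props);
        this.vpc = new Vpc(this, 'SampleVpc', {
            //...subnet configuration as defined in Part 1
        });
    }
}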
It is time to create our first S3 Bucket. Insert the following code into the constructor of the class S3Stack inside the file ./lib/s3-stack.ts:
this.sampleBucket = new Bucket(this, 'sampleBucket', {
    versioned: false,
    bucketName: 'sample-bucket-cdk-tutorial',
    encryption: BucketEncryption.KMS_MANAGED,
    publicReadAccess: false,
    blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
    removalPolicy: RemovalPolicy.DESTROY
});
The Bucket constructor takes three parameters: the stack it should be added to (this), the id of the resource, and a collection of properties defined in the interface BucketProps. For our purposes it is essential to block all public access to the S3 Bucket. Furthermore, we turned versioning off, as the files being uploaded are immutable and will not change over time. As an extra security measure against unauthorized access, we decided to turn on Bucket encryption, since sensitive user data will be uploaded to the Bucket at some point.
Remark 1: There are quite a few more options you can set when coding your Bucket resource, such as options related to static website hosting inside an S3 Bucket, or lifecycle rules that define how Amazon S3 manages objects during their lifetime. We will come back to lifecycle rules later as they can help us reduce costs in the long run.
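As an illustration of the former (purely a sketch, not part of our architecture), a Bucket configured for static website hosting might look roughly like this:
//sketch only: a publicly readable Bucket serving a static website
const websiteBucket = new Bucket(this, 'websiteBucket', {
    websiteIndexDocument: 'index.html',
    websiteErrorDocument: 'error.html',
    //unlike our report Bucket, a website Bucket needs public read access
    publicReadAccess: true
});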
Remark 2: During development we found ourselves in repeated alter-code, check-deployment cycles, hence we added the removal policy DESTROY to the Bucket resource. In Part 1 of this blog series we recommended using an IAM user with admin rights for simplicity. If you decide to create a least-privileged IAM user after all, you will most likely run into a few failed deployments and possibly a situation where you need to delete the complete CloudFormation stack and start from scratch. By default, the Bucket is orphaned (retained) when its stack is deleted, which blocks any subsequent deployment under the same bucket name. The removal policy saves you the cumbersome manual work of deleting the Bucket every time you want to redeploy. Make sure, though, to never use this removal policy in production, or you will lose all data inside that Bucket.
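If you want to guard against accidentally shipping the DESTROY policy to production, one option is to derive the removal policy from a CDK context value. The following is only a sketch under our own assumptions; the 'stage' context key is hypothetical and not part of the setup from Part 1:
//sketch: only destroy the Bucket outside of production
//(the 'stage' context key is hypothetical, e.g. passed via 'cdk deploy -c stage=prod')
const stage = this.node.tryGetContext('stage');
const bucketRemovalPolicy = stage === 'prod' ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY;
You would then pass bucketRemovalPolicy to the removalPolicy property instead of the hard-coded value.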
Meanwhile, let us check whether our new setup actually compiles to an updated CloudFormation template. Run the following command:
npm run build && cdk synth
The console output should log that the stack was synthesized successfully. At this point you could deploy your new stack. Yet, we will code a few more things before actually starting a deployment to AWS.
S3 lifecycle rules
When coming up with a solution in AWS, the cost factor definitely needs to be taken into account. In our case, report files are uploaded to the S3 Bucket, handled by the step function and migrated into the RDS instance. After that, it is rare that a file is accessed again. Defining lifecycle rules for the objects stored inside our Bucket is a measure that helps us keep storage costs down. Extend the Bucket definition as follows:
this.sampleBucket = new Bucket(this, 'sampleBucket', {
    versioned: false,
    bucketName: 'sample-bucket-cdk-tutorial',
    encryption: BucketEncryption.KMS_MANAGED,
    publicReadAccess: false,
    blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
    removalPolicy: RemovalPolicy.DESTROY,
    lifecycleRules: [{
        expiration: Duration.days(365),
        transitions: [{
            storageClass: StorageClass.INFREQUENT_ACCESS,
            transitionAfter: Duration.days(30)
        }, {
            storageClass: StorageClass.GLACIER,
            transitionAfter: Duration.days(90)
        }]
    }]
});
Since we know our object access patterns precisely, we do not need intelligent tiering and can instead use static, day-based transition periods. 30 days after an object is uploaded, it is transitioned to Infrequent Access storage; after 90 days it transitions to S3 Glacier, where it remains until it expires and is finally deleted after 365 days.
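For comparison, if our access patterns were unpredictable, we could let S3 pick the tier itself by transitioning objects to intelligent tiering. This is only a sketch and not part of our setup:
//sketch only: an alternative Bucket that hands tiering decisions over to S3
const intelligentBucket = new Bucket(this, 'intelligentBucket', {
    lifecycleRules: [{
        transitions: [{
            storageClass: StorageClass.INTELLIGENT_TIERING,
            transitionAfter: Duration.days(0)
        }]
    }]
});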
VPC gateway endpoint
When we previously defined our Bucket, we blocked all public access to it. So how can we connect to the Bucket from our VPC? We achieve this by adding a gateway endpoint from our VPC to the S3 service. This endpoint allows all resources inside our VPC's isolated subnet to access S3 via the AWS backbone network. By not exposing the Bucket to the public internet, we achieve higher data security without limiting access for the resources inside our architecture.
We create the gateway endpoint by calling addGatewayEndpoint on the vpc field inside the S3StackProps. Add the following code to the S3Stack constructor, where _isolatedSubnetName1 is a field holding the name of the isolated subnet we created in Part 1:
props.vpc.addGatewayEndpoint('s3-trigger-gateway', {
    service: GatewayVpcEndpointAwsService.S3,
    subnets: [{
        //_isolatedSubnetName1 holds the name of the isolated subnet created in Part 1
        subnetName: this._isolatedSubnetName1
    }]
});
Final build & deploy
We are all set up and ready to deploy our new CDK stack to the AWS cloud. In Part 1 we already set up our credentials, so this time we can build and deploy by simply running the following commands:
npm run build && cdk synth
After successfully synthesizing the CloudFormation template, you can comfortably check what will change by running the command:
cdk diff --profile sample
Finally, we deploy the changes to the AWS cloud by running the command:
cdk deploy --profile sample
Upon signing in to the AWS CloudFormation console, you should see the stack being created.
We are now a step closer to our target architecture. We have discussed how to create a custom VPC, and in this particular blog post we have shown you how to set up an S3 Bucket and create a gateway endpoint into the VPC in order to avoid public internet traffic in and out of our VPC. The next steps are setting up our RDS instance inside the VPC's isolated subnet and implementing the Lambda functions that the step function will execute in the end. In Part 3 of this series we will elaborate on how to set up an RDS instance with AWS CDK.