
Tuesday 7 July 2020

Amazon ECS : My first AWS container - Part 1

I had done a post some time back on Container basics and AWS. Here I will explore ECS - another container offering from AWS.
In the pre-container days, when virtual machines were the craze, AWS provided EC2 - virtual hosts that customers could scale and use without worrying about the underlying physical hardware. To launch an application, customers had to build the deployment environment themselves. For example, to launch a Java application:

  1. Select the Amazon Machine Image (the OS to install on the EC2 instance)
  2. Launch the EC2 instance.
  3. Install the necessary software (e.g. if the default Java version is not what we need, install the correct one; install Tomcat for a web application)
  4. Copy your code (the jar or war file) to the correct location
  5. Run the application (java -jar, or launch Tomcat)
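On an Amazon Linux instance those steps translate to something like the following (a rough sketch - the package name and myapp.jar are illustrative):

> sudo yum install -y java-1.8.0-openjdk
> java -jar myapp.jar
For a web application you would instead install Tomcat and drop the war file into its webapps directory.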
This is not bad. In fact, it is similar to the steps you would follow to launch any server instance. Once containers arrived, people began to run containers on their EC2 instances. This allowed them to leverage container benefits on AWS. Users would now set up an EC2 instance and then install Docker on it.
They would then deploy Docker containers and manage them. (Check this link on running Docker on EC2)
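On Amazon Linux 2, that Docker setup boils down to a handful of commands (a sketch - exact packages vary by distribution):

> sudo yum update -y
> sudo amazon-linux-extras install -y docker
> sudo service docker start
> sudo usermod -a -G docker ec2-user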
Companies could now have hundreds of EC2 instances, each running hundreds of containers if they needed to.
Though users got container benefits in AWS, this came with the added responsibility of managing both the containers and the EC2 instances.
This is where Amazon ECS comes in. [AWS Docs]
Amazon Elastic Container Service (Amazon ECS) is a fully managed container 
orchestration service. You can choose to run your ECS clusters using AWS 
Fargate, which is serverless compute for containers. Fargate removes the need 
to provision and manage servers, lets you specify and pay for resources per 
application, and improves security through application isolation by design. 
Thus, with a Fargate and ECS combination, we can be freed from managing EC2 instances and containers, allowing us to focus more on application development.
In this post I will create a simple container application - one that reads from an SQS queue and inserts records into a DynamoDB table.
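The code assumes the SQS queue and the DynamoDB table already exist. If you are following along, they can be created with the AWS CLI roughly as below (the names match what the code expects):

> aws sqs create-queue --queue-name CollectionQueue --region us-east-1
> aws dynamodb create-table --table-name Messages --attribute-definitions AttributeName=messageId,AttributeType=S --key-schema AttributeName=messageId,KeyType=HASH --billing-mode PAY_PER_REQUEST --region us-east-1
With the queue and table in place, here is the consumer class: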
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.PutItemOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageResult;

import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

// Polls an SQS queue on a fixed schedule and writes each message into a DynamoDB table
public class SqsConsumer {

    private static AmazonSQS sqsClient;
    private static DynamoDB dbClient;
    private static final String queueUrl = "https://sqs.us-east-1.amazonaws.com/XXXX/CollectionQueue";

    public static void initialize() {
        System.out.println("Initializing code...");
        AWSCredentials awsCredentials = new BasicAWSCredentials("AccessKey",
                "SecretKey");
        AWSCredentialsProvider awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials);

        dbClient = new DynamoDB(AmazonDynamoDBClientBuilder.standard()
                .withRegion("us-east-1")
                .withCredentials(awsCredentialsProvider).build());

        sqsClient = AmazonSQSClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .withCredentials(awsCredentialsProvider).build();
        System.out.println("Initializing complete");

    }


    public static void main(String[] args) {
        initialize();
        Timer timer = new Timer();
        final Table table = dbClient.getTable("Messages");

        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                System.out.println("Running a check ...");
                ReceiveMessageResult receiveMessageResult = sqsClient.receiveMessage(queueUrl);
                List<Message> messages = receiveMessageResult.getMessages();
                if (messages.isEmpty()) {
                    System.out.println("No messages found in queue");
                } else {
                    System.out.println(messages.size() + " Messages found in Queue");
                    for (Message message : messages) {
                        Item item = new Item()
                                .withPrimaryKey("messageId", message.getMessageId())
                                .withString("message", message.getBody());
                        PutItemOutcome putItemOutcome = table.putItem(item);
                        System.out.println(putItemOutcome);
                        sqsClient.deleteMessage(queueUrl, message.getReceiptHandle());
                    }
                }
            }
        }, 3/*milliseconds*/, 100/*milliseconds*/);

    }
}
The class uses a TimerTask which executes a code block every 100 milliseconds. The code fetches SQS messages and then adds them to a DynamoDB table.
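A side note (my own variation, not part of the code above): polling every 100 milliseconds means a lot of empty receiveMessage calls. The SDK also supports SQS long polling through ReceiveMessageRequest, roughly:

// wait up to 20 seconds for messages instead of returning immediately,
// and fetch up to 10 messages per call
ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
        .withMaxNumberOfMessages(10)
        .withWaitTimeSeconds(20);
List<Message> messages = sqsClient.receiveMessage(request).getMessages();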
I used maven-shade-plugin to create a single uber jar.
 <plugins>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.2</version>
        <configuration>
            <createDependencyReducedPom>false</createDependencyReducedPom>
        </configuration>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>shade</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
I also set the jar name to be 'sqsconsumer'
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <archive>
            <manifest>
                <addClasspath>true</addClasspath>
                <mainClass>org.learningviacode.SqsConsumer</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>
</plugins>
<finalName>sqsconsumer</finalName>
The jar creation plugin (maven-jar-plugin) is also instructed to add the main class to the manifest file; this ensures my uber jar is executable.
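For reference, the jar itself comes from a standard Maven build:

> mvn clean package
This produces target/sqsconsumer.jar (the finalName configured above). With an executable jar in hand, the next step is to create a Docker image for the container.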
For this I added a Dockerfile in the project root with below details:
FROM openjdk:8
ADD target/sqsconsumer.jar sqsconsumer.jar
ENTRYPOINT ["java", "-jar", "sqsconsumer.jar"]
The file instructions are:

  1. Use the openjdk base image with tag 8 (i.e. JDK 1.8)
  2. Copy the jar into the image
  3. Run the command "java -jar sqsconsumer.jar" when the container starts
Before moving to ECS, I tested this image by running it locally and verifying that it works.
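The local run was along these lines (sqs-consumer is just the local tag I chose for the image):

> docker build -t sqs-consumer .
> docker run sqs-consumer
The logs show "Initializing code..." followed by the periodic queue checks. With that working, I am ready to run this container in ECS.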

The ECS setup page took me to "Getting Started with Amazon Elastic Container Service (Amazon ECS) using Fargate" - I guess the EC2 option is not encouraged anymore.
The setup process has the below steps:

The first step is selecting a container image (the Container Definition). AWS provides some default container images, but in this case I wanted to use my custom image.
To use the generated image, I need to upload it to a Container Registry.
The Registry is a stateless, highly scalable server side application that stores and lets you distribute Docker images. 
Docker built an open source application that can be used to store container images. They also run a hosted deployment of it - Docker Hub. However, I do not want to upload my container image there, primarily because this very basic code includes my account credentials and I do not want to expose them.
I also do not want to set up my own container registry just for this post. So how do I get secure, private storage for my container images?
AWS comes to the rescue here. They have set up a container registry service called ECR.
Amazon Elastic Container Registry (Amazon ECR) is a managed AWS Docker registry
service that is secure, scalable, and reliable. Amazon ECR supports private Docker 
repositories with resource-based permissions using AWS IAM so that specific users 
or Amazon EC2 instances can access repositories and images. Developers can use the
Docker CLI to push, pull, and manage images.
Great - so my next step is to upload my container image to ECR.
I will do this using the AWS CLI.

> docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
sqs-consumer             latest              cc34b3c2741b        16 hours ago        519MB
docker-spring-boot       latest              904768e87c3c        18 hours ago        527MB
getting-started          latest              20b9e72c44e8        45 hours ago        179MB
node                     12-alpine           057fa4cc38c2        6 days ago          89.3MB
docker/getting-started   latest              73f5385a001d        7 days ago          25.1MB
openjdk                  8                   b190ad78b520        3 weeks ago         510MB

> aws ecr create-repository --repository-name sqs-consumer --region us-east-1
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-1:123456789012:repository/sqs-consumer",
        "registryId": "123456789012",
        "repositoryName": "sqs-consumer",
        "repositoryUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/sqs-consumer",
        "createdAt": "2020-07-06T15:33:31-07:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        }
    }
}

>  aws ecr describe-repositories --region us-east-1
{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:us-east-1:123456789012:repository/samples/java",
            "registryId": "123456789012",
            "repositoryName": "samples/java",
            "repositoryUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/samples/java",
            "createdAt": "2020-07-06T00:10:46-07:00",
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": true
            }
        },
        {
            "repositoryArn": "arn:aws:ecr:us-east-1:123456789012:repository/sqs-consumer",
            "registryId": "123456789012",
            "repositoryName": "sqs-consumer",
            "repositoryUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/sqs-consumer",
            "createdAt": "2020-07-06T15:33:31-07:00",
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            }
        }
    ]
}

> aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
Login Succeeded

> docker tag sqs-consumer:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/sqs-consumer:latest

> docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/sqs-consumer:latest
The push refers to repository [123456789012.dkr.ecr.us-east-1.amazonaws.com/sqs-consumer]
cf667b61818e: Pushed
cb8e2372b23c: Pushed
68bb2d422178: Pushed
f5181c7ef902: Pushed
2e5b4ca91984: Pushed
527ade4639e0: Pushed
c2c789d2d3c5: Pushed
8803ef42039d: Pushed
latest: digest: sha256:0263548c54967d070dfc48001e4ed1e4bb2030ea376ddd0c6d6c7b45001322ce size: 2005
There are quite a few commands here:

  1. The first command lists the images in my local Docker installation. We can see sqs-consumer in the list.
  2. The next command creates a repository named sqs-consumer in ECR, in my selected region. (I had executed aws configure before this command.)
  3. The 'aws ecr describe-repositories' command lists the repositories in ECR. The result shows the one I just created in the previous step.
  4. The next command, 'aws ecr get-login-password', is interesting. Amazon ECR provides its own API to push and pull images, while also supporting the Docker CLI for pushing and pulling images to and from repositories. However, the Docker CLI does not support native IAM authentication. This command retrieves a token that is piped to docker login, which allows ECR to authenticate and authorize Docker push and pull requests.
  5. After this we execute the docker tag command, tagging the local image with the ECR repository URI and the latest tag.
  6. The final step is to push the image to the registry.
Post execution, I can see the image in the ECR console.
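The image can also be confirmed from the CLI, for example:

> aws ecr list-images --repository-name sqs-consumer --region us-east-1
{
    "imageIds": [
        {
            "imageDigest": "sha256:0263548c54967d070dfc48001e4ed1e4bb2030ea376ddd0c6d6c7b45001322ce",
            "imageTag": "latest"
        }
    ]
}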
In the next post, I will continue with setting up my ECS solution.
