Deploying a New Container Using AWS App2Container

In our last article, we went through the containerization of a running application. The last step of this process is to deploy the container. The default approach is to deploy a container image to ECR and then create the CloudFormation templates to run that image in Amazon ECS using Fargate. If you would prefer to deploy to Amazon EKS instead, you will need to go to the deployment.json file in the output directory. This editable file contains the default settings for the application, ECR, ECS, and EKS. We will walk through each of the major areas in turn.

The first section is responsible for defining the application and is shown below.

"a2CTemplateVersion": "1.0",
"applicationId": "iis-tradeyourtools-6bc0a317",
"imageName": "iis-tradeyourtools-6bc0a317",
"exposedPorts": [
       {
              "localPort": 80,
              "protocol": "http"
       }
],
"environment": [],

The applicationId and the imageName are values we have seen before when going through App2Container. The exposedPorts value should contain all of the IIS ports configured for the application. The application in our example was not configured for HTTPS, but if it were, there would be another entry for that port. The environment value allows you to enter, as key/value pairs, any environment variables the application may use. Unfortunately, App2Container is not able to determine those because it does its analysis on running code rather than the code base. In our example, no environment variables are necessary.

Note: If you aren't sure whether there are environment variables that your application may access, you can see which variables are available by going into System -> Advanced system settings -> Environment Variables. This will provide a list of the available variables, which you can then evaluate for relevance to your application.
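
You can also list the same variables from PowerShell. Keep in mind that user-level variables for your login may not be visible to the IIS application pool identity that actually runs the site.

PS C:\> Get-ChildItem Env: | Sort-Object Name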

The next section is quite small and contains the ECR configuration. The ECR repository that will be created is named with the imageName from above and then versioned with the value in the ecrRepoTag as shown below.

"ecrParameters": {
       "ecrRepoTag": "latest"
},

We are using the value latest as our version tag.

There are two remaining sections in the deployment.json file: the first contains the ECS setup information and the second the EKS setup information. We will look at the ECS section first. The entire section is listed below.

"ecsParameters": {
       "createEcsArtifacts": true,
       "ecsFamily": "iis-tradeyourtools-6bc0a317",
       "cpu": 2,
       "memory": 4096,
       "dockerSecurityOption": "",
       "enableCloudwatchLogging": false,
       "publicApp": true,
       "stackName": "a2c-iis-tradeyourtools-6bc0a317-ECS",
       "resourceTags": [
              {
                     "key": "example-key",
                     "value": "example-value"
              }
       ],
       "reuseResources": {
              "vpcId": "vpc-f4e4d48c",
              "reuseExistingA2cStack": {
                     "cfnStackName": "",
                     "microserviceUrlPath": ""
              },
              "sshKeyPairName": "",
              "acmCertificateArn": ""
       },
       "gMSAParameters": {
              "domainSecretsArn": "",
              "domainDNSName": "",
              "domainNetBIOSName": "",
              "createGMSA": false,
              "gMSAName": ""
       },
       "deployTarget": "FARGATE",
       "dependentApps": []
},

The most important value here is createEcsArtifacts; when it is set to true, deploying with App2Container will deploy the image into ECS. The next values to look at are cpu and memory, which are only used for Linux containers; in our case they do not matter because this is a Windows container. The following two values, dockerSecurityOption and enableCloudwatchLogging, are only changed in special cases, so they will generally stay at their default values.

The next value, publicApp, determines whether the application will be configured in a public subnet with a public endpoint. It is set to true because that is the behavior we want. The value after that, stackName, defines the name of the CloudFormation stack, while resourceTags defines the custom tags that will be added to the ECS task definition. The file contains a placeholder key/value pair; any entry whose key is example-key is ignored, so only tags with other key names will be applied.

The next section, reuseResources, is where you configure any pre-existing resources you wish to use, namely a VPC, whose ID goes into the vpcId value. When it is left blank, as shown below, App2Container will create a new VPC.

"reuseResources": {
     "vpcId": "",
     "reuseExistingA2cStack": {
            "cfnStackName": "",
            "microserviceUrlPath": ""
     },
     "sshKeyPairName": "",
     "acmCertificateArn": ""
}

Running the deployment with these settings will result in a brand new VPC being created. This means that, by default, you would not be able to connect in or out of that VPC without making changes to it. If, however, you have an existing VPC that you want to use, update the vpcId key with the ID of the appropriate VPC.

Note: App2Container requires that the supplied VPC has a route table that is associated with at least two subnets and an internet gateway. The CloudFormation template for the ECS service requires this so that there is a route from your service to the internet from at least two different Availability Zones for availability. Currently, there is no way for you to define these subnets. If your VPC is not set up this way, you will receive a "Resource creation failures: PublicLoadBalancer: At least two subnets in two different Availability Zones must be specified" message.
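
If you are unsure whether the VPC you plan to reuse meets these requirements, a quick check with the AWS CLI before deploying can save a failed stack. This is only a sketch, using the vpcId from our example; substitute your own, then confirm that the subnets span at least two Availability Zones and that an internet gateway is attached.

PS C:\App2Container> aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-f4e4d48c" --query "Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone}" --output table

PS C:\App2Container> aws ec2 describe-internet-gateways --filters "Name=attachment.vpc-id,Values=vpc-f4e4d48c" --output table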

You can also choose to reuse an existing stack created by App2Container. Doing this will ensure that the application is deployed into the already existing VPC and that the URL for the new application is added to the already created Application Load Balancer rather than being added to a new ALB.

The next value, sshKeyPairName, is the name of the EC2 key pair used for the instances on which your container runs. Using this rather defeats the point of using containers, so we left it blank as well. The last value, acmCertificateArn, is for the AWS Certificate Manager ARN that you want to use if you are enabling HTTPS on the created ALB. This parameter is required if you use an HTTPS endpoint for your ALB and, remember, as we went over earlier, this means the request forwarded into the application will still be on port 80 and unencrypted, because the encryption is handled at the ALB.

The next set of configuration values is part of the gMSAParameters section. This becomes important to manage if your application relies upon a group Managed Service Account (gMSA) in Active Directory. This can only be used if deploying to EC2 and not Fargate (more on this later). These individual values are:

·         domainSecretsArn – The AWS Secrets Manager ARN containing the domain credentials required to join the ECS nodes to Active Directory.

·         domainDNSName – The DNS Name of the Active Directory the ECS nodes will join.

·         domainNetBIOSName – The NetBIOS name of the Active Directory to join.

·         createGMSA – A flag determining whether to create the gMSA Active Directory security group and account using the name supplied in the gMSAName field.

·         gMSAName – The name of the Active Directory account the container should use for access.

There are two fields remaining, deployTarget and dependentApps. For deployTarget there are two valid values for .NET applications running on Windows: fargate and ec2. You can only deploy to Fargate if your container image is based on Windows Server 2019 or more recent, which in turn is only possible if your worker machine, the one you used for containerizing, was running Windows Server 2019 or later. Also, you cannot deploy to Fargate if you are using gMSA.
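
If you are not sure which Windows Server version your worker machine is running, a quick PowerShell check will tell you before you pick a deploy target.

PS C:\App2Container> Get-ComputerInfo -Property OsName, OsVersion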

The value dependentApps is interesting, as it handles those applications that AWS defines as "complex Windows applications". We won't go into it in more detail here, but you can go to https://docs.aws.amazon.com/app2container/latest/UserGuide/summary-complex-win-apps.html if you are interested in learning more about these types of applications.

The next section in the deployment.json file is eksParameters. You will see that most of these parameters are the same as those we went over for ECS. The only differences are the createEksArtifacts parameter, which needs to be set to true if deploying to EKS, and, in the gMSA section, the gMSAName parameter, which has inexplicably been renamed gMSAAccountName.

Once you have the deployment file set as desired, you next deploy the container:

PS C:\App2Container> app2container generate app-deployment --application-id APPID --deploy
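
If you would rather review the generated CloudFormation templates before anything is created in your account, you should be able to run the same command without the --deploy flag; it generates the deployment artifacts in your workspace directory, and you can re-run it with --deploy, or deploy the templates through CloudFormation yourself, when you are ready.

PS C:\App2Container> app2container generate app-deployment --application-id APPID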

The deployment takes several minutes, and you should get output like that shown in Figure 1. The gold arrow points to the URL where you can see your deployed application; go ahead and visit it to confirm that it has been successfully deployed and is running.

Figure 1. Output from generating an application deployment in App2Container

Logging in to the AWS console and going to Amazon ECR will show you the ECR repository that was created to store your image as shown in Figure 2.

Figure 2. Verifying the new container image is available in ECR

Once everything has been deployed and verified, you can poke around in ECS to see how it is all put together. Remember though, if you are looking to make modifications it is highly recommended that you use the CloudFormation templates, make the changes there, and then re-upload them as a new version. That way you will be able to easily redeploy as needed and not worry about losing any changes that you may have added. You can either alter the templates in the CloudFormation section of the console or you can find the templates in your App2Container working directory, update those, and then use those to update the stack.
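
As a sketch of that workflow: once you have edited one of the generated templates, you can push the change with the AWS CLI. The template path below is a placeholder for wherever the edited file lives in your working directory, the stack name comes from deployment.json, and depending on the template you may also need to supply an S3 bucket for packaging.

PS C:\App2Container> aws cloudformation deploy `
       --template-file .\path\to\edited-template.yml `
       --stack-name a2c-iis-tradeyourtools-6bc0a317-ECS `
       --capabilities CAPABILITY_NAMED_IAM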

Containerizing a Running Application with AWS App2Container

Now that we have gone through containerizing an already existing application where you have access to the source code, let's look at containerizing a .NET application in a different way. This approach is for applications that are already running where you may not have access to the source code, you may not control the deployment, or for some other reason you do not want to change the source code as we did earlier. Instead, you want to containerize the application by just "picking it up off its server" and moving it into a container. Until recently, that was not a simple thing to do. However, AWS created a tool to help you do just that. Let's look at that now.

What is AWS App2Container?

AWS App2Container is a command-line tool that is designed to help migrate .NET web applications into a container format. You can learn more about and download this tool at https://aws.amazon.com/app2container/.  It also does Java, but hey, we’re all about .NET, so we won’t talk about that anymore! You can see the process in Figure 1, but at a high level, there are five major steps.

Figure 1. How AWS App2Container works

These steps are:

1.      Inventory – This step inventories the server, looking for running applications that App2Container can work with. At the time of writing, App2Container supports ASP.NET 3.5 and later applications running in IIS 7.5+ on Windows.

2.      Analyze – A chosen application is analyzed in detail to identify dependencies including known cooperating processes and network port dependencies. You can also manually add any dependencies that App2Container was unable to find.

3.      Containerize – In this step, all the application artifacts discovered during the “Analyze” phase are “dockerized.”

4.      Create – This step creates the various deployment artifacts (generally as CloudFormation templates) such as ECS task or Kubernetes pod definitions.

5.      Deploy – Store the image in Amazon ECR and deploy to ECS or EKS as desired.

There are three different modes in which you can use App2Container. The first is a mode where you perform the steps on two different machines. If using this approach, App2Container must be installed on both machines. The first machine, the Server, is the machine on which the application(s) that you want to containerize is running. You will run the first two steps on the server. The second machine, the Worker, is the machine that will perform the final three steps of the process based on artifacts that you copy from the server. The second mode is when you perform all the steps on the same machine, so it basically fills both the server and worker roles. The third mode is when you run all the commands on your worker machine, connecting to the server machine using the Windows Remote Management (WinRM) protocol. This approach has the benefit of not having to install App2Container on the server, but it also means that you must have WinRM installed and running. We will not be demonstrating this mode.

App2Container is a command-line tool that has some prerequisites that must be installed before the tool will run. These prerequisites are listed below.

·         AWS CLI – must be installed on both server and worker

·         PowerShell 5.0+ – must be installed on both server and worker

·         Administrator rights – You must be running as a Windows administrator

·         Appropriate permissions – You must have AWS credentials stored on the worker machine as was discussed in the earlier articles when installing the AWS CLI.

·         Docker tools – Docker version 17.07 or later must be installed on worker

·         Windows Server OS – Your worker system must run on a Windows OS version that supports containers, namely Windows Server 2016 or 2019. If working in server/worker mode, the server system must be running Windows Server 2008 or later.

·         Free Space – 20-30 GB of free space should be available on both server and worker
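
A quick way to confirm several of these prerequisites is from a PowerShell prompt on each machine; each command below simply prints the installed version so you can compare it against the requirements above.

PS C:\> aws --version

PS C:\> $PSVersionTable.PSVersion

PS C:\> docker --version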

The currently supported types of applications are:

·         Simple ASP.NET applications running on a single server

·         A Windows service running on a single server

·         Complex ASP.NET applications that depend on WCF, running on a single server or multiple servers

·         Complex ASP.NET applications that depend on Windows services or processes outside of IIS, running on a single server or multiple servers

·         Complex, multi-node IIS or Windows service applications, running on a single server or multiple servers

There are also two types of applications that are not supported:

·         ASP.NET applications that use files and registries outside of IIS web application directories

·         ASP.NET applications that depend on features of a Windows operating system version prior to Windows Server Core 2016

Now that we have described App2Container as well as the .NET applications on which it will and will not work, the next step is to show how to use the tool.

Using AWS App2Container to Containerize an Application

We will first describe the application that we are going to containerize. We have installed a .NET Framework 4.7.2 application onto a Windows EC2 instance that supports containers; the AMI we used is shown in Figure 2. Please note that since EC2 regularly revises its AMIs, you may see a different Id.

Figure 2. AMI used to host the website to containerize

The application is connected to an RDS SQL Server instance for database access using Entity Framework, and the connection string is stored in the web.config file.

The next step, now that we have a running application, is to download the AWS App2Container tool. You can access the tool by going to https://aws.amazon.com/app2container/ and clicking the Download AWS App2Container button at the top of the page. This will bring you to the Install App2Container page in the documentation, which has a link to download a zip file containing the App2Container installation package. Download the file and extract it to a folder on the server. If you are working in server/worker mode, then download and extract the file on both machines. After you unzip the downloaded file, you should have five files, one of which is another zipped file.

Open PowerShell and navigate to the folder containing App2Container. You must then run the install script.

PS C:\App2Container> .\install.ps1

You will see the script running through several checks and then present some terms and conditions text that will require you to respond with a y to continue. You will then be able to see the tool complete its installation.

The next step is to initialize and configure App2Container. If using server/worker mode, then you will need to do this on each machine. You start the initializing with the following command.

PS C:\App2Container> app2container init

It will then prompt you for a Workspace directory path for artifacts value. This is where the files from the analysis and any containerization will be stored. Press Enter to accept the default value or enter a new directory. It will then ask for an Optional AWS Profile. You can press Enter if you have a default profile set up, or enter the name of a different profile to use.

Note: It is likely that a server running the application you want to containerize does not have an appropriate profile available. If not, you can set one up by running the aws configure command to create the CLI credentials that App2Container will use to create and upload the container.
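
Running aws configure walks you through four prompts: the access key, the secret key, the default region, and the output format. The values shown below are placeholders; use your own credentials and region.

PS C:\App2Container> aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-west-2
Default output format [None]: json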

Next, the initialization will ask you for an Optional S3 bucket for application artifacts. Providing a value in this step will result in the tool output also being copied to the provided bucket. Although you can press Enter to accept the default of "no bucket", at the time of this writing you must provide a bucket so that it can act as storage for moving the container image into ECR. We used an S3 bucket called "prodotnetonaws-app2container". The next initialization step asks whether you wish to Report usage metrics to AWS? (Y/N). No personal or confidential information is gathered, so we recommend that you press Enter to accept the default of "Y". The following prompt asks if you want to Automatically upload logs and App2Container generated artifacts on crashes and internal errors? (Y/N). We want AWS to know as soon as possible if something goes wrong, so we selected "y". The last initialization prompt asks whether to Require images to be signed using Docker Content Trust (DCT)? (Y/N). We selected the default value, "n". The initialization will then display the path in which the artifacts will be created and stored. Figure 3 shows our installation when completed.

Figure 3. Output from running the App2Container initialization

For those of you using the server/worker mode approach, take note of the application artifact directory displayed in the last line of the command output, as this will contain the artifacts that you will need to move to the worker machine. Now that App2Container is initialized, the next step is to take an inventory of the eligible applications running on the server. You do this by issuing the following command:

PS C:\App2Container> app2container inventory

The output from this command is a JSON object collection that has one entry for each application. The output on our EC2 server is shown below:

{
      "iis-demo-site-a7b69c34": {
            "siteName": "Demo Site",
            "bindings": "http/*:8080:",
            "applicationType": "IIS"
      },
      "iis-tradeyourtools-6bc0a317": {
            "siteName": "TradeYourTools",
            "bindings": "http/*:80:",
            "applicationType": "IIS"
      }
}

As you can see, there are two applications on our server, the “Trade Your Tools” app we described earlier as well as another website “Demo Site” that is running under IIS and is bound to port 8080. The initial key is the application ID that you will need moving forward.

Note: You can only containerize one application at a time. If you wish to containerize multiple applications from the same server you will need to repeat the following steps for each one of those applications.

The next step is to analyze the specific application that you are going to containerize. You do that with the following command, replacing the application ID (APPID) in the command with your own.

PS C:\App2Container> app2container analyze --application-id APPID

You will see a lot of flashing that shows the progress output as the tool analyzes the application, and when it is complete you will get output like that shown in Figure 4.

Figure 4. Output from running the App2Container analyze command

The primary output from this analysis is the analysis.json file that is listed in the command output. Locating and opening that file will allow you to see the information that the tool gathered about the application, much of which is a capture of the IIS configuration for the site running your application. We won't show the contents of the file here as it is several hundred lines long; however, much of its content can be edited as you see necessary.
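
If you want a quick way to skim the file, you can load it into PowerShell and browse it as an object rather than scrolling through raw JSON. The path below is an example; use the path that the analyze command printed for your application.

PS C:\App2Container> Get-Content .\iis-tradeyourtools-6bc0a317\analysis.json -Raw | ConvertFrom-Json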

The next steps branch depending upon whether you are using a single server or using the server/worker mode.

When containerizing on a single server

Once you are done reviewing the artifacts created from the analysis, the next step is to containerize the application. You do this with the following command:

PS C:\App2Container> app2container containerize --application-id APPID

The processing in this step may take some time to run, especially if, like us, you used a free-tier low-powered machine! Once completed, you will see output like Figure 5.

Figure 5. Output from containerizing an application in App2Container

At this point, you are ready to deploy your container and can skip to the next article, “Deploying…”, if you don’t care about containerizing using server/worker mode.

When containerizing using server/worker mode

Once you are done reviewing the artifacts created from the analysis, the next step is to extract the application. This creates the archive that will need to be moved to the worker machine for containerizing. The tool will also upload the archive to the S3 bucket provided during initialization; if you did not provide a bucket, you must copy the file manually. The command to extract the application is:

PS C:\App2Container> app2container extract --application-id APPID

This command will process, and you should get a simple “Extraction successful” message.

Returning to the artifact directory that was displayed when initializing App2Container, you will see a new zip file named with your Application ID. Copy this file to the worker server.
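
One way to move the archive, assuming the worker's administrative share is reachable with your credentials, is a simple Copy-Item; the machine name, path, and file name below are placeholders, and any copy mechanism you prefer, such as staging the file through an S3 bucket, works just as well.

PS C:\App2Container> Copy-Item .\iis-tradeyourtools-6bc0a317.zip -Destination \\WORKER-MACHINE\C$\App2Container\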

Once you are on the worker server and App2Container has been initialized, the next step is to containerize the content from the archive. You do that with the following command:

PS C:\App2Container> app2container containerize --input-archive PathToZip

The output from this step matches the output from running the containerization on a single server and can be seen in Figure 5 above.

The next article will show how to deploy this containerized application into AWS.

Containerizing a .NET Core-based Application for AWS

In our last post in this series, we talked about Containerizing a .NET 4.x Application for deployment onto AWS, and as you may have seen it was a somewhat convoluted affair. Containerizing a .NET Core-based application is much easier, because many of the hoops that you must jump through to manage a Windows container are not necessary. Instead, all AWS products, as well as the IDEs, support this out of the gate.

Using Visual Studio

We have already gone through adding container support using Visual Studio, and doing it now with a .NET Core-based application does not change that part of the process at all. What does change, however, is the ease of getting the newly containerized application into AWS. Once the Dockerfile has been added, the "Publish to AWS" options shown when right-clicking on the project name in the Solution Explorer are greatly expanded. Since our objective is to get this application deployed to Amazon ECR, choose Push Container Images to Amazon Elastic Container Registry and click the Publish button. You will see the process walk through a few steps, and it will end with a message stating that the image has been successfully deployed into ECR.

Using JetBrains Rider

The process of adding a container using JetBrains Rider is very similar to the process used in Visual Studio. Open your application in Rider, right-click the project, select Add, and then Docker Support as shown in Figure 1.

Figure 1. Adding Docker Support in JetBrains Rider

This will bring up a window where you select the Target OS, in this case Linux. Once you have finished this, you will see a Dockerfile show up in your solution. Unfortunately, the AWS Toolkit for Rider does not currently support deploying the new container image to ECR. This means that any deployment to the cloud must be done with the AWS CLI or the AWS Tools for PowerShell and would be the same as the upload process used when storing a Windows container in ECR that we went over in an earlier post.
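
As a rough sketch of that CLI-based upload, assuming the ECR repository already exists and substituting your own account ID, region, and repository name for the placeholder values shown here, the sequence looks like this:

PS C:\MyCoreApp> aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com

PS C:\MyCoreApp> docker build -t mycoreapp .

PS C:\MyCoreApp> docker tag mycoreapp:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/mycoreapp:latest

PS C:\MyCoreApp> docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/mycoreapp:latest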

As you can see, containerizing a .NET Core based application is much easier to do as well as easier to deploy into AWS.

Containerizing a .NET Framework 4.x Application for AWS

In this post we are going to demonstrate ways in which you can containerize your applications for deployment into the cloud, the next step in minimizing resource usage and likely saving money. This article is different from the previous entries in this series: those were a discussion of containers and running them within the AWS infrastructure, while this post is much more practical, focusing on getting to that point from an existing non-containerized application.

Using Visual Studio

Adding container support using Visual Studio is straightforward.

Adding Docker Support

Open an old ASP.NET Framework 4.7 application or create a new one. Once open, right-click on the project name, select Add, and then Docker Support as shown in Figure 1.

Figure 1. Adding Docker Support to an application.

Your Output view, when set to show output from Container Tools, will show multiple steps being performed, and then it should finish successfully. When completed, you will see two new files added in the Solution Explorer, a Dockerfile and a subordinate .dockerignore file. You will also see that your default Debug setting has changed to Docker. You can see both changes in Figure 2.

Figure 2. Changes in Visual Studio after adding Docker support

You can test the support by clicking the Docker button. This will build the container, run it under your local Docker Desktop, and then open your default browser. This time, rather than going to a localhost URL you will instead go to an IP address, and if you compare the IP address in the URL to your local IP you will see that they are not the same. That is because this new IP address points to the container running on your system.

Before closing the browser and stopping the debug process, you will be able to confirm that the container is running by using the Containers view in Visual Studio as shown in Figure 3.

Figure 3. Using the Containers view in Visual Studio to see the running container

You can also use Docker Desktop to view running containers. Open Docker Desktop and select Containers / Apps. This will bring you to a list of the running containers and apps, one of which will be the container that you just started as shown in Figure 4.

Figure 4. Viewing a running container in Docker Desktop
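
If you prefer the command line to either of those views, docker ps lists the running containers along with their IDs, images, and published ports.

PS C:\> docker ps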

Once these steps have been completed, you are ready to save your container in ECR, just as we covered earlier in this series.

Deploying your Windows Container to ECR

However, there are some complications here, as the AWS Toolkit for Visual Studio does not support the container deployment options we saw earlier in the toolkit when working with Windows containers. Instead, we are going to use the AWS Tools for PowerShell and the Docker command line to build and publish your image to ECR. At a high level, the steps are:

·         Build your application in Release mode. This is the only way that Visual Studio puts the appropriate files in the right place, namely the obj\Docker\publish subdirectory of your project directory. You can see this value called out in the last line of your Dockerfile: COPY ${source:-obj/Docker/publish} .

·         Refresh your ECR authentication token. You need this later in the process so that you can login to ECR to push the image.

·         Build the Docker image.

·         Tag the image. This creates the image tag on the repository.

·         Push the image to the server. This copies the image into ECR.

Let's walk through them now. The first step is to build your application in Release mode. However, before you can do that, you will need to stop your currently running container. You can do that through either Docker Desktop or the Containers view in Visual Studio. If you do not do this, your build will fail because you will not be able to overwrite the necessary files. Once that is completed, your Release mode build should be able to run without a problem.
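
If you would rather script the Release build than run it from the IDE, something like the following from a Developer PowerShell prompt should work; the solution name is a placeholder, and afterward you should verify that the obj\Docker\publish folder has been populated, since that is what the Dockerfile copies.

PS C:\Projects\TradeYourTools> msbuild .\TradeYourTools.sln /p:Configuration=Release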

Next, open PowerShell and navigate to your project directory. This directory needs to be the one that contains the Dockerfile. The first thing we will do is set the authentication context. We do that by first getting the command to execute, and then executing that command, which is why this process has two steps.

$loginCommand = Get-ECRLoginCommand -Region <repository region>

And then

Invoke-Expression $loginCommand.Command

This refreshes the authentication token for ECR. The remaining commands are based upon an existing ECR repository. You can access this information through the AWS Explorer by clicking on the repository name. This will bring up the details page as shown in Figure 5.

Figure 5. Viewing the repository details in the AWS Explorer

The value marked with a 1 is the repository name and the value marked with a 2 is the repository URI. You will need both of those values for the remaining steps. Build the image:

docker build -t <repository> .

The next step is to tag the image. In this example we are setting this version as the latest version by appending both the repository name and URI with “:latest”.

docker tag <repository>:latest <URI>:latest

The last step is to push the image to the server:

docker push <URI>:latest

You will see a lot of work going on as everything is pushed to the repository, but eventually it will finish processing and you will be able to see your new image in the repository.

Note: Not all container services on AWS support Windows containers. Amazon ECS on AWS Fargate is one of the services that does as long as you make the appropriate choices as you configure your tasks. There are detailed directions to doing just that at https://aws.amazon.com/blogs/containers/running-windows-containers-with-amazon-ecs-on-aws-fargate/.

While Visual Studio offers a menu-driven approach to containerizing your application, you always have the option to containerize your application manually.

Containerizing Manually

Containerizing an application manually requires several steps. You'll need to create your Docker file and then coordinate the build of the application so that it works with the Docker file you created. We'll start with those steps first, and we'll do it using JetBrains Rider. The first thing you'll need to do is add a Docker file, named Dockerfile, to your sample application. This file needs to be in the root of your active project directory. Once you have added it to the project, right-click the file to open the Properties window and change the Build action to None and the Copy to output directory to Do not copy as shown in Figure 6.

Figure 6. Build properties for the new Docker file

This is important because it makes sure that the Docker file itself will not end up deployed into the container.

Now that we have the file, let’s start adding the instructions:

FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
ARG source
WORKDIR /inetpub/wwwroot

These commands define the source image with FROM, define a build argument, and then set the working directory where the code is going to run in the container. The source image that we have defined includes support for ASP.NET and .NET Framework 4.8, mcr.microsoft.com/dotnet/framework/aspnet:4.8, and is based on Windows Server 2019, windowsservercore-ltsc2019. There is an image for Windows Server 2022, windowsservercore-ltsc2022, but this may not be usable for you if you are not running the most current version of Windows on your machine.

The last thing we need to do is configure the Docker file to include the compiled application. However, before we can do that, we need to build the application in such a way that we can access the deployed bits. This is done by publishing the application. In Rider, you publish the application by right-clicking on the project and selecting the Publish option. This will give you the option to publish to either a Local folder or Server. This brings up the configuration screen where you can select the directory in which to publish, as shown in Figure 7.

Figure 7. Selecting a publish directory

It will be easiest if you select a directory underneath the project directory; we recommend one within the bin directory so that the IDEs will tend to ignore it. Clicking the Run button will publish the app to that directory. The last step is to add one more command to the Dockerfile, pointing the source argument to the directory in which you published the application.

COPY ${source:-bin/release} .

Once you add this last line into the Dockerfile, you are ready to deploy the Windows container to ECR using the steps that we went through in the last section.

Now that we have walked through two different approaches for containerizing your older .NET Framework-based Windows application, the next step is to do the same with a .NET Core-based application. As you will see, this process is a lot easier because we will build the application onto a Linux-based container so you will see a lot of additional support in the IDEs. Let’s look at that next.