.NET and Amazon Simple Notification Service (SNS)

SNS, as you can probably guess from its name, is a straightforward service that uses pub\sub messaging to deliver messages. Pub\Sub, or Publish\Subscribe, messaging is an asynchronous communication method. This model includes a publisher that sends the data, a subscriber that receives the data, and a message broker that handles the coordination between the publisher and subscriber. In this case, Amazon SNS is the message broker because it handles the transfer of messages from publisher to subscriber.

Note – The language used when looking at events and messaging as we did above can be confusing. Messaging is the pattern we discussed above. Messages are the data being sent and are part of both events and messaging. The term “message” is considered interchangeable with notification or event – even to the point where you will see articles about the messaging pattern that refer to the messages as events.

The main responsibility of the message broker is to determine which subscribers should be sent which messages. It does this using a topic. A topic can be thought of as a category that describes the data contained within the message. These topics are defined based on your business. There will be times when a broad approach is best, in which case you might have topics such as “Order” and “Inventory,” with every message related to each area sent to the corresponding topic. Thus, the order topic could include messages for “Order Placed” and “Order Shipped,” and subscribers would receive all of those messages. There may be other times when a very narrow focus is more appropriate, in which case you may have an “Order Placed” topic and an “Order Shipped” topic that systems can subscribe to independently. Both approaches have their strengths and weaknesses.

When you look at the concept of messaging, where one message has one recipient, the advantage that a service like SNS offers is the ability to distribute a single message to multiple recipients as shown in Figure 1, which is one of the key requisites of event-based architecture.

Figure 1. Pub\Sub pattern using Amazon SNS

Now that we have established that SNS can be effectively used when building in an event-based architecture, let’s go do just that!

Using AWS Toolkit for Visual Studio

If you’re a Visual Studio user, you can do a lot of the configuration and management through the toolkit. Going into Visual Studio and examining the AWS Explorer will show that one of the options is Amazon SNS. At this point, you will not be able to expand the service in the tree control because you have not yet started to configure it. Right-clicking on the service will bring up a menu with three options, Create topic, View subscriptions, and Refresh. Let’s get started by creating our first topic. Click on the Create topic link and create a topic. We created a topic named “ProDotNetOnAWS” – it seems to be a trend with us. Once you save the topic you will see it show up in the AWS Explorer.

Right-click on the newly created topic and select View topic. This will add the topic details screen into the main window as shown in Figure 2.

Figure 2. SNS topic details screen in the Toolkit for Visual Studio

In the details screen, you will see a button to Create New Subscription. Click this button to bring up the Create New Subscription popup window. There are two fields that you can complete, Protocol and Endpoint. The protocol field is a dropdown and contains various choices.

HTTP or HTTPS Protocol Subscription

The first two of these choices are HTTP and HTTPS. Choosing one of these protocols will result in SNS making an HTTP (or HTTPS) POST to the configured endpoint. The body of this POST is a JSON document with the following name-value pairs:

·         Message – the content of the message that was published to the topic

·         MessageId – A universally unique identifier for each message that was published

·         Signature – Base64-encoded signature of the Message, MessageId, Subject, Type, Timestamp, and TopicArn values.

·         SignatureVersion – version of the signature used

·         SigningCertURL – the URL of the certificate that was used to sign the message

·         Subject – the optional subject parameter used when the notification was published to a topic. In those examples where the topic is broadly-based, the subject can be used to narrow down the subscriber audience.

·         Timestamp – the time (GMT) when the notification was published.

·         TopicArn – the Amazon Resource Name (ARN) for the topic

·         Type – the type of message being sent. For an SNS message, this type is Notification.

At a minimum, your subscribing system will care about the Message field, as it contains the information that was provided by the publisher. One of the biggest advantages of using an HTTP or HTTPS protocol subscription is that the subscribed system does not have to do anything other than accept the message that is submitted. There is no special library to consume, no special interactions that must happen; just an endpoint that accepts requests.
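Putting those fields together, a delivered notification might look something like the following; every value here is illustrative rather than taken from a real topic. Note that the Message body is itself often JSON, but SNS treats it as an opaque string:

```json
{
  "Type" : "Notification",
  "MessageId" : "11111111-2222-3333-4444-555555555555",
  "TopicArn" : "arn:aws:sns:us-west-2:123456789012:Order",
  "Subject" : "Order Placed",
  "Message" : "{\"orderId\": 1001, \"status\": \"placed\"}",
  "Timestamp" : "2022-08-20T19:18:27.576Z",
  "SignatureVersion" : "1",
  "Signature" : "EXAMPLESIGNATURE==",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-EXAMPLE.pem"
}
```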

There are some considerations to keep in mind as you think about using SNS to manage your event notifications, because there are several different ways to manage the receipt of these notifications. The first is to create a single endpoint for each topic to which you subscribe. This makes each endpoint very discrete and responsible for handling only one thing, usually considered a plus in the programming world. However, it also means that the subscribing service now has external dependencies on multiple endpoints. Changing an endpoint URL, for example, will now require coordination across multiple systems.

On the other hand, there is another approach where you create a single endpoint that acts as the recipient of messages across multiple topics. The code within the endpoint identifies each message and then forwards it to the appropriate process. This approach abstracts away any work within the system, as all of those changes happen below this single, broadly bound endpoint. We have seen both of these approaches work successfully; it really comes down to your own business needs and how you see your systems evolving as you move forward.
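As a sketch of that second approach, the dispatch logic inside a broadly bound endpoint might key off the TopicArn field of the incoming notification. The topic and handler names below are hypothetical:

```csharp
using System;

public static class NotificationRouter
{
    // Picks a handler based on the topic the notification was published to.
    // In a real system this would invoke the matching processing pipeline;
    // here it just returns the handler name.
    public static string Route(string topicArn)
    {
        // The final segment of an SNS topic ARN is the topic name,
        // e.g. "arn:aws:sns:us-west-2:123456789012:OrderPlaced"
        string topicName = topicArn.Substring(topicArn.LastIndexOf(':') + 1);

        return topicName switch
        {
            "OrderPlaced"  => "ProcessNewOrder",
            "OrderShipped" => "NotifyCustomer",
            _              => "LogUnknownTopic"
        };
    }
}
```

Centralizing this switch means that adding a topic becomes an internal change to one endpoint rather than a new external dependency.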

Other Protocol Subscriptions

There are other protocol subscriptions available in the toolkit. The next two in the list are Email and Email (JSON). Notifications sent under these protocols are sent to the email address that is entered as the endpoint value. The email is built in one of two ways: with Email, the Message field of the notification becomes the body of the email; with Email (JSON), the email body is a JSON object very similar to the one used when working with the HTTP\HTTPS protocols. There are some business-to-business needs for this, such as sending a confirmation to a third party upon processing an order, but you will generally find any discussion of these two protocols under Application-to-Person (A2P) in the documentation and examples.

The next protocol that is available in the toolkit is Amazon SQS. Amazon Simple Queue Service (SQS) is a queue service that follows the messaging pattern that we discussed earlier where one message has one recipient and one recipient only.

The last protocol available in the toolkit is Lambda. Choosing this protocol means that a specified Lambda function will be called with the message payload being set as an input parameter. This option makes a great deal of sense if you are building a system based on serverless functions. Of course, you can also use HTTP\HTTPS protocol and make the call to the endpoint that surfaces the Lambda method; but using this direct approach will remove much of that intermediate processing.
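To give a feel for the Lambda protocol, here is a minimal handler sketch. It assumes the Amazon.Lambda.Core and Amazon.Lambda.SNSEvents NuGet packages; treat the exact shape as illustrative rather than definitive:

```csharp
using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;

public class Function
{
    // SNS invokes the function with an event that wraps the notification;
    // each record carries one published message.
    public void Handler(SNSEvent snsEvent, ILambdaContext context)
    {
        foreach (var record in snsEvent.Records)
        {
            // record.Sns.Message is the message body the publisher sent
            context.Logger.LogLine($"Received: {record.Sns.Message}");
        }
    }
}
```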

Choosing either the SQS or Lambda protocols will activate the Add permission for SNS topic to send messages to AWS resources checkbox as shown in Figure 3.

Figure 3. Create New Subscription window in the Toolkit for Visual Studio

Checking this box will create the necessary permissions allowing the topic to interact with AWS resources. This is not necessary if you are using HTTP\HTTPS or Email.

For the sake of this walk-through, we used an approach that is ridiculous for enterprise systems; we selected the Email (JSON) protocol. Why? So we could easily show you the next few steps in a way that you could easily duplicate. This is important because all you can do in the Toolkit is to create the topic and the subscription. However, as shown in Figure 4, this leaves the subscription in a PendingConfirmation state.

Figure 4. Newly created SNS topic subscription in Toolkit for Visual Studio

Subscriptions in this state are not yet fully configured, as they need to be confirmed before they are able to start receiving messages. Confirmation happens after a SubscriptionConfirmation message is sent to the endpoint, which happens automatically when creating a new subscription through the Toolkit. The JSON we received in email is shown below:

  "Type" : "SubscriptionConfirmation",
  "MessageId" : "b1206608-7661-48b1-b82d-b1a896797605",
  "Token" : "TOKENVALUE", 
  "TopicArn" : "arn:aws:sns:xxxxxxxxx:ProDotNetOnAWS",
  "Message" : "You have chosen to subscribe to the topic arn:aws:sns:xxxxxxx:ProDotNetOnAWS.\nTo confirm the subscription, visit the SubscribeURL included in this message.",
  "SubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=ConfirmSubscription&TopicArn=xxxxxxxx",
  "Timestamp" : "2022-08-20T19:18:27.576Z",
  "SignatureVersion" : "1",
  "Signature" : "xxxxxxxxxxxxx==",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-56e67fcb41f6fec09b0196692625d385.pem"

The Message indicates the action that needs to be taken – you need to visit the SubscribeURL that is included in the message. Clicking that link will bring you to a confirmation page in your browser like that shown in Figure 5.

Figure 5. Subscription confirmation message displayed in browser

Refreshing the topic in the Toolkit will show you that the PendingConfirmation message is gone and has been replaced with a real Subscription ID.
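Clicking a link works for a manual walk-through, but when the subscriber is another system you can confirm the subscription programmatically instead. The SDK client (covered in more detail below) exposes a ConfirmSubscriptionAsync method that takes the topic ARN and the Token value from the SubscriptionConfirmation message; the values here are placeholders:

```csharp
using System;
using Amazon.SimpleNotificationService;

var client = new AmazonSimpleNotificationServiceClient();

// Both values come from the received SubscriptionConfirmation message
var response = await client.ConfirmSubscriptionAsync(
    "arn:aws:sns:us-west-2:123456789012:ProDotNetOnAWS",
    "TOKENVALUE");

// Once confirmed, the subscription has a real ARN instead of PendingConfirmation
Console.WriteLine(response.SubscriptionArn);
```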

Using the Console

The process for using the console is very similar to the process we just walked through in the Toolkit. You can get to the service by searching in the console for Amazon SNS or by going into the Application Integration group under the services menu. Once there, select Create topic. At this point, you will start to see some differences in the experiences.

The first is that you have a choice of topic Type, as shown in Figure 6: FIFO (first-in, first-out) or Standard. FIFO is selected by default. However, selecting FIFO means that the service will follow the messaging architectural approach that we went over earlier: exactly-once message delivery with message ordering strictly preserved. The Standard type, on the other hand, supports at-least-once message delivery with best-effort ordering, and it works with the full range of subscription protocols.

Figure 6. Creating an SNS topic in the AWS Console

Figure 6 also displays a checkbox labeled Content-based message deduplication. This selection is only available when the FIFO type is selected. When checked, SNS uses a hash of the message body as the deduplication value, so messages with identical bodies are treated as duplicates. Otherwise, the publisher must supply an explicit deduplication value with each message, which SNS uses to determine whether a particular message has already been delivered.

Another difference between creating a topic in the console versus the toolkit is that you can optionally set preferences around message encryption, access policy, delivery status logging, delivery retry policy (HTTP\S), and, of course, tags. Let’s look in more detail at two of those preferences. The first is the Delivery retry policy. This allows you to set retry rules for how SNS will retry sending failed deliveries to HTTP/S endpoints; these are the only endpoints that support retry. You can manage the following values:

·         Number of retries – defaults to 3 but can be any value between 1 and 100

·         Retries without delay – defaults to 0 and represents how many of those retries should happen immediately, before the system starts waiting between attempts

·         Minimum delay – defaults to 20 seconds with a range from 1 to the value of the Maximum delay.

·         Maximum delay – defaults to 20 seconds with a range from the Minimum delay to 3,600 seconds.

·         Retry backoff function – defaults to Linear. There are four options: Exponential, Arithmetic, Linear, and Geometric. Each of these functions times the retries differently. You can see the differences between these options at https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html.
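As a rough illustration of how these values interact (a sketch of the idea, not the service’s actual implementation), a linear backoff spreads the delayed retries evenly between the minimum and maximum delay:

```csharp
using System;

public static class RetryBackoff
{
    // Returns the delay, in seconds, before delayed retry attempt
    // 'attempt' (1-based) out of 'totalRetries', interpolating linearly
    // between minDelay and maxDelay. Illustrative only.
    public static double LinearDelay(int attempt, int totalRetries,
                                     double minDelay, double maxDelay)
    {
        if (totalRetries <= 1) return minDelay;
        double step = (maxDelay - minDelay) / (totalRetries - 1);
        return minDelay + step * (attempt - 1);
    }
}
```

With the defaults above (minimum and maximum both 20 seconds), every delayed retry waits 20 seconds; widening the range spreads the waits out.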

The second preference that is available in the console but not the toolkit is Delivery status logging. This preference will log delivery status to CloudWatch Logs. You have two values to determine. The first is Log delivery status for these protocols, which presents a series of checkboxes for AWS Lambda, Amazon SQS, HTTP/S, Platform application endpoint, and Amazon Kinesis Data Firehose. These last two options are a preview of the next big difference between working through the toolkit and through the console.

Additional Subscriptions in the Console

Once you have finished creating the topic, you can then create a subscription. There are several protocols available for use in the console that are not available in the toolkit. These include:

·         Amazon Kinesis Data Firehose – configure this subscription to go to Kinesis Data Firehose. From there you can send notifications to Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and third-party service providers such as Datadog, New Relic, MongoDB, and Splunk.

·         Platform-application endpoint – this protocol sends the message to an application on a mobile device. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts. Go to https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-application-as-subscriber.html for more information on configuring your SNS topic to deliver to a mobile device.

·         SMS – this protocol delivers text (SMS) messages to SMS-enabled devices. Amazon SNS supports SMS messaging in several regions, and you can send messages to more than 200 countries and regions. An interesting aspect of SMS is that your account starts in an SMS sandbox, a non-production environment with a set of limits. Once you are convinced that everything is correct, you must create a case with AWS Support to move your account out of the sandbox and actually start sending messages to non-limited numbers.

Now that we have configured our SNS topic and subscription, let’s next look at sending a message.

.NET and Amazon SNS

The first step to interacting with SNS from within your .NET application is to install the appropriate NuGet package, AWSSDK.SimpleNotificationService. This will also install AWSSDK.Core. Once you have the NuGet package, you can access the appropriate APIs by adding the following using statements:

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

These namespaces provide access to the AmazonSimpleNotificationServiceClient class that manages the interaction with the SNS service as well as the models that are represented in the client methods. There are a lot of different types of interactions that you can support with this client. A list of the more commonly used methods is displayed below:

·         PublishAsync – Send a message to a specific topic for processing by SNS.

·         PublishBatchAsync – Send multiple messages to a specific topic for processing by SNS.

·         SubscribeAsync – Subscribe a new endpoint to a topic.

·         UnsubscribeAsync – Remove an endpoint’s subscription to a topic.

These four methods allow you to add and remove subscriptions as well as publish messages. There are dozens of other methods available from that client, including the ability to manage topics and confirm subscriptions.
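As a sketch of the subscription side, the following adds an email-JSON subscription like the one we created in the Toolkit; the topic ARN and address are placeholders:

```csharp
using System;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

var client = new AmazonSimpleNotificationServiceClient();

var response = await client.SubscribeAsync(new SubscribeRequest
{
    TopicArn = "arn:aws:sns:us-west-2:123456789012:ProDotNetOnAWS",
    Protocol = "email-json",          // same values as the Toolkit dropdown
    Endpoint = "someone@example.com"  // where the confirmation will be sent
});

// Will report "pending confirmation" until the endpoint confirms
Console.WriteLine(response.SubscriptionArn);
```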

The code below is a complete console application that sends a message to a specific topic.

static void Main(string[] args)
{
    string topicArn = "Arn for the topic to publish";
    string messageText = "Message from ProDotNetOnAWS_SNS";

    var client = new AmazonSimpleNotificationServiceClient();

    var request = new PublishRequest
    {
        TopicArn = topicArn,
        Message = messageText,
        Subject = Guid.NewGuid().ToString()
    };

    var response = client.PublishAsync(request).Result;

    Console.WriteLine($"Published message ID: {response.MessageId}");
}
As you can see, the topic needs to be identified by its ARN rather than simply the topic name. Publishing a message entails instantiating the client and then defining a PublishRequest object. This object contains all of the fields that we intend to send to the recipient, which in our case is simply the subject and message. Running the application presents a console as shown in Figure 7.

Figure 7. Console application that sent message through SNS

The message that was processed can be seen in Figure 8. Note the MessageId values are the same as in Figure 7.

Figure 8. Message sent through console application

We have only touched on the capabilities of Amazon SNS and its capacity to help implement event-driven architecture. However, there is another AWS service that is even more powerful, Amazon EventBridge. Let’s look at that next.

Deploying New Container Using AWS App2Container

In our last article, we went through the containerization of a running application. The last step of this process is to deploy the container. The default approach is to deploy a container image to ECR and then create the CloudFormation templates to run that image in Amazon ECS using Fargate. If you would prefer to deploy to Amazon EKS instead, you will need to go to the deployment.json file in the output directory. This editable file contains the default settings for the application, ECR, ECS, and EKS. We will walk through each of the major areas in turn.

The first section is responsible for defining the application and is shown below.

"a2CTemplateVersion": "1.0",
"applicationId": "iis-tradeyourtools-6bc0a317",
"imageName": "iis-tradeyourtools-6bc0a317",
"exposedPorts": [
              "localPort": 80,
              "protocol": "http"
"environment": [],

The applicationId and the imageName are values we have seen before when going through App2Container. The exposedPorts value should contain all of the IIS ports configured for the application. The one used in the example was not configured for HTTPS, but if it were, there would be another entry for that value. The environment value allows you to enter, as key/value pairs, any environment variables that may be used by the application. Unfortunately, App2Container is not able to determine these because it does its analysis on running code rather than on the code base. In our example, there are no environment variables that are necessary.

Note – If you aren’t sure whether there are environment variables that your application may access, you can see which variables are available by going into the System -> Advanced system settings -> Environment variables. This will provide you with a list of available variables and you can evaluate those as to their relevance to your application.

The next section is quite small and contains the ECR configuration. The ECR repository that will be created is named with the imageName from above and then versioned with the value in the ecrRepoTag as shown below.

"ecrParameters": {
       "ecrRepoTag": "latest"

We are using the value latest as our version tag.

There are two remaining sections in the deployment.json file. The first is the ECS setup information with the second being the EKS setup information. We will first look at the ECS section. This entire section is listed below.

"ecsParameters": {
       "createEcsArtifacts": true,
       "ecsFamily": "iis-tradeyourtools-6bc0a317",
       "cpu": 2,
       "memory": 4096,
       "dockerSecurityOption": "",
       "enableCloudwatchLogging": false,
       "publicApp": true,
       "stackName": "a2c-iis-tradeyourtools-6bc0a317-ECS",
       "resourceTags": [
                     "key": "example-key",
                     "value": "example-value"
       "reuseResources": {
              "vpcId": "vpc-f4e4d48c",
              "reuseExistingA2cStack": {
                     "cfnStackName": "",
                     "microserviceUrlPath": ""
              "sshKeyPairName": "",
              "acmCertificateArn": ""
       "gMSAParameters": {
              "domainSecretsArn": "",
              "domainDNSName": "",
              "domainNetBIOSName": "",
              "createGMSA": false,
              "gMSAName": ""
       "deployTarget": "FARGATE",
       "dependentApps": []

The most important value here is createEcsArtifacts, which, if set to true, means that deploying with App2Container will deploy the image into ECS. The next ones to look at are cpu and memory. These values are only used for Linux containers; in our case they do not matter because this is a Windows container. The next two values, dockerSecurityOption and enableCloudwatchLogging, are only changed in special cases, so they will generally stay at their default values. The next value, publicApp, determines whether the application will be configured into a public subnet with a public endpoint. This is set to true because this is our hoped-for behavior. The next value, stackName, defines the name of the CloudFormation stack, while the value after that, resourceTags, holds the custom tags that should be added to the ECS task definition. There is a default key/value pair in the file, but it will not be used if left in place; only tags whose keys are not example-key will be added.

The next section, reuseResources, is where you can configure whether you wish to use any pre-existing resources, namely VPC – which is added to the vpcId value. When left blank, as shown below, App2Container will create a new VPC.

"reuseResources": {
     "vpcId": "",
     "reuseExistingA2cStack": {
            "cfnStackName": "",
            "microserviceUrlPath": ""
     "sshKeyPairName": "",
     "acmCertificateArn": ""

Running the deployment with these settings will result in a brand new VPC being created. This means that, by default, you wouldn’t be able to connect in or out of the VPC without making changes to the VPC. If, however, you have an already existing VPC that you want to use, update the vpcId key with the ID of the appropriate VPC.

Note: App2Container requires that the included VPC has a routing table that is associated with at least two subnets and an internet gateway. The CloudFormation template for the ECS service requires this so that there is a route from your service to the internet from at least two different AZs for availability. Currently, there is no way for you to define these subnets. You will receive a Resource creation failures: PublicLoadBalancer: At least two subnets in two different Availability Zones must be specified message if your VPC is not set up properly.

You can also choose to reuse an existing stack created by App2Container. Doing this will ensure that the application is deployed into the already existing VPC and that the URL for the new application is added to the already created Application Load Balancer rather than being added to a new ALB.

The next value, sshKeyPairName, is the name of the EC2 key pair used for the instances on which your container runs. Using this rather defeats the point of using containers, so we left it blank as well. The last value, acmCertificateArn, is for the AWS Certificate Manager ARN that you want to use if you are enabling HTTPS on the created ALB. This parameter is required if you use an HTTPS endpoint for your ALB; remember, as we went over earlier, this means that the request forwarded into the application will be on port 80 and unencrypted, because HTTPS is terminated at the ALB.

The next set of configuration values is part of the gMSAParameters section. This becomes important to manage if your application relies upon group Managed Service Account (gMSA) Active Directory accounts. This can only be used if deploying to EC2 and not Fargate (more on this later). The individual values are:

·         domainSecretsArn – The AWS Secrets Manager ARN containing the domain credentials required to join the ECS nodes to Active Directory.

·         domainDNSName – The DNS Name of the Active Directory the ECS nodes will join.

·         domainNetBIOSName – The NetBIOS name of the Active Directory to join.

·         createGMSA – A flag determining whether to create the gMSA Active Directory security group and account using the name supplied in the gMSAName field.

·         gMSAName – The name of the Active Directory account the container should use for access.

There are two fields remaining, deployTarget and dependentApps. For deployTarget there are two valid values for .NET applications running on Windows: fargate and ec2. You can only deploy to Fargate if your container is Windows Server 2019 or more recent, which would only be possible if your worker machine, the one you used for containerizing, was running Windows Server 2019 or later. Also, you cannot deploy to Fargate if you are using gMSA.

The value dependentApps is interesting, as it handles those applications that AWS defines as “complex Windows applications”. We won’t go into them in more detail here, but you can go to https://docs.aws.amazon.com/app2container/latest/UserGuide/summary-complex-win-apps.html if you are interested in learning more about these types of applications.

The next section in the deployment.json file is eksParameters. You will see that many of these parameters are the same as those we went over when talking about the ECS parameters. The only differences are the createEksArtifacts parameter, which needs to be set to true if deploying to EKS, and, in the gMSA section, the gMSAName parameter, which has inexplicably been changed to gMSAAccountName.

Once you have the deployment file set as desired, you next deploy the container:

PS C:\App2Container> app2container generate app-deployment --application-id APPID --deploy

This process takes several minutes, and you should get output like that shown in Figure 1. The gold arrow points to the URL where you can go to see your deployed application – go ahead and look at it to confirm that it has been successfully deployed and is running.

Figure 1. Output from generating an application deployment in App2Container

Logging in to the AWS console and going to Amazon ECR will show you the ECR repository that was created to store your image as shown in Figure 2.

Figure 2. Verifying the new container image is available in ECR

Once everything has been deployed and verified, you can poke around in ECS to see how it is all put together. Remember though, if you are looking to make modifications it is highly recommended that you use the CloudFormation templates, make the changes there, and then re-upload them as a new version. That way you will be able to easily redeploy as needed and not worry about losing any changes that you may have added. You can either alter the templates in the CloudFormation section of the console or you can find the templates in your App2Container working directory, update those, and then use those to update the stack.

Amazon RDS – PostgreSQL for .NET Developers

PostgreSQL is a free, open-source database that emphasizes extensibility and SQL compliance and was first released in 1996. A true competitor to commercial databases such as SQL Server and Oracle, PostgreSQL supports both online transaction processing (OLTP) and online analytical processing (OLAP) and has one of the most advanced performance features available, multi-version concurrency control (MVCC). MVCC supports the simultaneous processing of multiple transactions with almost no deadlocks, so transaction-heavy applications and systems will most likely benefit from using PostgreSQL over SQL Server, and there are companies that use PostgreSQL to manage petabytes of data.

Another feature that makes PostgreSQL attractive is that not only does it support the traditional relational database approach, it also fully supports a JSON/JSONB key/value storage approach, which makes it a valid alternative to more traditional NoSQL databases; you can use a single product to support the two most common data access approaches. However, even though PostgreSQL is open-source, free software like MySQL and MariaDB, its enterprise-level feature set and the work it takes to manage and maintain those features make it slightly more expensive to run on Amazon RDS than those other open-source products.

PostgreSQL and .NET

As with any database products that you will access from your .NET application, its level of support for .NET is important. Fortunately for us, there is a large community involved in helping ensure that PostgreSQL is relevant to .NET users.

Let’s look at what you need to do to get .NET and PostgreSQL working together. The first thing you need to do is to include the necessary NuGet packages, Npgsql and Npgsql.EntityFrameworkCore.PostgreSQL as shown in Figure 1.

Figure 1. NuGet packages required to connect to PostgreSQL

Once you have the packages, the next thing that you need to do is to configure your application to use PostgreSQL. You do this by using the UseNpgsql method when overriding the OnConfiguring method in the context class as shown below:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseNpgsql("connection string here");
}

A connection string for PostgreSQL commonly includes six fields:

  • server – server with which to connect
  • port – port number on which PostgreSQL is listening
  • user id – user name
  • password – password
  • database – database with which to connect
  • pooling – whether to use connection pooling (true or false)
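Put together, a complete connection string follows this shape; the server, credentials, and database names below are placeholders:

```
Server=mydbinstance.abc123xyz.us-west-2.rds.amazonaws.com;Port=5432;User Id=postgres;Password=mypassword;Database=mydatabase;Pooling=true;
```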

When working in an ASP.NET Core application, the connection string is added to the appsettings.json file as shown in Figure 2.

Figure 2. Adding a connection string to an ASP.NET Core application

Let’s now go create a PostgreSQL database.

Setting up a PostgreSQL Database on Amazon RDS

Now that we know how to set up our .NET application to access PostgreSQL, let’s look at setting up a PostgreSQL instance. First, log into the console, go to RDS, and select Create database. On the Create database screen, select Standard create and then PostgreSQL. There are a lot of different versions that you can select from; however, the NuGet packages that we used in our earlier example require a reasonably modern version of PostgreSQL, so unless you have a specific reason to use an older version you should use the default, most up-to-date version.

Once you have defined the version of PostgreSQL that you will use, your next option is to select the Template that you would like to use. Note that you only have two different templates to choose from:

·         Production – defaults are set to support high availability and fast, consistent performance.

·         Dev/Test – defaults are set in the middle of the range.

Note: Both MySQL and MariaDB had a third template, Free tier, that is not available when creating a PostgreSQL database. That does not mean that you must automatically pay, however, as the AWS Free Tier offer for Amazon RDS provides free use of Single-AZ Micro DB instances running PostgreSQL. It is important to note that free usage is capped at 750 instance hours per month across all your RDS databases.

Selecting a template sets defaults across the rest of the setup screen, and we will call those values out as we go through the remaining items.

Once you select a template, your next setup area is Availability and durability. There are three options to choose from:

·         Multi-AZ DB cluster – As of the time of writing, this option is in preview. Selecting it creates a DB cluster with a primary DB instance and two readable standby instances, each in a different Availability Zone (AZ). It provides high availability and data redundancy, and it increases capacity to serve read workloads.

·         Multi-AZ DB instance – This option creates a primary DB instance and a standby DB instance in a different AZ. It provides high availability and data redundancy, but the standby instance doesn’t support connections for read workloads. This is the default if you chose the Production template.

·         Single DB instance – This option creates a single DB instance with no standby instances. This is the default if you chose the Dev/Test template.

The next section, Settings, is where you provide the DB instance identifier as well as your Master username and Master password. Note that the instance identifier names the RDS instance itself, not a database within it, and it must be unique across all the DB instances you own in the current region, regardless of engine. You also have the option of having AWS auto-generate the password for you.

The next section allows you to select the DB instance class. You have the same three filters as before: Standard classes, Memory optimized classes, and Burstable classes. Selecting one of the filters changes the values in the instance drop-down box. To stay within the free tier, select Burstable classes and then one of the instances with micro in the name, such as a db.t3.micro, as shown in Figure 3.

Figure 3. Selecting a free-tier compatible DB instance

The next section in the setup is the Storage section, with the same options that were available when going through the MySQL and MariaDB setups, though the default values may differ based upon the instance class that you selected. After the Storage section come the Connectivity and Database authentication sections that we walked through earlier; they are standard across all RDS engine options, so we will not go through them again now.

Selecting the Create database button will take you back to the RDS Databases screen, where you will get a notification that the database is being created as well as a button that you can click to access the connection details. If you chose to have AWS auto-generate your master password, make sure you retrieve it here – you will only be able to access the password this one time.
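The console steps above can also be scripted. As a rough sketch using the AWS SDK for .NET (the Amazon.RDS NuGet package), they map onto a single CreateDBInstance call; the identifier, credentials, and storage size below are placeholders, not values from the walkthrough:

```csharp
using Amazon.RDS;
using Amazon.RDS.Model;

var client = new AmazonRDSClient();

// Mirrors the console choices: engine, instance class, credentials, storage.
var response = await client.CreateDBInstanceAsync(new CreateDBInstanceRequest
{
    DBInstanceIdentifier = "my-postgres-instance", // must be unique in the region
    Engine = "postgres",
    DBInstanceClass = "db.t3.micro",               // free-tier compatible class
    MasterUsername = "postgres",
    MasterUserPassword = "your-password-here",
    AllocatedStorage = 20                          // storage in GiB
});
```

As in the console, the instance takes several minutes to become available after the call returns.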

The pricing for PostgreSQL is slightly higher than for MariaDB or MySQL when looking at comparable configurations, about 6% higher.

Selecting between PostgreSQL and MySQL/MariaDB

There are some significant differences between PostgreSQL and MySQL/MariaDB that can become meaningful when building your .NET application. Some of the more important differences are listed below. There are also quite a few management and configuration differences, but those are not mentioned since RDS manages all of those for you!

·         Multi-Version Concurrency Control – PostgreSQL was the first DBMS to roll out multi-version concurrency control (MVCC), which means reading data never blocks writing data, and vice versa. If your database is heavily used for both reading and writing, then this may be a significant influencer.

·         More types supported – PostgreSQL natively supports NoSQL as well as a rich set of data types including Numeric Types, Boolean, Network Address, Bit String Types, and Arrays. It also supports JSON, hstore (a list of comma-separated key/value pairs), and XML, and users can even add new types.

·         Sequence support – PostgreSQL supports multiple tables taking their ids from the same sequence while MySQL/MariaDB do not.

·         Index flexibility – PostgreSQL supports functional and partial (conditional) indexes, which makes PostgreSQL database tuning very flexible – for example, an index can be built over the output of a function, or cover only the rows that match a condition.

·         Spatial capability – PostgreSQL has much richer support for spatial data management, quantity measurement, and geometric topology analysis.
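As an illustration of the partial-index point above, EF Core can declare a conditional index when targeting PostgreSQL. The Order entity and the filter condition here are invented for the example:

```csharp
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";
}

public class OrderContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // A partial (conditional) index: only orders that are not yet shipped
        // are indexed, keeping the index small for a common "open orders" query.
        modelBuilder.Entity<Order>()
            .HasIndex(o => o.Status)
            .HasFilter("\"Status\" <> 'Shipped'");
    }
}
```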

While PostgreSQL is considered one of the most advanced databases around, that doesn’t mean that it should automatically be your choice. Many of the advantages listed above can be considered advanced functionality that you may not need. If you simply need a place to store rarely changing data, then MySQL/MariaDB may still be a better choice. Why? Because it is less expensive and performs better than PostgreSQL when performing simple reads with simple joins. As always, keep your use cases in mind when selecting your database.

Note: AWS contributes to an open-source project called Babelfish for PostgreSQL, which is designed to provide the capability for PostgreSQL to understand queries from applications written for Microsoft SQL Server. Babelfish understands the SQL Server wire-protocol and T-SQL. This understanding means that you can use SQL Server drivers for .NET to talk to PostgreSQL databases. As of this writing, this functionality is not yet available in the PostgreSQL version of RDS. It is, however, available for Aurora PostgreSQL. We will go over this in more detail later in the chapter. The project can be seen at https://www.babelfishpg.org.

MariaDB, MySQL, and PostgreSQL are all open-source databases that have existed for years and that you can use anywhere, including on that old server under your desk. The next database we will talk about, Amazon Aurora, is available only in the cloud, within RDS.

Configuring the AWS Toolkit in JetBrains’ Rider

JetBrains just announced the release of their AWS Toolkit for Rider. For those who aren’t familiar with it, Rider is JetBrains’ cross-platform .NET IDE. Rider is based on JetBrains’ IntelliJ platform and ReSharper, and can be considered a direct competitor to Visual Studio. It is also considerably less expensive than Visual Studio. If you don’t already have Rider, you can download a 30-day trial version from the JetBrains site.

NOTE: There is actually more work that needs to be done to fully configure your system to support the AWS Toolkit in Rider. The prerequisites include:

  • An AWS account configured with the correct permissions. You can create a free AWS account on the AWS site.
  • The AWS CLI (Command Line Interface) and AWS SAM (Serverless Application Model) CLI tools, which the plugin uses to build, deploy, and invoke your code. The SAM install instructions are several steps long and not trivial, and neither are the regular CLI install steps.
  • .NET Core 2.0 or 2.1. It’s not enough to just have .NET Core 3.0 or above installed – make sure you’re running tests on the same version you’re targeting.
  • Docker, which the SAM CLI uses to download and create a container to invoke your Lambda locally. (This should be covered by the CLI instructions, but just in case.)

Install the Toolkit

Before we can walk through the new toolkit, we have to get it installed. Installing the toolkit is simple once all the prerequisites are completed. The directions are available on the AWS site; I have created my own version of those directions below because the AWS sample is based upon the IntelliJ IDE rather than the Rider IDE, so some of the steps are slightly unclear or look different.

  1. Open Rider. You install toolkits through the IDE rather than outside of the context of the IDE (unlike a lot of the Visual Studio toolkits)
  2. Select Configure on the splash screen as shown below.
  3. Select Plugins on the drop-down menu. This will bring up the Plugins screen.
  4. Search for “AWS”. You should get a list of plugins, including the AWS Toolkit.
  5. Install it.
  6. After installing, you will be asked to restart the IDE. You should probably go ahead and follow those directions.  I didn’t try NOT doing it, but I am a firm believer in the usefulness of “turning it off and on”; so I am all over it when the manufacturer tells me to take that step.
  7. Once the application is restarted, selecting New Solution will show you that there is a new solution template available, the AWS Serverless Application, as shown below.
  8. Go ahead and Create the solution named HelloWorld. It may be a little predictable, but HelloWorld is still the gold standard of initial applications…  Rider will go ahead and load the solution once it is completed and then open the README.md file.

Configure your connection to AWS

If you have already connected to AWS through the CLI or another toolkit or IDE, you should see the following in the bottom left corner of the IDE:


If not, follow the AWS documentation’s instructions for configuring the AWS CLI. This will set up your default profile. You will probably have to restart Rider after setting up your credentials.
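For reference, running `aws configure` writes a default profile to the ~/.aws/credentials file that looks something like this (both key values below are placeholders):

```ini
[default]
aws_access_key_id = your-access-key-id
aws_secret_access_key = your-secret-access-key
```

The toolkit reads profiles from this file, which is why setting up the CLI also sets up Rider.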

Once Rider is restarted, you will see a notification on the bottom-right of the window that says “No credentials selected.”  Before you do what I did and roll your eyes, muttering “OMG” under your breath, click on that message (which is really a button)


And find out that it is actually a menu.  Select the credential from the drop-down (it is probably called Default)


And you’ll see that it no longer says “No credentials selected.”  Congratulations, you are connected!