.NET and Amazon EventBridge

As briefly mentioned in an earlier post, Amazon EventBridge is a serverless event bus service designed to deliver data from applications and services to a variety of targets. It distributes events using a different methodology than SNS does.

The event producers submit their events to the event bus. From there, a set of rules determines which messages get sent to which recipients. This flow is shown in Figure 1.

Figure 1. Message flow through Amazon EventBridge.

The key difference between SNS and EventBridge is that in SNS you send your message to a topic, so the sender makes some decisions about where the message is going. These topics can be very broadly defined and domain-focused, so that any application interested in order-related messages subscribes to the order topic, but this still obligates the sender to have some knowledge about the messaging system.

In EventBridge you simply toss messages into the bus and the rules sort them to the appropriate destination. Thus, unlike SNS, where the topic matters more than the message itself, in EventBridge you can't define rules without an understanding of the message to which you want to apply them. With that in mind, we'll go in a bit of a different order and first look at using EventBridge within a .NET application; that way we'll have a definition of the message on which we want to apply rules.

.NET and Amazon EventBridge

The first step to interacting with EventBridge from within your .NET application is to install the appropriate NuGet package, AWSSDK.EventBridge. This will also install AWSSDK.Core. Once you have the NuGet package, you can access the appropriate APIs by adding several using statements:

using Amazon.EventBridge;
using Amazon.EventBridge.Model;

You will also need to ensure that you have added:

using System.Collections.Generic;
using System.Text.Json;

These namespaces provide access to the AmazonEventBridgeClient class that manages the interaction with the EventBridge service, as well as the models that are represented in the client methods. As with SNS, you can manage all aspects of creating the various EventBridge parts, such as event buses, rules, and endpoints. You can also use the client to push events to the bus, which is what we do now. Let's first look at the complete code and then we will walk through the various sections.

static void Main(string[] args)
{
    var client = new AmazonEventBridgeClient();

    var order = new Order();

    var message = new PutEventsRequestEntry
    {
        Detail = JsonSerializer.Serialize(order),
        DetailType = "CreateOrder",
        EventBusName = "default",
        Source = "ProDotNetOnAWS"
    };

    var putRequest = new PutEventsRequest
    {
        Entries = new List<PutEventsRequestEntry> { message }
    };

    var response = client.PutEventsAsync(putRequest).Result;
    Console.WriteLine(
        $"Request processed with ID of #{response.ResponseMetadata.RequestId}");
    Console.ReadLine();
}

The first thing we are doing in the code is newing up our AmazonEventBridgeClient so that we can use the PutEventsAsync method, which is the method used to send the event to EventBridge. That method expects a PutEventsRequest object that has an Entries field, which is a list of PutEventsRequestEntry objects. There should be a PutEventsRequestEntry object for every event that you want EventBridge to process, so a single push to EventBridge can include multiple events.
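The Order type itself is not shown above. A minimal sketch that lines up with the JSON payload we capture later in this section (properties Id, OrderDate, CustomerId, and OrderDetails) could look like the following; the OrderDetail type is our own placeholder assumption:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the Order type being serialized. The property names
// match the JSON payload captured later in this section; the OrderDetail
// type itself is a placeholder assumption.
public class Order
{
    public int Id { get; set; }
    public DateTime OrderDate { get; set; }
    public int CustomerId { get; set; }
    public List<OrderDetail> OrderDetails { get; set; } = new();
}

public class OrderDetail
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}
```

Serializing a default instance of this class with System.Text.Json produces exactly the Detail string we will see on the wire shortly.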

Tip: One model of event-based architecture is to use multiple small messages that imply different items of interest. Processing an order, for example, may result in a message regarding the order itself as well as messages regarding each of the products included in the order so that the inventory count can be managed correctly. This means the Product domain doesn't listen for order messages; it only pays attention to product messages. Each of these approaches has its own advantages and disadvantages.

The PutEventsRequestEntry contains the information to be sent. It has the following properties:

·         Detail – a valid JSON object that cannot be more than 100 levels deep.

·         DetailType – a string that provides information about the kind of detail contained within the event.

·         EventBusName – a string that determines the appropriate event bus to use. If absent, the event will be processed by the default bus.

·         Resources – a List<string> that contains ARNs which the event primarily concerns. May be empty.

·         Source – a string that defines the source of the event.

·         Time – the time stamp of the event. If not provided, EventBridge will use the time stamp of when the Put call was processed.

In our code, we only set the Detail, DetailType, EventBusName, and Source.
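For illustration, a sketch of an entry that populates all of the properties above might look like this; the ARN is a placeholder assumption, not a value from our application:

```csharp
// Sketch: a PutEventsRequestEntry with every property populated.
// The ARN below is a placeholder assumption for illustration only.
var fullEntry = new PutEventsRequestEntry
{
    Detail = JsonSerializer.Serialize(order),
    DetailType = "CreateOrder",
    EventBusName = "default",
    Resources = new List<string>
    {
        "arn:aws:s3:::example-order-bucket" // placeholder ARN
    },
    Source = "ProDotNetOnAWS",
    Time = DateTime.UtcNow
};
```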

This code is set up in a console application, so running the application gives results similar to those shown in Figure 2.

Figure 2. Console application that sent a message through EventBridge

We then used Progress Telerik Fiddler to view the request so we can see the message that was sent. The JSON from this message is shown below.

{
    "Entries":
    [
        {
            "Detail": "{\"Id\":0,
                        \"OrderDate\":
                                \"0001-01-01T00:00:00\",
                        \"CustomerId\":0,
                        \"OrderDetails\":[]}",
            "DetailType": "CreateOrder",
            "EventBusName": "default",
            "Source": "ProDotNetOnAWS"
        }
    ]
}
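One detail the console code glosses over is that PutEventsAsync can partially succeed: some entries in a request may be accepted while others are rejected. The response reports this through FailedEntryCount and per-entry error details, which you can check along the following lines (a sketch, using await rather than .Result):

```csharp
// Sketch: detect partially failed PutEvents requests.
var response = await client.PutEventsAsync(putRequest);

if (response.FailedEntryCount > 0)
{
    // Result entries line up one-to-one with the request entries; failed
    // ones carry an ErrorCode and ErrorMessage instead of an EventId.
    foreach (var entry in response.Entries)
    {
        if (entry.ErrorCode != null)
        {
            Console.WriteLine(
                $"Entry failed: {entry.ErrorCode} - {entry.ErrorMessage}");
        }
    }
}
```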

Now that we have the message that we want to process in EventBridge, the next step is to set up EventBridge. At a high level, configuring EventBridge in the AWS console is simple.

Configuring EventBridge in the Console

You can find Amazon EventBridge by searching in the console or by going into the Application Integration group. Your first step is to decide whether you wish to use your account's default event bus or create a new one. Creating a custom event bus is simple, as all you need to provide is a name, but we will use the default event bus.

Before going any further, you should translate the event that you sent to the event that EventBridge will be processing. You do this by going into Event buses and selecting the default event bus. This will bring you to the Event bus detail page. On the upper right, you will see a button Send events. Clicking this button will bring you to the Send events page where you can configure an event. Using the values from the JSON we looked at earlier, fill out the values as shown in Figure 3.

Figure 3. Getting the “translated” event for EventBridge

Once filled out, clicking the Review button brings up a window with a JSON object. Copy and paste this JSON as we will use it shortly. The JSON that we got is displayed below.

{
  "version": "0",
  "detail-type": "CreateOrder",
  "source": "ProDotNetOnAWS",
  "account": "992271736046",
  "time": "2022-08-21T19:48:09Z",
  "region": "us-west-2",
  "resources": [],
  "detail": "{\"Id\":0,\"OrderDate\":\"0001-01-01T00:00:00\",\"CustomerId\":0,\"OrderDetails\":[]}"
}

The next step is to create a rule that will evaluate the incoming messages and route them to the appropriate recipient. To do so, click on the Rules menu item and then the Create rule button. This will bring up Step 1 of the Create rule wizard. Here, you define the rule by giving it a name that must be unique by event bus, select the event bus on which the rule will run, and choose between Rule with an event pattern and Schedule. Selecting to create a schedule rule will create a rule that is run regularly on a specified schedule. We will choose to create a rule with an event pattern.

Step 2 of the wizard allows you to select the Event source. You have three options: AWS events or EventBridge partner events, Other, or All events. The first option references the ability to set rules that identify specific AWS or EventBridge partner services such as Salesforce, GitHub, or Stripe, while the last option allows you to set up destinations that will be forwarded every event that comes through the event bus. We typically see this when there is a requirement to log events in a database as they come in, or some similar business rule. We will select Other so that we can handle custom events from our application(s).

Next, you can add in a sample event. You don't have to take this action, but it is recommended when writing and testing the event pattern or any filtering criteria. Since we have a sample message, we will select Enter my own and paste the sample event into the box as shown in Figure 4.

Figure 4. Adding a Sample Event when configuring EventBridge

Be warned, however, that if you paste the event directly into the sample event box it will not work; the matching algorithm will reject it as invalid unless an id is added to the JSON, as highlighted by the golden arrow in Figure 4.

Once you have your sample event input, the next step is to create the Event pattern that will determine where this message should be sent. Since we are using a custom event, select the Custom patterns (JSON editor) option. This will bring up a JSON editor window in which you enter your rule. There is a drop-down of helper functions that will help you put the proper syntax into the window but, of course, there is no option for simple matching; you have to know that syntax already. Fortunately, it is identical to the rule itself, so an event pattern that will select every event with a detail-type of "CreateOrder" is:

{
  "detail-type": ["CreateOrder"]
}

Adding this into the JSON editor and selecting the Test pattern button will validate that the sample event matched the event pattern. Once you have successfully tested your pattern select the Next button to continue.
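Event patterns can also name multiple fields, in which case every field listed must match for the rule to fire. For example, a pattern that matches only CreateOrder events coming from our application's source would be:

```json
{
  "source": ["ProDotNetOnAWS"],
  "detail-type": ["CreateOrder"]
}
```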

You should now be on the Step 3 Select Target(s) screen where you configure the targets that will receive the event. There are three different target types that you can select from: EventBridge event bus, EventBridge API Destination, or AWS Service. Clicking on each of the different target types will change the set of information that you need to provide to define the target. We will examine two of these in more detail, the EventBridge API destination and the AWS service, starting with the AWS service.

Selecting the AWS service radio button brings up a drop-down list of AWS services that can be targeted. Select the SNS target option. This will bring up a drop-down list of the available topics. Select the topic we worked with in the previous section and click the Next button. You will then have the option to configure Tags, after which you can select Create rule.

Once we had this rule configured, we re-ran our code to send an event from the console. Within several seconds we received the email: our console application running on our local machine sent the event to EventBridge, the rule filtered the event to SNS, and SNS composed and sent the email containing the order information that we submitted from the console.

Now that we have verified the rule the fun way, let’s go back into it and make it more realistic. You can edit the targets for a rule by going into Rules from the Amazon EventBridge console and selecting the rule that you want to edit. This will bring up the details page. Click on the Targets tab and then click the Edit button. This will bring you back to the Step 3 Select Target(s) screen. From here you can choose to add an additional target (you can have up to 5 targets for each rule) or replace the target that pointed to the SNS service. We chose to replace our existing target.

Since we are looking at using EventBridge to communicate between various microservices in our application we will configure the target to go to a custom endpoint. To do so requires that we choose a Target type of EventBridge API destination. We will then choose to Create a new API destination which will provide all of the destination fields that we need to configure. These fields are listed below.

·         Name – the name of the API destination. Destinations can be reused in different rules, so make sure the name is clear.

·         Description – optional value describing the destination.

·         API destination endpoint – the URL to the endpoint which will receive the event.

·         HTTP Method – the HTTP method used to send the event, can be any of the HTTP methods.

·         Invocation rate limit per second – an optional value, defaulted to 300, that caps the number of invocations per second. If events arrive faster than the configured rate, some may not be delivered.

The next section to configure is the Connection. The connection contains information about authorization as every API request must have some kind of security method enabled. Connections can be reused as well, and there are three different Authorization types supported. These types are:

·         Basic (Username/Password) – where a username and password combination is entered into the connection definition.

·         OAuth Client Credentials – where you enter the OAuth configuration information such as Authorization endpoint, Client ID, and Client secret.

·         API Key – which adds up to 5 key\value pairs in the header.

Once you have configured your authorization protocol you can select the Next button to once again complete moving through the EventBridge rules creation UI.

There are two approaches that are commonly used when creating the rule to target API endpoint mapping. The first is a single endpoint per type of expected message. This means that, for example, if you were expecting “OrderCreated” and “OrderUpdated” messages then you would have created two separate endpoints, one to handle each message. The second approach is to create a generic endpoint for your service to which all inbound EventBridge messages are sent and then the code within the service evaluates each message and manages it from there.
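The second approach implies a small dispatch layer inside the receiving service. As a sketch, assuming an ASP.NET Core minimal API (the route, message names, and handler placeholders are our own assumptions, not part of EventBridge):

```csharp
using System.Text.Json;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Sketch: one generic endpoint receiving all EventBridge API destination
// calls for this service and dispatching on the event's detail-type.
app.MapPost("/events", async (HttpRequest request) =>
{
    using var doc = await JsonDocument.ParseAsync(request.Body);
    var root = doc.RootElement;

    var detailType = root.GetProperty("detail-type").GetString();
    var detail = root.GetProperty("detail").GetRawText();

    switch (detailType)
    {
        case "OrderCreated":
            // hand the detail payload to order-creation handling
            break;
        case "OrderUpdated":
            // hand the detail payload to order-update handling
            break;
        default:
            return Results.BadRequest($"Unknown detail-type: {detailType}");
    }

    return Results.Ok();
});

app.Run();
```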

Modern Event Infrastructure Creation

So far, we have managed all the event management through the console, creating topics and subscriptions in SNS and rules, connections, and targets in EventBridge. However, taking this approach in the real world will be extremely painful. Instead, modern applications are best served by modern methods of creating services; methods that can be run on their own without any human intervention. There are two approaches that we want to touch on now, Infrastructure-as-Code (IaC) and in-application code.

Infrastructure-as-Code

Using AWS CloudFormation or the AWS Cloud Development Kit within the build and release process allows developers to manage the growth of their event infrastructure as their usage of events grows. Typically, the teams building the systems that send events are responsible for creating the infrastructure required for sending, while the teams building the systems that listen for events manage the creation of the receiving infrastructure. Thus, if you are planning on using SNS, the sending system would have the responsibility for adding the applicable topic(s) while the receiving system would be responsible for adding the appropriate subscription(s) to the topics in which it is interested.

Using IaC to build out your event infrastructure allows you to scale your use of events easily and quickly. It also makes it easier to manage any changes that you may feel are necessary, as it is very common for the messaging approach to be adjusted several times as you determine the level of messaging that is appropriate for the interactions needed within your overall system.

In-Application Code

In-Application code is a completely different approach from IaC as the code to create the infrastructure resides within your application. This approach is commonly used in “configuration-oriented design”, where configuration is used to define the relationship(s) that each application plays. An example of a configuration that could be used when an organization is using SNS is below.

{
    "sendrules": [{ "name": "Order", "key": "OrdersTopic" }],
    "receiverules": [{ "name": "ProductUpdates",
                       "key": "Products",
                       "endpoint": "$URL/events/product" }]
}

The code in the application would then ensure that every entry in the sendrules property has the appropriate topic created, so using the example above the name value represents the topic name and the key value represents the value that will be used within the application to map to the “Order” topic in SNS. The code in the application would then evaluate the receiverules value and create subscriptions for each entry.
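A sketch of such in-application code, using the SNS client to realize the configuration above (the SendRule and ReceiveRule types are our own assumptions modeled on that JSON):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;

// Sketch: ensure topics and subscriptions described in configuration exist.
// SendRule and ReceiveRule are assumptions modeled on the JSON above.
public record SendRule(string Name, string Key);
public record ReceiveRule(string Name, string Key, string Endpoint);

public static class EventInfrastructure
{
    public static async Task EnsureAsync(
        IAmazonSimpleNotificationService sns,
        IEnumerable<SendRule> sendRules,
        IEnumerable<ReceiveRule> receiveRules)
    {
        var topicArns = new Dictionary<string, string>();

        foreach (var rule in sendRules)
        {
            // CreateTopicAsync is idempotent: it returns the existing
            // topic's ARN if a topic with that name already exists.
            var topic = await sns.CreateTopicAsync(rule.Name);
            topicArns[rule.Key] = topic.TopicArn;
        }

        foreach (var rule in receiveRules)
        {
            var topic = await sns.CreateTopicAsync(rule.Name);
            await sns.SubscribeAsync(topic.TopicArn, "https", rule.Endpoint);
        }
    }
}
```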

This seems like a lot of extra work, but for environments that do not support IaC, this may be the easiest way to allow developers to manage the building of the event infrastructure. We have seen this approach built as a framework library included in every application that used events; every application provided a configuration file that represented the messages it was sending and receiving, and the framework library would evaluate the service(s) to see whether anything needed to be added and, if so, add it.

.NET and Amazon Simple Notification Service (SNS)

SNS, as you can probably guess from its name, is a straightforward service that uses pub\sub messaging to deliver messages. Pub\Sub, or Publish\Subscribe, messaging is an asynchronous communication method. This model includes the publisher who sends the data, a subscriber that receives the data, and the message broker that handles the coordination between the publisher and subscriber. In this case, Amazon SNS is the message broker because it handles the message transference from publisher to subscriber.

Note – The language used when looking at events and messaging as we did above can be confusing. Messaging is the pattern we discussed above. Messages are the data being sent and are part of both events and messaging. The term “message” is considered interchangeable with notification or event – even to the point where you will see articles about the messaging pattern that refer to the messages as events.

The main responsibility of the message broker is to determine which subscribers should be sent what messages. It does this using a topic. A topic can be thought of as a category that describes the data contained within the message. These topics are defined based on your business. There will be times that a broad approach is best, so perhaps topics for “Order” and “Inventory” where all messages for each topic are sent. Thus, the order topic could include messages for “Order Placed” and “Order Shipped” and the subscribers will get all of those messages. There may be other times where a very narrow focus is more appropriate, in which case you may have an “Order Placed” topic and an “Order Shipped” topic where systems can subscribe to them independently. Both approaches have their strength and weaknesses.

When you look at the concept of messaging, where one message has one recipient, the advantage that a service like SNS offers is the ability to distribute a single message to multiple recipients as shown in Figure 1, which is one of the key requisites of event-based architecture.

Figure 1. Pub\Sub pattern using Amazon SNS

Now that we have established that SNS can be effectively used when building in an event-based architecture, let’s go do just that!

Using AWS Toolkit for Visual Studio

If you’re a Visual Studio user, you can do a lot of the configuration and management through the toolkit. Going into Visual Studio and examining the AWS Explorer will show that one of the options is Amazon SNS. At this point, you will not be able to expand the service in the tree control because you have not yet started to configure it. Right-clicking on the service will bring up a menu with three options, Create topic, View subscriptions, and Refresh. Let’s get started by creating our first topic. Click on the Create topic link and create a topic. We created a topic named “ProDotNetOnAWS” – it seems to be a trend with us. Once you save the topic you will see it show up in the AWS Explorer.

Right-click on the newly created topic and select to View topic. This will add the topic details screen into the main window as shown in Figure 2.

Figure 2. SNS topic details screen in the Toolkit for Visual Studio

In the details screen, you will see a button to Create New Subscription. Click this button to bring up the Create New Subscription popup window. There are two fields that you can complete, Protocol and Endpoint. The protocol field is a dropdown and contains various choices.

HTTP or HTTPS Protocol Subscription

The first two of these choices are HTTP and HTTPS. Choosing one of these protocols will result in SNS making an HTTP (or HTTPS) POST to the configured endpoint. This POST will result in a JSON document with the following name-value pairs.

·         Message – the content of the message that was published to the topic

·         MessageId – A universally unique identifier for each message that was published

·         Signature – Base64-encoded signature of the Message, MessageId, Subject, Type, Timestamp, and TopicArn values.

·         SignatureVersion – version of the signature used

·         SigningCertUrl – the URL of the certificate that was used to sign the message

·         Subject – the optional subject parameter used when the notification was published to a topic. In those examples where the topic is broadly-based, the subject can be used to narrow down the subscriber audience.

·         Timestamp – the time (GMT) when the notification was published.

·         TopicARN – the Amazon Resource Name (ARN) for the topic

·         Type – the type of message being sent. For an SNS message, this type is Notification.

At a minimum, your subscribing system will care about the message, as this message contains the information that was provided by the publisher. One of the biggest advantages of using an HTTP or HTTPS protocol subscription is that the system that is subscribed does not have to do anything other than accept the message that is submitted. There is no special library to consume, no special interactions that must happen, just an endpoint that accepts requests.
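Since the POST body is plain JSON, receiving it requires nothing more than deserializing the fields listed above. A minimal sketch of a payload type and parse helper (the class and method names are our own; the property names match the JSON fields):

```csharp
using System.Text.Json;

// Sketch of the SNS HTTP/HTTPS POST payload described above. The class
// and method names are our own; the property names match the JSON fields.
public class SnsPayload
{
    public string Type { get; set; }
    public string MessageId { get; set; }
    public string TopicArn { get; set; }
    public string Subject { get; set; }
    public string Message { get; set; }
    public string Timestamp { get; set; }
}

public static class SnsPayloadParser
{
    public static SnsPayload Parse(string json) =>
        JsonSerializer.Deserialize<SnsPayload>(json);
}
```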

There are some considerations to keep in mind as you think about using SNS to manage your event notifications, because there are several different ways to manage the receipt of these notifications. The first is to create a single endpoint for each topic to which you subscribe. This makes each endpoint very discrete and only responsible for handling one thing, usually considered a plus in the programming world. However, it also means the subscribing service takes on external dependencies on multiple endpoints; changing an endpoint URL, for example, will now require coordination across multiple systems.

On the other hand, there is another approach where you create a single endpoint that acts as the recipient of messages across multiple topics. The code within the endpoint identifies each message and then forwards it through the appropriate process. This approach abstracts away any work within the system, as all of those changes happen below this single broadly bound endpoint. We have seen both of these approaches work successfully; it really comes down to your own business needs and how you see your systems evolving as you move forward.

Other Protocol Subscriptions

There are other protocol subscriptions that are available in the toolkit. The next two in the list are Email and Email (JSON). Notifications sent under these protocols are sent to the email address that is entered as the endpoint value. The email is formatted in one of two ways: either the Message field of the notification becomes the body of the email, or the email body is a JSON object very similar to that used when working with the HTTP\HTTPS protocols. There are some business-to-business needs for this, such as sending a confirmation to a third party upon processing an order, but you will generally find any discussion of these two protocols under Application-to-Person (A2P) in the documentation and examples.

The next protocol that is available in the toolkit is Amazon SQS. Amazon Simple Queue Service (SQS) is a queue service that follows the messaging pattern that we discussed earlier where one message has one recipient and one recipient only.

The last protocol available in the toolkit is Lambda. Choosing this protocol means that a specified Lambda function will be called with the message payload being set as an input parameter. This option makes a great deal of sense if you are building a system based on serverless functions. Of course, you can also use HTTP\HTTPS protocol and make the call to the endpoint that surfaces the Lambda method; but using this direct approach will remove much of that intermediate processing.

Choosing either the SQS or Lambda protocols will activate the Add permission for SNS topic to send messages to AWS resources checkbox as shown in Figure 3.

Figure 3. Create New Subscription window in the Toolkit for Visual Studio

Checking this box will create the necessary permissions allowing the topic to interact with AWS resources. This is not necessary if you are using HTTP\HTTPS or Email.

For the sake of this walk-through, we used an approach that is ridiculous for enterprise systems; we selected the Email (JSON) protocol. Why? So we could easily show you the next few steps in a way that you could easily duplicate. This is important because all you can do in the Toolkit is to create the topic and the subscription. However, as shown in Figure 4, this leaves the subscription in a PendingConfirmation state.

Figure 4. Newly created SNS topic subscription in Toolkit for Visual Studio

Subscriptions in this state are not yet fully configured, as they need to be confirmed before they are able to start receiving messages. Confirmation happens after a SubscriptionConfirmation message is sent to the endpoint, which happens automatically when creating a new subscription through the Toolkit. The JSON we received in email is shown below:

{
  "Type" : "SubscriptionConfirmation",
  "MessageId" : "b1206608-7661-48b1-b82d-b1a896797605",
  "Token" : "TOKENVALUE", 
  "TopicArn" : "arn:aws:sns:xxxxxxxxx:ProDotNetOnAWS",
  "Message" : "You have chosen to subscribe to the topic arn:aws:sns:xxxxxxx:ProDotNetOnAWS.\nTo confirm the subscription, visit the SubscribeURL included in this message.",
  "SubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=ConfirmSubscription&TopicArn=xxxxxxxx",
  "Timestamp" : "2022-08-20T19:18:27.576Z",
  "SignatureVersion" : "1",
  "Signature" : "xxxxxxxxxxxxx==",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-56e67fcb41f6fec09b0196692625d385.pem"
}

The Message indicates the action that needs to be taken – you need to visit the SubscribeURL that is included in the message. Clicking that link will bring you to a confirmation page in your browser like that shown in Figure 5.

Figure 5. Subscription confirmation message displayed in browser

Refreshing the topic in the Toolkit will show you that the PendingConfirmation message is gone and has been replaced with a real Subscription ID.
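When the subscriber is a service rather than a person, clicking the link is not practical. The endpoint can instead confirm itself when it receives a SubscriptionConfirmation message, either by issuing an HTTP GET to the SubscribeURL or by calling the SDK's ConfirmSubscriptionAsync method with the TopicArn and Token values from the same message. A sketch of the first approach (the subscribeUrl variable is assumed to hold the SubscribeURL value):

```csharp
using System.Net.Http;

// Sketch: confirm an SNS subscription by visiting the SubscribeURL taken
// from the SubscriptionConfirmation message (held here in subscribeUrl).
using var http = new HttpClient();
var confirmResponse = await http.GetAsync(subscribeUrl);
confirmResponse.EnsureSuccessStatusCode();
```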

Using the Console

The process for using the console is very similar to the process we just walked through in the Toolkit. You can get to the service by searching in the console for Amazon SNS or by going into the Application Integration group under the services menu. Once there, select Create topic. At this point, you will start to see some differences in the experiences.

The first is that you have a choice on the topic Type, as shown in Figure 6. You can select from FIFO (first-in, first-out) and Standard. FIFO is selected by default. However, selecting FIFO means that the service will follow the messaging architectural approach that we went over earlier: messages are delivered exactly once and message ordering is strictly preserved. The Standard type, on the other hand, supports at-least-once message delivery, does not guarantee ordering, and supports the full range of subscription protocols.

Figure 6. Creating an SNS topic in the AWS Console

Figure 6 also displays a checkbox labeled Content-based message deduplication. This selection is only available when the FIFO type is selected. When checked, SNS uses the content of the message to generate the deduplication value it uses to determine whether a particular message has already been delivered; otherwise, the publisher must provide a unique deduplication value with each message.

Another difference between creating a topic in the console vs in the toolkit is that you can optionally set preferences around message encryption, access policy, delivery status logging, delivery retry policy (HTTP\S), and, of course, tags. Let’s look in more detail at two of those preferences. The first of these is the Delivery retry policy. This allows you to set retry rules for how SNS will retry sending failed deliveries to HTTP/S endpoints. These are the only endpoints that support retry. You can manage the following values:

·         Number of retries – defaults to 3 but can be any value between 1 and 100

·         Retries without delay – defaults to 0 and represents how many of those retries should happen before the system waits for a retry

·         Minimum delay – defaults to 20 seconds with a range from 1 to the value of the Maximum delay.

·         Maximum delay – defaults to 20 seconds with a range from the Minimum delay to 3,600.

·         Retry backoff function – defaults to linear. There are four options, Exponential, Arithmetic, Linear, and Geometric. Each of those functions processes the timing for retries differently. You can see the differences between these options at https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html.

The second preference that is available in the console but not the toolkit is Delivery status logging. This preference will log delivery status to CloudWatch Logs. You have two values to determine. The first is Log delivery status for these protocols, which presents a series of checkboxes for AWS Lambda, Amazon SQS, HTTP/S, Platform application endpoint, and Amazon Kinesis Data Firehose. These last two options are a preview of the next big difference between working through the toolkit and working through the console.

Additional Subscriptions in the Console

Once you have finished creating the topic, you can then create a subscription. There are several protocols available for use in the console that are not available in the toolkit. These include:

·         Amazon Kinesis Data Firehose – configure this subscription to go to Kinesis Data Firehose. From there you can send notifications to Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and third-party service providers such as Datadog, New Relic, MongoDB, and Splunk.

·         Platform-application endpoint – this protocol sends the message to an application on a mobile device. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts. Go to https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-application-as-subscriber.html for more information on configuring your SNS topic to deliver to a mobile device.

·         SMS – this protocol delivers text messages, or SMS messages, to SMS-enabled devices. Amazon SNS supports SMS messaging in several regions, and you can send messages to more than 200 countries and regions. An interesting aspect of SMS is that your account starts in an SMS sandbox, a non-production environment with a set of limits. Once you are convinced that everything is correct, you must create a case with AWS Support to move your account out of the sandbox and actually start sending messages to non-limited numbers.
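These console-only protocols can also be wired up from code. As a sketch, the snippet below subscribes a phone number to a topic using the AWSSDK.SimpleNotificationService package; the topic ARN and phone number are placeholders for illustration only.

```csharp
using System;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

class SubscribeSmsExample
{
    static async Task Main()
    {
        var client = new AmazonSimpleNotificationServiceClient();

        // Both values below are placeholders for illustration only
        var request = new SubscribeRequest
        {
            TopicArn = "arn:aws:sns:us-west-2:123456789012:ExampleTopic",
            Protocol = "sms",
            Endpoint = "+12065550100" // phone number in E.164 format
        };

        var response = await client.SubscribeAsync(request);
        Console.WriteLine($"Subscription ARN: {response.SubscriptionArn}");
    }
}
```

Remember that while your account is still in the SMS sandbox, only verified destination numbers will actually receive the message.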

Now that we have configured our SNS topic and subscription, let's next look at sending a message.

.NET and Amazon SNS

The first step to interacting with SNS from within your .NET application is to install the appropriate NuGet package, AWSSDK.SimpleNotificationService. This will also install AWSSDK.Core. Once you have the NuGet package, you can access the appropriate APIs by adding several using statements:

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

These namespaces provide access to the AmazonSimpleNotificationServiceClient class that manages the interaction with the SNS service as well as the models that are represented in the client methods. There are a lot of different types of interactions that you can support with this client. A list of the more commonly used methods is displayed below:

·         PublishAsync – Send a message to a specific topic for processing by SNS.

·         PublishBatchAsync – Send multiple messages to a specific topic for processing by SNS.

·         SubscribeAsync – Subscribe a new endpoint to a topic.

·         UnsubscribeAsync – Remove an endpoint's subscription to a topic.

These four methods allow you to add and remove subscriptions as well as publish messages. There are dozens of other methods available from that client, including the ability to manage topics and confirm subscriptions.

The code below is a complete console application that sends a message to a specific topic.

static async Task Main(string[] args)
{
    string topicArn = "Arn for the topic to publish";
    string messageText = "Message from ProDotNetOnAWS_SNS";

    // The client picks up credentials and region from your AWS profile
    var client = new AmazonSimpleNotificationServiceClient();

    // The request carries everything being sent to the recipients;
    // here that is just a subject and the message text
    var request = new PublishRequest
    {
        TopicArn = topicArn,
        Message = messageText,
        Subject = Guid.NewGuid().ToString()
    };

    var response = await client.PublishAsync(request);

    Console.WriteLine(
       $"Published message ID: {response.MessageId}");

    Console.ReadLine();
}

As you can see, the topic must be identified by its ARN rather than simply the topic name. Publishing a message entails instantiating the client and then defining a PublishRequest object. This object contains all of the fields that we intend to send to the recipient, which in our case is simply the subject and message. Running the application presents a console as shown in Figure 7.

Figure 7. Console application that sent message through SNS
Figure 7. Console application that sent message through SNS

The message that was processed can be seen in Figure 8. Note the MessageId values are the same as in Figure 7.

Figure 8. Message sent through console application
Figure 8. Message sent through console application
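Where PublishAsync sends one message at a time, PublishBatchAsync lets you send up to ten messages to a topic in a single call. The sketch below shows what that could look like; the topic ARN is a placeholder, and each entry needs an Id that is unique within the batch.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

class BatchPublishExample
{
    static async Task Main()
    {
        var client = new AmazonSimpleNotificationServiceClient();

        var request = new PublishBatchRequest
        {
            TopicArn = "Arn for the topic to publish", // placeholder
            PublishBatchRequestEntries = new List<PublishBatchRequestEntry>
            {
                // Each Id must be unique within the batch; SNS uses it
                // to report per-message success or failure in the response
                new PublishBatchRequestEntry { Id = "msg-1", Message = "First message" },
                new PublishBatchRequestEntry { Id = "msg-2", Message = "Second message" }
            }
        };

        var response = await client.PublishBatchAsync(request);
        Console.WriteLine(
            $"{response.Successful.Count} succeeded, {response.Failed.Count} failed");
    }
}
```

Because the response reports success and failure per entry, you should check the Failed collection rather than assume the whole batch was delivered.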

We have only touched on the capabilities of Amazon SNS and its capacity to help implement event-driven architecture. However, there is another AWS service that is even more powerful, Amazon EventBridge. Let’s look at that next.

Modern .NET Application Design

In this post we will go over several modern application design architectures, with the predominant one being event and message-based architecture. After this mostly theoretical discussion, we will move into practical implementation. We will do this by going over two different AWS services. The first of these services, Amazon Simple Notification Service (SNS), is a managed messaging service that allows you to decouple publishers from subscribers. The second service is Amazon EventBridge, which is a serverless event bus. As we are going over each, we will also review the inclusion of these services into a .NET application so that you can see how it works.

Modern Application Design

The growth of the public cloud and its ability to quickly scale computing resources up and down has made building complex systems much easier. Let's start by looking at what the Microservice Extractor for .NET does for you. For those of you unaware of this tool, you can check out the user guide or see a blog article on its use. Basically, the tool analyzes your code and helps you determine what areas of the code you can split out into a separate microservice. Figure 1 shows the initial design and the design after the extractor was run.

Figure 1. Pre and Post design after running the Microservice Extractor
Figure 1. Pre and Post design after running the Microservice Extractor

Why is this important? Well, consider the likely usage of this system. If you think about a typical e-commerce system, you will see that the inventory logic, the logic that was extracted, is a highly used set of logic. It is needed when working with the catalog pages, with orders, and with the shopping cart. This means that this logic may act as a bottleneck for the entire application. Getting around this with the initial design means you would need to deploy additional copies of the entire web application to ease the load and minimize this bottleneck.

Evolving into Microservices

However, the extractor allows us to use a different approach. Instead of horizontally scaling the entire application, scale the set of logic that gets the most use. This allows you to minimize the number of resources necessary to keep the application running optimally. There is another benefit to this approach: you now have an independently managed application, which means that it can have its own development and deployment processes and can be interacted with independently of the rest of the application stack. This means that a fully realized microservices approach could look more like that shown in Figure 2.

Figure 2. Microservices-based system design
Figure 2. Microservices-based system design

This approach allows you to scale each web service as needed. You may only need one “Customer” web service running but need multiples of the “Shopping Cart” and “Inventory” services running to ensure performance. This approach means you can also do work in one of the services, say “Shopping Cart”, and not have to worry about testing anything within the other services because those won’t have been impacted – and you can be positive of that because they are completely separate code bases.

This more decoupled approach also allows you to manage business changes more easily.

Note – Tightly coupled systems have dependencies between the systems that affect the flexibility and reusability of the code. Loosely coupled, or decoupled, systems have minimal dependencies between each other and allow for greater code reuse and flexibility.

Consider Figure 3 and what it would have taken to build this new mobile application with the “old-school” approach. There most likely would have been some duplication of business logic, which means that as each of the applications evolve, they would likely have drifted apart. Now, that logic is in a single place so it will always be the same for both applications (excluding any logic put into the UI part of the application that may evolve differently – but who does that?)

Figure 3. Microservices-based system supporting multiple applications
Figure 3. Microservices-based system supporting multiple applications

One look at Figure 3 shows how this system is much more loosely coupled than was the original application. However, there is still a level of coupling within these different subsystems. Let’s look at those next and figure out what to do about them.

Deep Dive into Decoupling

Without looking any deeper into the systems than the drawing in Figure 3, you should see one aspect of tight coupling that we haven’t addressed: the “Source Database.” Yes, this shared database indicates that there is still a less-than-optimal coupling between the different web services. Think about how we used the Extractor to pull out the “Inventory” service so we could scale it independently of the regular application. We did not do the same to the database that is being accessed by all these web services. So, we still have that quandary, only at the database layer rather than at the business logic layer.

The next logical step in decoupling these systems would be to break out the database responsibilities as well, resulting in a design like that shown in Figure 4.

Figure 4. Splitting the database to support decoupled services
Figure 4. Splitting the database to support decoupled services

Unfortunately, it is not that easy. Think about what is going on within each of these different services; how useful is a “Shopping Cart” or an “Order” without any knowledge of the “Inventory” being added to the cart, or sold? Sure, those services do not need to know everything about “Inventory”, but they need to either interact with the “Inventory” service or go directly into the database to get information. These two options are shown in Figure 5.

Figure 5. Sharing data through a shared database or service-to-service calls
Figure 5. Sharing data through a shared database or service-to-service calls

As you can see, however, we have just added back in some coupling, as in either approach the “Order” service and “Shopping Cart” service now have dependencies on the “Inventory” service in some form or another. However, this may be unavoidable based on certain business requirements – those requirements that mean that the “Order” needs to know about “Inventory.” Before we stress out too much about this design, let’s further break down this “need to know” by adding in an additional consideration about when the application needs to know about the data. This helps us understand the required consistency.  

Strong Consistency

Strong consistency means that all applications and systems see the same data at the same time. The approaches in Figure 5 represent this because, regardless of whether you are calling the database directly or through the web service, you are seeing the most current set of the data, and it is available immediately after the data is persisted. There may easily be business requirements where that is necessary. However, there may just as easily be requirements where a slight delay between the “Inventory” service and the “Shopping Cart” service knowing information is acceptable.

For example, consider how a change in inventory availability (the quantity available for the sale of a product) may affect the shopping cart system differently than the order system. The shopping cart represents items that have been selected as part of an order, so inventory availability is important to it – it needs to know that the items are available before those items can be processed as an order. But when does it need to know that? That’s where the business requirements come into play. If the user must know about the change right away, that will likely require some form of strong consistency. If, on the other hand, the inventory availability is only important when the order is placed, then strong consistency is not as necessary. That means there may be a case for eventual consistency.

Eventual Consistency

As the name implies, data will be consistent within the various services eventually – not right away. This difference may be as slight as milliseconds or it can be seconds or even minutes, all depending upon business needs and system design. The smaller the timeframe necessary, the more likely you will need to use strong consistency. However, there are plenty of instances where seconds and even minutes are ok. An order, for example, needs some information about a product so that it has context. This could be as simple as the product name or more complex relationships such as the warehouses and storage locations for the products. But the key factor is that changes in this data are not really required to be available immediately to the order system. Does the order system need to know about a new product added to the inventory list? Probably not – as it is highly unlikely that this new product will be included in an order within milliseconds of becoming active.  Being available within seconds should be just fine. Figure 6 shows a time series graph of the differences between strong and eventual consistency.

Figure 6. Time series showing the difference between strong and eventual consistency
Figure 6. Time series showing the difference between strong and eventual consistency

What does the concept of eventual consistency mean when we look at Figure 5, showing how these three services can have some coupling? It gives us the option for a paradigm shift. Our assumption up to this time is that data is stored in a single source, whether all the data is stored in one big database or each service has its own database – such as the Inventory service “owning” the Inventory database that stores all the inventory information. Thus, any system needing inventory data would have to go through these services\databases in some way.

This means our paradigm understands and accepts the concept of a microservice being responsible for maintaining its own data – the relationship between the inventory service and the inventory database. Our paradigm shift is around the definition of the data that should be persisted in each microservice’s database. For example, currently the order system stores only data that describes orders – which is why we need the ability to somehow pull data from the inventory system. However, this other information is obviously critical to the order, so instead of making the call to the inventory system we instead store that critical inventory-related data in the order system. Think what that would be like.

Oh No! Not duplicated data!

Yes, this means some data may be saved in multiple places. And you know what? That’s ok. Because it is not going to be all the data, but just those pieces of data that the other systems may care about. That means the databases in a system may look like those shown in Figure 7 where there may be overlap in the data being persisted.

Figure 7. Data duplication between databases
Figure 7. Data duplication between databases

This data overlap or duplication is important because it eliminates the coupling that we identified when we realized that the inventory data was important to other systems. By including the interesting data in each of the subsystems, we no longer have that coupling, and that means our system will be much more resilient.
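To make the idea concrete, here is a minimal sketch of what that “interesting data” might look like inside the order system; the types and fields are hypothetical. The order keeps its own snapshot of the inventory fields it cares about, captured when the order was placed, rather than calling back into the inventory service.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical order-system types. ProductId still references the
// inventory system, but the name and price are snapshots copied into
// the order database when the order was created.
public record OrderLine(
    string ProductId,
    string ProductName,   // duplicated from inventory
    decimal UnitPrice,    // duplicated from inventory
    int Quantity);

public record Order(string OrderId, List<OrderLine> Lines)
{
    // The order can be priced with no call to the inventory service
    public decimal Total => Lines.Sum(l => l.UnitPrice * l.Quantity);
}
```

Notice that the order never needs the full inventory record – warehouse locations, reorder thresholds, and so on stay in inventory-land; only the handful of fields the order genuinely cares about are duplicated.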

If we continued to have that dependency between systems, then an outage in the inventory system means that there would also be an outage in the shopping cart and order systems, because those systems have that dependency upon the inventory system for data. With this data being persisted in multiple places, an outage in the inventory system will NOT cause any outage in those other systems. Instead, those systems will continue to happily plug along without any concern for what is going on over in inventory-land.  It can go down, whether intentionally because of a product release or unintentionally, say by a database failure, and the rest of the systems continue to function. That is the beauty of decoupled systems, and why modern system architectural design relies heavily on decoupling business processes.

We have shown the importance of decoupling and how the paradigm shift of allowing some duplication of data can lead to that decoupling. However, we haven’t touched on how we would do this. In this next section, we will go into one of the most common ways to drive this level of decoupling and information sharing.

Designing a messaging or event-based architecture

The key to this level of decoupling requires that one system notify the other systems when data has changed. The most powerful method for doing this is through either messaging or events. While both messaging and events provide approaches for sending information to other systems, they represent different forms of communication and different rules that they should follow.

Messaging

Conceptually, the differences are straightforward. Messaging is used when:

·         Transient Data is needed – this data is only stored until the message consumer has processed the message or it hits a timeout or expiration period.

·         Two-way Communication is desired – also known as a request\reply approach, one system sends a request message, and the receiving system sends a response message in reply to the request.

·         Reliable Targeted Delivery – Messages are generally targeted to a specific entity. Thus, by design, a message can have one and only one recipient as the message will be removed from the queue once the first system processes it.

Even though messages tend to be targeted, they provide decoupling because there is no requirement that the targeted system is available when the message is sent. If the target system is down, then the message will be stored until the system is back up and accepting messages. Any missed messages will be processed in a First In – First Out process and the targeted system will be able to independently catch up on its work without affecting the sending system.

When we look at the decoupling we discussed earlier, it becomes apparent that messaging may not be the best way to support eventual consistency as there is more than one system that could be interested in the data within the message. And, by design, messaging isn’t a big fan of this happening. So, with these limitations, when would messaging make sense?

Note: There are technical design approaches that allow you to send a single message that can be received by multiple targets. This is done through a recipient list, where the message sender sends a single message and then there is code around the recipient list that duplicates that message to every target in the list. We won’t go into these approaches here.

The key thing to consider about messaging is that it focuses on assured delivery and once-and-only-once processing. This provides insight into the types of operations best supported by messaging. An example may be the web application submitting an order. Think of the chaos if this order was received and processed by some services but not the order service. Instead, this submission should be a message targeted at the order service. Sure, in many instances we handle this as an HTTP request (note the similarities between a message and an HTTP request), but that may not always be the best approach. Instead, our ordering system sends a message that is assured of delivery to a single target.
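A sketch of that targeted order message, using Amazon SQS via the AWSSDK.SQS NuGet package (a different package than the ones covered in this post), could look like the following; the queue URL and order payload are hypothetical.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

class SendOrderMessageExample
{
    static async Task Main()
    {
        var client = new AmazonSQSClient();

        // Hypothetical order payload; in practice this would be your order model
        var order = new { OrderId = "12345", CustomerId = "98765" };

        var request = new SendMessageRequest
        {
            // Hypothetical URL of the order service's queue
            QueueUrl = "https://sqs.us-west-2.amazonaws.com/123456789012/order-queue",
            MessageBody = JsonSerializer.Serialize(order)
        };

        // The message is stored durably until the order service consumes
        // it, even if that service is down at the moment we send it
        var response = await client.SendMessageAsync(request);
        Console.WriteLine($"Message ID: {response.MessageId}");
    }
}
```

The sender's only dependency is the queue itself; whether the order service is up, down, or catching up on a backlog does not affect the sending application.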

Events

Events, on the other hand, are traditionally used to represent “something that happened” – an action performed by the service that some other systems may find interesting. Events are for when you need:

·         Scalable consumption – multiple systems may be interested in the content within a single event

·         History – the history of the “thing that happened” is useful. Generally, the database will provide the current state of information. The event history provides insight into when and what caused changes to that data. This can be very valuable insight.

·         Immutable data – since an event represents “something that already happened” the data contained in an event is immutable – that data cannot be changed. This allows for very accurate tracing of changes, including the ability to recreate database changes.

Events are generally designed to be sent by a system, with that system having no concern about whether other systems receive the event or act upon it. The sender fires the event and then forgets about it.
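In .NET terms, a C# record is a natural fit for modeling an event, since its positional properties are init-only; the event type and fields below are hypothetical.

```csharp
using System;

// Hypothetical event published by the inventory system. Because a
// record's positional properties are init-only, the event's data
// cannot be changed after it is created - it is a statement of
// "something that already happened."
public record InventoryChanged(
    string ProductId,
    int QuantityAvailable,
    DateTime OccurredAtUtc);
```

A consumer that wants a modified copy must create a new instance (for example, via a with expression); the original event is never mutated, which preserves the accurate change history described above.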

When you consider the decoupled design that we worked through earlier, it quickly becomes obvious that events are the best approach for providing changed inventory data to the other systems. In the next article we will jump right into Amazon Simple Notification Service (SNS) and talk more about events within our application, using SNS as our guide.