What kind of “NoSQL” are you talking about?

Too many times I hear NoSQL databases talked about as if they were a single way of “doing business.” However, there are multiple types of NoSQL databases, each with its own specializations and its own approach to persisting and accessing information. So just saying “use NoSQL” isn’t really all that useful.

With that in mind, let’s take a look at the various choices that you have to consider. The main flavors of NoSQL databases are document databases, key-value stores, column-oriented databases, and graph databases. There are also databases that are a combination of these, and even some other highly specialized databases that do not depend on relationships and SQL, so they could probably be considered NoSQL as well. That being said, let’s look at those four primary types of NoSQL databases.

Document Databases

A document database stores data as JSON (as shown in Figure 1), BSON, or XML. BSON stands for “Binary JSON.” BSON’s binary structure allows for the encoding of type and length information, which tends to allow the data to be parsed much more quickly and helps support indexing and querying into the properties of the document being stored.

Figure 1. Document database interpretation of object

Documents are typically stored in a format that is much closer to the data model being worked with in the application, as the example above shows. Additional functionality supported by document databases tends to be around ways of indexing and/or querying data within the document. The Id will generally be the key used for primary access, but imagine a case where you want to search the data for all instances of a Person with a FirstName of “Steve”. In SQL that is easy, because that data is segregated into its own column. It is more complex in a document database, because that query must now go rummaging around the data within the document, which, as you can probably imagine, will be slow without a lot of work around query optimization. Common examples of document databases include MongoDB, Amazon DocumentDB, and Microsoft Azure Cosmos DB.
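
As a rough illustration (a sketch of mine, not part of the original example), here is what that kind of FirstName search could look like with the MongoDB .NET driver, assuming a local MongoDB instance and a hypothetical “people” collection; the index created at the end is exactly the sort of query-optimization work mentioned above:

using MongoDB.Bson;
using MongoDB.Driver;

// Assumed connection details and collection name, for illustration only.
var client = new MongoClient("mongodb://localhost:27017");
var people = client.GetDatabase("sample").GetCollection<BsonDocument>("people");

// Filter on a property stored inside the document rather than on a dedicated column.
var filter = Builders<BsonDocument>.Filter.Eq("FirstName", "Steve");
var steves = await people.Find(filter).ToListAsync();

// Without an index this is a scan of every document; creating one pays that price
// up front so that later FirstName lookups are fast.
await people.Indexes.CreateOneAsync(
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Ascending("FirstName")));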

Key-Value Stores

A key-value store is the simplest of the NoSQL databases. In these types of databases, every element in the database is stored as a simple key-value pair made up of an attribute, or key, and the value, which can be anything. Literally anything. It can be a simple string, a JSON document like the ones stored in a document database, or even something very unstructured like an image or some other blob of binary data.

The difference between document databases and key-value databases is generally their focus. Document databases focus on ways of parsing the document so that users can do things like make decisions based on the data within that document. Key-value databases are very different. They care much less about what the value is, so they tend to focus on storing and accessing the value as quickly as possible and on making sure that they can easily scale to store a very large number of different values. Examples of key-value databases include Amazon DynamoDB, BerkeleyDB, Memcached, and Redis.

Those of you familiar with the concept of caching, where you create a high-speed data storage layer that holds a subset of data so that accessing that data is more responsive than calling directly into the database, will recognize two products in that list that are commonly used for caching: Memcached and Redis. They are excellent examples of the speed vs. document-analysis trade-off between these two types of databases.
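
To make the key-value idea concrete, here is a minimal cache-style sketch (mine, not from the original text) using the StackExchange.Redis client against an assumed local Redis instance; the key name and JSON payload are made up:

using System;
using StackExchange.Redis;

// Connect to an assumed local Redis instance on the default port.
var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
IDatabase cache = redis.GetDatabase();

// The key is the attribute; the value can be anything - here, a JSON string.
await cache.StringSetAsync(
    "person:100",
    "{\"FirstName\":\"Bob\",\"LastName\":\"Smith\"}",
    expiry: TimeSpan.FromMinutes(5));

// Cache-aside read: try the key first and fall back to the database on a miss.
RedisValue cached = await cache.StringGetAsync("person:100");
if (!cached.HasValue)
{
    // Miss: load from the system of record and repopulate the cache here.
}

Notice that the store never looks inside the value; it simply hands back whatever was put under the key, which is exactly the speed-over-analysis trade-off described above.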

Column-Oriented Databases

Yes, the name of this type of NoSQL database may initially be confusing, because you may think of how data in relational databases is broken down into columns. However, the data is accessed and managed differently. In a relational database, an item is best thought of as a row, with each column containing an aspect or value referring to a specific part of that item. A column-oriented database instead stores data tables by column rather than by row.

Let’s take a closer look at that, since it sounds pretty crazy. Figure 2 shows a snippet of a data table like the ones you are used to seeing in a relational database system, where the RowId is an internal value used to refer to the data (only within the system) while the Id is the external value used to refer to the data.

Figure 2. Data within a relational database table, stored as a row.

Since row-oriented systems are designed to efficiently return data for a row, this data could be saved to disk as:

1:100,Bob,Smith;
2:101,Bill,Penberthy;

In a column-oriented database, however, this data would instead be saved as

100:1,101:2;
Bob:1,Bill:2;
Smith:1,Penberthy:2

Why would they take that kind of approach? Mainly, speed of access when pulling a subset of values for an “item”. Let’s look at an index for a row-based system. An index is a structure associated with a database table that speeds the retrieval of rows from that table. Typically, the index sorts the list of values so that it is easier to find a specific value. If you consider the data in Figure 2 and know that it is very common to query on a specific LastName, then creating a database index on the LastName field means that a query looking for “Penberthy” would not have to scan every row in the table to find the data, as that price was paid when building the index. Thus, an index on the LastName field would look something like the snippet below, where all LastNames have been alphabetized and stored independently:

Penberthy:2;
Smith:1

If you look at this definition of an index and look back at the column-oriented data, you will see a very similar approach. What that means is that when you are accessing a limited subset of data, or the data is only sparsely populated, a column-oriented database will most likely be much faster than a row-oriented database. If, on the other hand, you tend to use a broader set of the data, then storing that information in rows will likely be much more performant. Examples of column-oriented systems include Apache Kudu, Amazon Redshift, and Snowflake.
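
As a toy illustration of that similarity (again, my own sketch rather than anything from the article), the column-oriented layout can be modeled as one small “value to row id” structure per field, which is essentially the index snippet above:

using System;
using System.Collections.Generic;

// Row-oriented: each record is stored (and read back) as a whole.
var rows = new[]
{
    new { Id = 100, FirstName = "Bob",  LastName = "Smith" },
    new { Id = 101, FirstName = "Bill", LastName = "Penberthy" },
};

// Column-oriented: the LastName field becomes its own "value -> row id" structure,
// which looks just like the LastName index shown earlier.
var lastNameColumn = new SortedDictionary<string, int>
{
    ["Penberthy"] = 2,
    ["Smith"] = 1,
};

// Finding the row id for LastName == "Penberthy" touches only this one structure;
// none of the Id or FirstName data has to be read at all.
if (lastNameColumn.TryGetValue("Penberthy", out int rowId))
{
    Console.WriteLine($"Found in row {rowId} of {rows.Length}");
}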

Graph Databases

The last major type of NoSQL database is the graph database. A graph database focuses on storing nodes and relationships. A node is an entity; it is generally stored with a label, such as Person, and contains a set of key-value pairs, or properties. That means you can think of a node as being very similar to a document as stored in a document database. However, a graph database takes this much further, as it also stores information about relationships.

A relationship is a directed and named connection between two different nodes, such as Person Knows Person. A relationship always has a direction, a starting node, an ending node, and a type of relationship. A relationship also can have a set of properties, just like a node. Nodes can have any number or type of relationships and there is no effect on performance when querying those relationships or nodes. Lastly, while relationships are directed, they can be navigated efficiently in either direction.

Let’s take a closer look at this. Building on the example above, there are two people, Bill and Bob, who are friends. Both Bill and Bob watched the movie “Black Panther”. This means there are 3 nodes, with 4 known relationships:

  • Bill is friends with Bob
  • Bob is friends with Bill
  • Bill watched “Black Panther”
  • Bob watched “Black Panther”     

Figure 3 shows how this is visualized in a graph model.

Figure 3. Graph model representation with relationships

Since both nodes and relationships can contain additional information, we can include more information in each node, such as a LastName for each Person or a ReleaseDate for the Movie, and we can add additional information to a relationship, such as a Date for when each person watched the movie or for when they became friends.

Graph databases tend to have their own ways of querying data because you can be pulling data based upon both nodes and relationships. There are three common query approaches:

  • Gremlin – part of the Apache TinkerPop project, Gremlin is used for creating and querying property graphs. A query to get everyone Bill knows would look like this:

g.V().has("FirstName","Bill").
  out("Is_Friends_With").
  values("FirstName")

  • SPARQL – a World Wide Web Consortium-supported declarative language for graph pattern matching. A query to get everyone Bill knows would look like this:

PREFIX vcard: <http://www.w3.org/2001/vcard-rdf/3.0#>
SELECT ?FirstName
WHERE { ?person vcard:FirstName "Bill" .
        ?person vcard:Is_Friends_With ?friend .
        ?friend vcard:FirstName ?FirstName }

  • openCypher – a declarative query language for property graphs that was originally developed by Neo4j and then open-sourced. A query to get everyone Bill knows would look like this:

MATCH (me)-[:Is_Friends_With*1]-(remote_friend)
WHERE me.FirstName = 'Bill'
RETURN remote_friend.FirstName

The most common graph databases are Neo4j, ArangoDB, and Amazon Neptune.

Next time, don’t just say “let’s put that in a NoSQL database.” Instead, say what kind of NoSQL database makes sense for that combination of data and business need. That way, it is obvious you know what you are talking about and have put some thought into the idea!

Selecting a BI Data Stack – Snowflake vs Azure Synapse Analytics

This article is designed to help you choose the appropriate technical tools for building out a Data Lake and a Data Warehouse.

Components

A Data Lake is a centralized repository that allows you to store structured, semi-structured, and unstructured data at scale. The idea is that copies of all your transactional data are gathered, stored, and made available for additional processing, whether it is transformed and moved into a data warehouse or made available for direct usage for analytics such as machine learning or BI reporting.

A Data Warehouse, on the other hand, is a digital storage system that connects and harmonizes large amounts of data from many different sources. Its purpose is to feed business intelligence (BI), reporting, and analytics, and support regulatory requirements – so companies can turn their data into insight and make smart, data-driven decisions. Data warehouses store current and historical data in one place and act as the single source of truth for an organization.

While a data warehouse is a database structured and optimized to provide insight, a data lake is different because it stores relational data from line-of-business applications and non-relational data from other systems. The structure of the data, or schema, is not defined when the data is captured. This means you can store all your data without careful up-front design or the need to know what questions you might need answers for in the future. Different types of analytics, such as SQL queries, big data analytics, full-text search, real-time analytics, and machine learning, can then be used to uncover insights.

General Requirements 

There are a lot of different ways to implement a data lake. The simplest is a plain object storage system such as Amazon S3. Another is a combination of services, such as tying together a database system in which to store relational data and an object storage system to store unstructured or semi-structured data, or adding in stream processing, a key/value database, or any number of other products that handle the job similarly. With all these different approaches potentially fitting the definition of “data lake”, the first thing we must do is define more precisely the data lake requirements that we will use for the comparison.

  • Relational data – one requirement is that the data stack support multiple SaaS systems that will be providing data to the data lake. These systems default to relational/object-based structures where objects are consistently defined and interrelated, which means this information will be best served by storage within a relational database system. Examples of systems providing this type of data include Salesforce, Marketo, OpenAir, and NetSuite.
  • Semi-structured data – some of the systems that will act as sources for this data lake contain semi-structured data, which is data that has some structure but does not necessarily conform to a data model. Semi-structured data may mostly “look alike”, but each piece of data may have different attributes or properties. Data from these sources could be transformed into a relational form, but that transformation gets away from storing the data in its original state. Examples of sources of this kind of data include logs, such as Splunk or application logs.
  • Unstructured data – unstructured data is data that does not conform to a data model and has no easily identifiable structure, such as images, videos, documents, etc. At this time, there are no identified providers of unstructured data that are expected to be managed within the data lake.

Thus, fulfilling this set of data lake requirements means support for both relational/structured and semi-structured data is a “must-have”, and the ability to manage unstructured data is a “nice-to-have”.

The next set of requirements applies to both data lakes and data warehouses and is around being able to add, access, and extract data, with many more requirements around managing data as it is being read and extracted. These requirements are:

  • Input – The data lake must be able to support write and edit access from external systems. The system must support authentication and authorization as well as the ability to automate data movement without having to rely on internal processes. Thus, a tool requiring a human to click through a UI is not acceptable.
  • Access – the data lake must be able to support different ways of accessing data from external sources, so the tool:
    • must be able to support the processing of SQL (or similar approaches) for queries to limit/filter data being returned
    • must be supported as a data source for Tableau, Excel, and other reporting systems
    • should be able to give control over the number of resources used when executing queries to have control over the performance/price ratio

Top Contenders

There are a lot of products available that support data lake needs, data warehouse needs, or both. However, one of the key pivots was to look only at offerings that are available as a managed service in the cloud, rather than products that would be installed and maintained on-premises, and that can support both data lake and data warehouse needs. With that in mind, we narrowed the list down to five contenders: 1) Google BigQuery, 2) Amazon Redshift, 3) Azure Synapse Analytics, 4) Teradata Vantage, and 5) Snowflake Data Cloud, based upon the Forrester Wave™ on Cloud Data Warehouses study shown below in Figure 1.

Figure 1 – Forrester Wave on Cloud Data Warehouses

The Forrester scoring was based upon 25 different criteria, from which we pulled out the seven that were most important to our defined set of requirements and major use cases (in order of importance):

  1. Data Ingestion and Loading – The ability to easily insert and transform data into the product
  2. Administration Automation – An ability to automate all the administration tasks, including user provisioning, data management, and operations support
  3. Data Types – The ability to support multiple types of data, including structured, semi-structured, unstructured, files, etc.
  4. Scalability Features – The ability to easily and automatically scale for both storage and execution capabilities
  5. Security Features – The ability to control access to data stored in the product. This includes management at database, table, row, and column levels.
  6. Support – The strength of support offered by the vendor, including training, certification, knowledge base, and a support organization.
  7. Partners – The strength of the partner network, other companies that are experienced in implementing the software as well as creating additional integration points and/or features.
Forrester Scores | Google BigQuery | Azure Synapse Analytics | Teradata Vantage | Snowflake Data Cloud | Amazon Redshift
Data Ingestion and Loading | 5.00 | 3.00 | 3.00 | 5.00 | 5.00
Administration Automation | 3.00 | 3.00 | 3.00 | 5.00 | 3.00
Data Types | 5.00 | 5.00 | 5.00 | 5.00 | 5.00
Scalability Features | 5.00 | 5.00 | 3.00 | 5.00 | 3.00
Security Features | 3.00 | 3.00 | 3.00 | 5.00 | 5.00
Support | 5.00 | 5.00 | 3.00 | 5.00 | 5.00
Partners | 5.00 | 1.00 | 5.00 | 5.00 | 5.00
Average of Forrester Scores | 4.43 | 3.57 | 3.57 | 5.00 | 4.43
Table 1 – Base Forrester scores for selected products

This shows a difference between the overall scores, which are based on the full list of 25 criteria, and the scores based only upon those areas where our requirements pointed to the most need. Snowflake was an easy selection for the in-depth, final evaluation, but choosing the other products to dig deeper into was more problematic. In this first evaluation we compare Snowflake to Azure Synapse Analytics; other comparisons will follow.

Final Evaluation

There are two additional areas in which we dive deeper into the differences between Synapse and Snowflake: cost and execution processing. Execution processing is important because the more granular the ability to manage execution resources, the more control the customer has over performance in their environment.

Cost

Calculating cost for data platforms generally comes down to four different areas: data storage, processing memory, processing CPUs, and input/output. Azure offers two different pricing approaches for Synapse, Dedicated and Serverless. Dedicated is broken down into two values: storage and an abstract representation of compute resources and performance called a Data Warehouse Unit (DWU). The Serverless consumption model, on the other hand, charges by TB of data processed. Thus, a query that scans the entire database would be very expensive, while small, very concise queries will be relatively inexpensive. Snowflake has taken an approach similar to the Dedicated mode, with its abstract representation called a Virtual Warehouse (VW). We will build a pricing example that includes both Data Lake and Data Warehouse usage using the following customer definition:

  • 20 TBs of data, adding 50 GB a week
  • 2.5 hours a day on a 2 CPU, 32GB RAM machine to support incoming ETL work.
  • 10 users working 8 hours, 5 days a week, currently supported by a 4 CPU, 48GB RAM machine
Product | Description | Price | Notes
Synapse Ded. | Storage Requirements | $4,600 | 20 TB * $23 (per TB month) * 12 months
Synapse Ded. | ETL Processing | $2,190 | 400 Credits * 2.5 hours * 365 days * $0.006 a credit
Synapse Ded. | User Processing | $11,520 | 1000 Credits * 8 hours * 240 days * $0.006 a credit
Synapse Ded. | Total | $18,310 |
Synapse Serv. | Storage (Serverless) | $4,560 | 20 TB * $19 (per TB month) * 12 months
Synapse Serv. | Compute | $6,091 | ((5 TB * 240 days) + (.05 TB * 365 days)) * $5
Synapse Serv. | Total | $10,651 |
Snowflake | Storage Requirements | $1,104 | 4 TB (20 TB uncompressed) * $23 (per TB month)
Snowflake | ETL Processing | $3,650 | 2 Credits * 2.5 hours * 365 days * $2 a credit
Snowflake | User Processing | $15,360 | 4 Credits * 8 hours * 240 days * $2 a credit
Snowflake | Total | $20,114 |

As an estimate, before considering any discounts, Synapse is approximately 10% less expensive than Snowflake when using Dedicated consumption and almost 50% less costly when using Serverless consumption. That is very much an estimate, though, because Serverless charges by the TB processed in a query, so large or poorly constructed queries can be dramatically more expensive. However, there is some cost remediation in the Snowflake plan because the movement of data from the data lake to the data warehouse is simplified. Snowflake’s structure and SQL support allow data to be moved between databases using SQL, so there is only the one-time processing fee for the SQL execution rather than paying for the execution of a query to get data out of one system and another execution to load it into the other. Synapse offers some of the same capability, but much of that happens outside of the data management system, and the advantage gained in the process is less than that offered by Snowflake.
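
As a back-of-the-envelope illustration (using only the unit prices already shown in the table above), the difference between the two billing models comes down to paying per TB scanned versus paying for compute time:

using System;

// Synapse Serverless: billed per TB of data a query processes ($5 per TB in the table above).
const double dollarsPerTbScanned = 5.0;
double smallQuery = 0.05 * dollarsPerTbScanned;  // scans 50 GB     -> $0.25
double fullScan   = 20.0 * dollarsPerTbScanned;  // scans all 20 TB -> $100.00 for a single query

// Snowflake (and Synapse Dedicated): billed for compute while it runs, e.g. the ETL
// line above: 2 credits * 2.5 hours per day * 365 days * $2 per credit.
double snowflakeEtlPerYear = 2 * 2.5 * 365 * 2.0;  // = $3,650

Console.WriteLine($"{smallQuery:C} vs {fullScan:C} per query; ETL compute: {snowflakeEtlPerYear:C} per year");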

Execution Processing

The ability to flexibly scale the resources that you use when executing a command is invaluable, and both products do this to some extent. Snowflake, for example, uses the concept of a warehouse, which is simply a cluster of compute resources such as CPU, memory, and temporary storage. A warehouse can be either running or suspended, and charges accrue only while it is running. Snowflake’s flexibility comes from these warehouses: the user can provision warehouses of different sizes and then select the desired warehouse, or processing level, when connecting to the server to perform work, as shown in Figure 2. This allows the processing to be matched to the query being run. Selecting a single item from the database by its primary key may not need the most powerful (and most expensive) warehouse, but a report doing complete table scans across TBs of data could use the boost. Another example would be a scheduled, generated report where slightly longer processing time is not an issue.

Figure 2 – Switching Snowflake warehouses to change dedicated compute resources for a query

Figure 3 shows the connection management screen in the integration tool Boomi, with the arrow pointing towards where you can set the desired warehouse when you connect to Snowflake.

Figure 3 – Boomi connection asking for Warehouse name
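
Outside of an integration tool like Boomi, the same choice can be made directly from application code. The sketch below is illustrative only; it assumes the Snowflake.Data ADO.NET connector and uses made-up account, user, and warehouse names:

using Snowflake.Data.Client;

// The "warehouse" connection string property picks the compute cluster for this session.
using var conn = new SnowflakeDbConnection
{
    ConnectionString = "account=myaccount;user=report_user;password=****;" +
                       "db=ANALYTICS;schema=PUBLIC;warehouse=REPORTING_WH"
};
conn.Open();

// The warehouse can also be switched mid-session before a heavier query.
using var cmd = conn.CreateCommand();
cmd.CommandText = "USE WAREHOUSE HEAVY_LIFT_WH";
cmd.ExecuteNonQuery();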

Synapse takes a different approach. You either use the Serverless model, where you have no control over the processing resources, or the Dedicated model, where you determine the level of resource consumption when you create the SQL pool that will store your data. You do not have the ability to scale the execution resources up or down for an individual query.

Due to this flexibility, Snowflake has the advantage in execution processing.

Recommendation

In summary, Azure Synapse Analytics has an advantage on cost, while Snowflake has an advantage in the flexibility with which you can manage system processing resources. When bringing in the previous scoring provided by Forrester, Snowflake has the advantage based on its strength around data ingestion and administration automation, both areas that help speed delivery as well as ease the burden of operational support. Based on this analysis, my recommendation would be to use Snowflake for the entire data stack, including both Data Lake and Data Warehouse, rather than Azure Synapse Analytics.

Integrated Development Environment (IDE) Toolkits for AWS

As a developer myself, I have spent significant time working in so-called integrated development environments (IDEs) or source code editors. Many a time I’ve heard the expression “don’t make me leave my IDE” in relation to the need to perform some management task! For those of you with similar feelings, AWS offers integrations, known as “toolkits”, for the most popular IDEs and source code editors in use today in the .NET community – Microsoft Visual Studio, JetBrains Rider, and Visual Studio Code.

The toolkits vary in levels of functionality and the areas of development they target. All three, however, share the ability to make it easy to package up and deploy your application code to a variety of AWS services.

AWS Toolkit for Visual Studio

Ask any longtime developer working with .NET and it’s almost certain they will have used Visual Studio. For many .NET developers it could well be the only IDE they have ever worked with in a professional environment. It’s estimated that almost 90% of .NET developers are using Visual Studio for their .NET work which is why AWS has supported an integration with Visual Studio since Visual Studio 2008. Currently, the AWS toolkit is available for the Community, Professional, and Enterprise editions of Visual Studio 2017 and Visual Studio 2019. The toolkit is available on the Visual Studio marketplace.

The AWS Toolkit for Visual Studio, as the most established of the AWS integrations, is also the most functionally rich and supports working with features of multiple AWS services from within the IDE environment. First, a tool window – the AWS Explorer – surfaces credential and region selection, along with a tree of services commonly used by developers as they learn, experiment, and work with AWS. You can open the explorer window using an entry on the IDE’s View menu. Figure 1 shows a typical view of the explorer, with its tree of services and controls for credential profile and region selection.

Figure 1. The AWS Explorer

The combination of selected credential profile and region scopes the tree of services and service resources within the explorer, and resource views opened from the explorer carry that combination of credential and region scoping with them. In other words, if the explorer is bound to (say) US East (N. Virginia) and you open a view onto EC2 instances, that view shows only the instances in the US East (N. Virginia) region owned by the user represented by the selected credential profile. If the selected region or credential profile in the explorer changes, the instances shown in the document view window do not – the instances view remains bound to the original credential and region selection.

Expanding a tree node (service) in the explorer will display a list of resources or resource types, depending on the service. In both cases, double clicking a resource or resource type node, or using the Open command in the node’s context menu, will open a view onto that resource or resource type. Consider the Amazon DynamoDB, Amazon EC2, and Amazon S3 nodes, shown in Figure 2.

Figure 2. Explorer tree node types

AWS Toolkit for JetBrains Rider

Rider is a relatively new, and increasingly popular, cross-platform IDE from JetBrains. JetBrains are the creators of the popular ReSharper plugin, and the IntelliJ IDE for Java development, among other tools. Whereas Visual Studio runs solely on Windows (excluding the “special” Visual Studio for Mac), Rider runs on Windows, Linux, and macOS.

Unlike the toolkit for Visual Studio, which is more established and has a much broader range of supported services and features, the toolkit for Rider focuses on features to support the development of serverless and container-based modern applications. You can install the toolkit from the JetBrains marketplace by selecting Plugins from the Configure link in Rider’s startup dialog.

To complement the toolkit’s features, you do need to install a couple of additional dependencies. The first is the AWS Serverless Application Model (SAM) CLI, because the Rider toolkit uses the SAM CLI to build, debug, package, and deploy serverless applications. In turn, the SAM CLI needs Docker to be able to provide the Lambda-like debug environment. Of course, if you are already working on container-based applications you’ll likely already have Docker installed.

With the toolkit and dependencies installed, let’s first examine the AWS Explorer window, to compare it to the Visual Studio toolkit. Figure 3 shows the explorer, with some service nodes expanded.

Figure 3. The AWS Explorer in JetBrains Rider

We can see immediately that the explorer gives access to fewer services than the explorer in Visual Studio; this reflects the Rider toolkit’s focus on serverless and container development. However, it follows a familiar pattern of noting your currently active credential profile, and region, in the explorer toolbar.

Controls in the IDE’s status bar link to the credential and region fields in the explorer. This enables you to see at a glance which profile and region are active without needing to keep the explorer visible (this isn’t possible in Visual Studio, where you need to open the explorer to see the credential and region context). Figure 4 shows the status bar control in action to change region. Notice that the toolkit also keeps track of your most recently used profile and region, to make changing back-and-forth super quick.

Figure 4. Changing region or credentials using the IDE’s status bar

AWS Toolkit for Visual Studio Code

Visual Studio Code (VS Code) is an editor with plugin extensions, rather than a full-fledged IDE in the style of Visual Studio or Rider. However, the sheer range of available extensions makes it a more than capable development environment for multiple languages, including C# / .NET development on Windows, Linux, and macOS systems.

Like the toolkit for Rider, the VS Code toolkit focuses on the development of modern serverless and container-based applications. The toolkit offers an explorer pane with the capability to list resources across multiple regions, similar to the single-region explorers available in the Visual Studio and Rider toolkits. The VS Code toolkit also offers local debugging of Lambda functions in a Lambda-like environment. As with Rider, the toolkit uses the AWS SAM CLI to support debugging and deployment of serverless applications, so you do need to install this dependency, and Docker as well, to take advantage of debugging support.

Credentials are, once again, handled using profiles, and the toolkit offers a command palette item that walks you through setting up a new profile if no profiles already exist on your machine. If you have existing profiles, the command simply loads the credential file into the editor, where you can paste the keys to create a new profile.
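
For reference, a profile in the shared credentials file (typically ~/.aws/credentials, or %USERPROFILE%\.aws\credentials on Windows) is just an INI-style entry; the profile name and key values below are placeholders:

[my-dev-profile]
aws_access_key_id = AKIAXXXXXXXXXXEXAMPLE
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxEXAMPLEKEY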

Figure 5 shows some of the available commands for the toolkit in the command palette.

Figure 5. Toolkit commands in the command palette

Figure 6 highlights the active credential profile and an explorer bound to multiple regions. Clicking the status bar indicator enables you to change the bound credentials. You can show or hide additional regions in the explorer from a command, or by using the toolbar in the explorer.

Figure 6. Active profile indicator and multi-region explorer

Obviously, I have not really touched on each of the toolkits in much detail. I will be doing that in future articles, where I will go much deeper into the capabilities, strengths, and weaknesses of the various toolkits and how they may affect your ability to interact with AWS services directly from within your IDE. Know now, however, that if you are a .NET developer who uses one of these common IDEs (yes, there are still some devs that do development in Notepad), there is an AWS toolkit that will help you as you develop.

.NET and Containers on AWS (Part 2: AWS App Runner)

The newest entry into AWS container management does a lot to reduce the amount of configuration and management required when working with containers. App Runner is a fully managed service that automatically builds and deploys the application as well as creating the load balancer. App Runner also manages the scaling up and down based upon traffic. What do you, as a developer, have to do to get your container running in App Runner? Let’s take a look.

First, log into AWS and go to the App Runner console home. If you click on the All Services link you will notice a surprising AWS decision: App Runner is found under the Compute section rather than the Containers section, even though its purpose is to easily run containers. Click on the Create an App Runner Service button to get the Step 1 page as shown below:

Creating an App Runner service

The first section, Source, requires you to identify where the container image that you want to deploy is stored. At the time of this writing, you can choose either a container registry, Amazon ECR, or a source code repository. Since we have already loaded an image into ECR, let us move forward with that option by ensuring Container registry and Amazon ECR are selected, and then clicking the Browse button to bring up the image selection screen as shown below.

Selecting a container image from ECR

In this screen we selected the “prodotnetonaws” image repository that we created in the last post and the container image with the tag of “latest”.

Once you have completed the Source section, the next step is to determine the Deployment settings for your container. Here, your choices are to use the Deployment trigger of Manual, which means that you must fire off each deployment yourself using the App Runner console or the AWS CLI, or Automatic, where App Runner watches your repository, deploying the new version of the container every time the image changes. In this case, we will choose Manual so that we have full control of the deployment.

Warning: When you have your deployment settings set to Automatic, every time the image is updated a deployment will be triggered. This may be appropriate in a development or even a test environment, but it is unlikely that you will want to use this setting in a production setting.

The last piece of data that you need to enter on this page is the ECR access role that App Runner will use to access ECR. In this case, we will select Create new service role and App Runner will pre-fill a suggested role name. Click the Next button when completed.

The next step is entitled Configure service and is designed to, surprisingly enough, help you configure the service. There are five sections on this page: Service settings, Auto scaling, Health check, Security, and Tags. Only the first section is expanded; all of the other sections need to be expanded before you can see their options.

The first section, Service settings, with default settings can be seen below.

Service settings in App Runner

Here you set the Service name, select the Virtual CPU & memory, configure any optional Environment variables that you may need, and determine the TCP Port that your service will use. If you are using the sample application that we loaded into ECR earlier, you will need to change the port value from the default 8080 to port 80 so that it will serve the application we configured in the container. You also have the ability, under Additional configuration, to add a Start command which will be run on launch. This is generally left blank if you have configured the entry point within the container image. We gave the service the name “ProDotNetOnAWS-AR” and left the rest of the settings in this section at their defaults.

The next section is Auto scaling, and there are two major options, Default configuration and Custom configuration, each of which provides the ability to set the auto scaling values as shown below.

Setting the Auto scaling settings in App Runner

The first of these auto scaling values is Concurrency. This value represents the maximum number of concurrent requests that an instance can process before App Runner scales up the service. The default configuration has this set at 100 requests, which you can customize by using the Custom configuration setting.

The next value is Minimum size, or the number of instances that App Runner provisions for your service, regardless of concurrent usage. This means that there may be times when some of these provisioned instances are not being used. You will be charged for the memory usage of all provisioned instances, but only for the CPU of those instances that are actually handling traffic. The default configuration for minimum size is 1 instance.

The last value is Maximum size. This value represents the maximum number of instances to which your service will scale; once your service reaches the maximum size there will be no additional scaling no matter the number of concurrent requests. The default configuration for maximum size is 25 instances.

If any of the default values do not match your needs, you will need to create a custom configuration, which will give you control over each of these configuration values. To do this, select Custom configuration. This will display a drop-down that contains all of the App Runner configurations you have available (currently it will only have “DefaultConfiguration”) and an Add new button. Clicking this button will bring up the entry screen as shown below.

Customizing auto scaling in App Runner

The next section after auto scaling is Health check. The first value you set in this section is the Timeout, which describes the amount of time, in seconds, that the load balancer will wait for a health check response. The default timeout is 5 seconds. You can also set the Interval, the number of seconds between health checks of each instance, which defaults to 10 seconds. You can also set the Unhealthy and Healthy thresholds, where the unhealthy threshold is the number of consecutive health check failures required before an instance is considered unhealthy and recycled, and the healthy threshold is the number of consecutive successful health checks necessary for an instance to be considered healthy. The defaults for these values are 5 requests for unhealthy and 1 request for healthy.

Next, you can assign an IAM role to the instance in the Security section. This IAM role will be used by the running application if it needs to communicate with other AWS services, such as S3 or a database server. The last section is Tags, where you can add one or more tags to the App Runner service.

Once you have finished configuring the service, clicking the Next button will bring you to the review screen. Clicking the Create and deploy button on this screen will give the approval for App Runner to create the service, deploy the container image, and run it so that the application is available. You will be presented with the service details page and a banner that informs you that “Create service is in progress.” This process will take several minutes, and when completed will take you to the properties page as shown below.

After App Runner is completed

Once the service is created and the status is displayed as Running, you will see a value for a Default domain which represents the external-facing URL. Clicking on it will bring up the home page for your containerized sample application.

There are five tabs displayed under the domain: Logs, Activity, Metrics, Configuration, and Custom domain. The Logs tab displays the Event, Deployment, and Application logs for this App Runner service. This is where you will be able to look for any problems during deployment or while running the container itself. Under the Event log section, you should be able to see the listing of events from the service creation. The Activity tab is very similar, in that it displays a list of activities taken by your service, such as creation and deployment.

The next tab, Metrics, tracks metrics related to the entire App Runner service. This is where you will be able to see information on HTTP connections and requests, as well as track the changes in the number of used and unused instances. By going into the sample application (at the default domain) and clicking around the site, you should see these values change, and a graph will become available that provides insight into these various activities.

The Configuration tab allows you to view and edit many of the settings that we set in the original service creation. There are two different sections where you can make edits. The first is at the Source and deployment level, where you can change the container source and whether the deployment will be manual or automatically happen when the repository is updated. The second section where you can make changes is at the Configure service level where you are able to change your current settings for autoscaling, health check, and security.

The last tab on the details page is Custom domain. The default domain will always be available for your application; however, it is likely that you will want to point other domain names to it – we certainly wouldn’t want to use https://SomeRandomValue.us-east-2.awsapprunner.com for our company’s web address. Linking domains is straightforward from the App Runner side. Simply click the Link domain button and input the custom domain that you would like to link; we of course used “prodotnetonaws.com”. Note that this does not include “www”, because that usage is currently not supported through the App Runner console. Once you enter the custom domain name, you will be presented with the Configure DNS page as shown below.

Configuring a custom domain in App Runner

This page contains a set of certificate validation records that you need to add to your Domain Name System (DNS) so that App Runner can validate that you own or control the domain. You will also need to add CNAME records to your DNS that target the App Runner domain; you will need one record for the custom domain and another for the www subdomain, if desired. Once the certificate validation records are added to your DNS, the custom domain status will become Active and traffic will be directed to your App Runner instance. This validation can take anywhere from minutes up to 48 hours.

Once your App Runner instance is up and running for the first time, there are several actions that you can take on it, as shown in the upper-right corner of an earlier screenshot. The first is the orange Deploy button. This will deploy the container from either the container or source code repository, depending upon your configuration. You can also Delete the service, which is straightforward, as well as Pause the service. There are some things to consider when you pause your App Runner service. The first is that your application will lose all state – much as if you were deploying a new service. The second consideration is that if you are pausing your service because of a code defect, you will not be able to deploy a new (presumably fixed) container without first resuming the service.

.NET and Containers on AWS (Part 1: ECR)

Amazon Elastic Container Registry (ECR) is a system designed to support storing and managing Docker and Open Container Initiative (OCI) images and OCI-compatible artifacts. ECR can act as a private container image repository, where you can store your own images; as a public container image repository, for managing publicly accessible images; and as a way to manage access to other public image repositories. ECR also provides lifecycle policies, which allow you to manage the lifecycle of images within your repository; image scanning, so that you can identify any potential software vulnerabilities; and cross-region and cross-account image replication, so that your images can be available wherever you need them.

As with the rest of AWS services, ECR is built on top of other services. For example, Amazon ECR stores container images in Amazon S3 buckets so that server-side encryption comes by default. Or, if needed, you can use server-side encryption with KMS keys stored in AWS Key Management Service (AWS KMS), all of which you can configure as you create the registry. As you can likely guess, IAM manages access rights to the images, supporting everything from strict rules to anonymous access to support the concept of a public repository.

There are several different ways to create a repository. The first is through the ECR console by selecting the Create repository button. This will take you to the Create repository page as shown below:

Through this page you can set the Visibility settings, Image scan settings, and Encryption settings. There are two visibility settings, Private and Public. Private repositories are managed through IAM permissions and are part of the AWS Free Tier, with 500 MB-month of storage for one year. Public repositories are openly visible and available for open pulls. Amazon ECR offers you 50 GB-month of always-free storage for your public repositories, and you can transfer 500 GB of data to the internet for free from a public repository each month anonymously (without using an AWS account). If you authenticate to a public repository on ECR, you can transfer 5 TB of data to the internet for free each month, and you get unlimited bandwidth for free when transferring data from a public repository in ECR to AWS compute resources in any region.

Enabling Scan on push means that every image that is uploaded to the repository will be scanned. This scanning is designed to help identify any software vulnerabilities in the uploaded image. Scans also run automatically every 24 hours, but turning this on ensures that the image is checked before it can ever be used. The scanning is done using the Common Vulnerabilities and Exposures (CVEs) database from the Clair project, outputting a list of scan findings.

Note: Clair is an open-source project that was created for the static analysis of vulnerabilities in application containers (currently including OCI and docker). The goal of the project is to enable a more transparent view of the security of container-based infrastructure – the project was named Clair after the French term which translates to clear, bright, or transparent.

The last section is Encryption settings. When this is enabled, as shown below, ECR will use AWS Key Management Service (KMS) to manage the encryption of the images stored in the repository, rather than the default encryption approach.

You can use either the default settings, where ECR creates a default key (with an alias of aws/ecr) or you can Customize encryption settings and either select a pre-existing key or create a new key that will be used for the encryption.

Other approaches for creating an ECR repo

Just as with all the other services that we have talked about so far, there is the UI-driven way to build an ECR repository, like we just went through, and then several other approaches to creating a repo.

AWS CLI

You can create an ECR repository in the AWS CLI using the create-repository command as part of the ECR service.

C:\> aws ecr create-repository ^
    --repository-name prodotnetonaws ^
    --image-scanning-configuration scanOnPush=true ^
    --region us-east-1

You can control all of the basic repository settings through the CLI just as you can when creating the repository through the ECR console, including assigning encryption keys and defining the repository URI.

AWS Tools for PowerShell

And, as you probably aren’t too surprised to find out, you can also create an ECR repository using AWS Tools for PowerShell:

C:\> New-ECRRepository `
    -RepositoryName prodotnetonaws `
    -ImageScanningConfiguration_ScanOnPush $true

Just as with the CLI, you have the ability to fully configure the repository as you create it.

AWS Toolkit for Visual Studio

You can also create a repository using the AWS Toolkit for Visual Studio, although you must depend upon the extension’s built-in default values, because the only thing that you can control through the AWS Explorer is the repository name, as shown below. As you may notice, the AWS Explorer does not have its own node for ECR and instead puts the Repositories sub-node under the Amazon Elastic Container Service (ECS) node. This is a legacy from the past, before ECR became its own service, but it is still an effective way to access and work with repositories in Visual Studio.

Once you create a repository in Visual Studio, going to the ECR console and reviewing the repository that was created will show you that it used the default settings: it is a private repository with both “Scan on push” and “KMS encryption” disabled.

At this point, the easiest way to show how this will all work is to create an image and upload it into the repository. We will then be able to use this container image as we go through the various AWS container management services.

Note: You will not be able to complete many of these exercises without Docker already running on your machine. You will find download and installation instructions for Docker Desktop at https://www.docker.com/products/docker-desktop. Once you have Docker Desktop installed you will be able to locally build and run container images.

We will start by creating a simple ASP.NET Core sample web application in Visual Studio through File -> New Project and selecting the ASP.NET Core Web App (C#) project template. You then name your project and select where to store the source code. Once that is completed you will get a new screen that asks for additional information, as shown below. The Enable Docker checkbox defaults to unchecked, so make sure you check it and then select the Docker OS to use, which in this case is Linux.

This will create a simple solution that includes a Docker file as shown below:

If you look at the contents of that generated Docker file you will see that it is very similar to the Docker file that we went through earlier, containing the instructions to restore and build the application, publish the application, and then copy the published application bits to the final image, setting the ENTRYPOINT.

FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["SampleContainer/SampleContainer.csproj", "SampleContainer/"]
RUN dotnet restore "SampleContainer/SampleContainer.csproj"
COPY . .
WORKDIR "/src/SampleContainer"
RUN dotnet build "SampleContainer.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "SampleContainer.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SampleContainer.dll"]

If you look at your build options in Visual Studio as shown below, you will see additional ones available for containers. The Docker choice, for example, will work through the Docker file and start the final container image within Docker and then connect the debugger to that container so that you can debug as usual.

The next step is to create the container image and persist it into the repository. To do so, right click the project name in the Visual Studio Solution Explorer and select Publish Container to AWS to bring up the Publish Container to AWS wizard as shown below.

The image above shows that the repository we just created is selected as the destination repository and that the Publish only the Docker image to Amazon Elastic Container Registry option is selected as the Deployment Target (these are not the default values for these options). Once you have this configured, click the Publish button. You’ll see the window in the wizard grind through a lot of processing, then a console window may pop up to show you the actual upload of the image, and then the wizard window will automatically close if successful.

Logging into Amazon ECR and going into the prodotnetonaws repository (the one we uploaded the image into), as shown below, will show that there is now an image available within the repository with a latest tag, just as configured in the wizard. You can click the Copy URI icon to get the URL that you will use when working with this image. We recommend that you do this now and paste it somewhere easy to find, as that is the value you will use to access the image.

Now that you have a container image stored in the repository, we will look at how you can use it in the next article.

Refactor vs Replacement: a Journey

Journeys around the refactoring of application and system architectures are dangerous. Every experienced developer probably has a horror story about an effort to evolve an application that went wrong. Because of this, any consideration of refactoring an application or system needs to start with a frank evaluation as to whether that application has hit the end of its lifecycle and should be retired. You should only consider a refactor if you are convinced that the return on investment for that refactor is higher than the return on a rewrite or replacement. Even then, it should not be an automatic decision.

First, let us define “refactor.” The multiple ways in which this word is used can lead to confusion. The first usage is best described by Martin Fowler, who defined refactoring as “a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.” That is not the appropriate definition in this case. Instead, refactor here refers to a process that evolves the software through multiple iterations, with each iteration getting closer to the desired end state.

One of the common misconceptions about an application refactor is that the refactor will successfully reduce problematic areas. However, studies have shown that refactoring has a significantly high chance of introducing new problem areas into the code (Cedrim et al., 2017). This risk increases if there is other work going on within the application, so an application that requires continuing feature or defect-resolution work is more likely to have a higher number of defects caused by the refactoring process (Ferreira et al., 2018). Thus, the analysis regarding refactoring an application has to contain both an evaluation of the refactor work and of the integration work for new functionality that may span both refactored and not-yet-refactored code. Unanticipated consequences when changing or refactoring applications are so common that many estimation processes try to account for them. Unfortunately, these processes only try to allow for the time necessary to understand and remediate the consequences rather than trying to determine where they will occur.

Analyzing an application for replacement

Understanding the replacement cost of an application is generally easier than analyzing the refactor cost because there tends to be a deeper understanding of the development process as it pertains to new software as opposed to altering software; it is much easier to predict side effects in a system as you build it from scratch.  This approach also allows for “correction of error” where previous design decisions may no longer be appropriate.  These “errors” could be the result of changes in technology, changes in business, or simply changes in the way that you apply software design patterns to the business need; it is not a judgement that the previous approach was wrong, just that it is not currently the most appropriate approach.

Understanding the cost of replacing an application then becomes a combination of the new software development cost plus some level of opportunity cost. Using opportunity cost is appropriate because the developers who are working on the replacement application are no longer able to work on the current, to-be-replaced version of the application. Obviously, when considering an application for replacement, the size and breadth of scope of the application is an important driver, because the more use cases a system has to satisfy, the more expensive it will be to replace. Lastly, the implementation approach matters as well. An organization that can run the two applications in parallel and phase processing from one to the other will have less expense than an organization that turns the old system off while turning on the new system. Being able to phase the processing like that typically means that you can revert to the previous version, at least before the final phase is cut over. A hard switchover makes it less likely that you can simply “turn the old system back on.”

Analyzing the Analysis

A rough understanding of the two approaches, and of the costs and benefits of each, will generally lead to a clear “winner” where one approach is demonstrably superior to the other. In the remaining cases, you should give the advantage to the approach that demonstrates closer adherence to modern architectural design – which will most likely be the replacement application. This becomes the decision maker because of its impact on scalability, reliability, security, and cost, all of which define a modern cloud-based architecture. This circles back to my earlier point that the return on investment for a refactor must be higher than the return on investment for a rewrite or replacement, as there is always a larger margin of error when estimating the effort of a refactor compared to green-field development. You need to consider this fact when you evaluate the ROI of refactoring your application. Otherwise, you will mistakenly refactor when you should have replaced, and that will lead to another horror story. Our job is already scary enough without adding to the horror…

References

Cedrim, D., Gheyi, R., Fonseca, B., Garcia, A., Sousa, L., Ribeiro, M., Mongiovi, M., de Mello, R., Chanvez, A. Understanding the Impact of Refactoring on Smells: A Longitudinal Study of 23 Software Projects, Proceedings of 2017 11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, Paderborn, Germany, September 4–8, 2017 (ESEC/FSE’17), 11 pages. https://doi.org/10.1145/3106237.3106259

Ferreira, I., Fernandes, E., Cedrim, D., Uchoa, A., Bibiano, A., Garcia, A., Correia, J., Santos, F., Nunes, G., Barbosa, C., Fonseca, B., de Mello, R. Poster: The Buggy Side of Code Refactoring: Understanding the Relationship Between Refactoring and Bugs. 40th International Conference on Software Engineering (ICSE 2018), Posters Track, Gothenburg, Sweden. https://doi.org/10.1145/3183440.3195030

Single Page Applications and Content Management Systems

This weekend, over beers and hockey, I heard a comment about how many Content Management Systems (CMS) do not support the use of a modern single-page application (SPA) approach.  Normally, this would not have bothered me, but since this was not the first time I heard that point this holiday season, I feel like I need to set the record straight.  A CMS manages content.  It provides a way to work with that content, including placing it on a page.  All of that work is done within the constraints of the CMS tool (examples include Sitecore, Joomla, Adobe Experience Manager, Drupal, WordPress, etc.).  What that CMS does with that content, however, is completely up to the developer/implementer.

A well-designed CMS system depends upon a collection of components.  These components each fulfill a different set of needs, either layout\design (such as a block of text with a title and sub-title as one component and an image, header, and block of content as another component) or feature\functionality (an email address collection form, a credit card payment section).  Each of these components supports different content.  This means that a component made up of a title, a sub-title, and a block of content must be able to support different text in each of those areas.  Creating a website is as simple as selecting a page template (there may be more than one) and then using the CMS functionality to put the various components onto the page.  An example of what this could look like is shown below:

An example page broken down into various content sections

The de-facto way to manage this is for the developer to write one or more HTML\CSS page-level templates, which act as the container for the components that site authors put onto that page.  Each of those components is defined the same way, as snippets of HTML\CSS.  This means that the request\response cycle looks like the following (a rough sketch of the server-side assembly follows the list):

  1. User clicks a link into the website
  2. Request made to URL
  3. CMS determines page for request, extracts definition
  4. Pulls out page template HTML
  5. Inserts all component HTML as defined, adding any author-controlled data such as the text for the title, which image to display, etc.
  6. Responds with completed HTML
  7. User presented with completely new page
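Here is that rough sketch: a minimal, framework-agnostic TypeScript illustration of steps 3 through 6.  The PageDefinition shape, the {{placeholder}} convention, and the renderPage helper are hypothetical stand-ins for what a real CMS does internally.

// Hypothetical shapes for a CMS page definition; a real CMS tracks far more metadata.
interface ComponentDefinition {
  templateHtml: string;             // snippet of HTML with {{placeholder}} tokens
  content: Record<string, string>;  // author-controlled values (title text, image URL, ...)
}

interface PageDefinition {
  pageTemplateHtml: string;         // page-level container with a {{components}} slot
  components: ComponentDefinition[];
}

// Replace {{placeholder}} tokens in a component template with author-controlled values.
function renderComponent(component: ComponentDefinition): string {
  return component.templateHtml.replace(
    /\{\{(\w+)\}\}/g,
    (_match, key: string) => component.content[key] ?? "",
  );
}

// Steps 3 through 6 above: look up the page definition, render every component,
// and hand back one fully assembled HTML document.
function renderPage(page: PageDefinition): string {
  const componentHtml = page.components.map(renderComponent).join("\n");
  return page.pageTemplateHtml.replace("{{components}}", componentHtml);
}

// Tiny usage example with made-up content.
const homePage: PageDefinition = {
  pageTemplateHtml: "<html><body>{{components}}</body></html>",
  components: [
    {
      templateHtml: "<h1>{{title}}</h1><h2>{{subtitle}}</h2>",
      content: { title: "Welcome", subtitle: "A CMS-rendered page" },
    },
  ],
};
console.log(renderPage(homePage));

The point is simply that, on every click, the server hands back one fully assembled HTML document.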

Because this is the general approach, people assume that CMS systems do not support SPA-type designs.  That assumption is wrong.  Virtually all enterprise-level CMS systems will support a non-page-refresh approach to design; all it takes is thinking outside the "de-facto" box.

One example of thinking about this process differently:

Consider an approach where a link on the menu does not request a new page but instead makes a RESTful request to the CMS that returns information about which components belong on the "selected page" and where they need to be placed on that page.  Assuming each component is able to populate itself (perhaps through an AJAX call, once the component has the ID that defines its content, which was provided in the original RESTful response to the menu click), it becomes a simple programming problem to assemble everything into a coherent structure that is completely CMS-managed.  The flow below walks through what this could look like.

  1. User clicks a link into the website
  2. RESTful request made to URL
  3. CMS determines page for request, extracts definition
  4. Provides page template identifier
  5. Adds information about the components
  6. Responds by sending a JSON object that describes the page layout
  7. Client-side processes JSON object and includes all necessary components (HTML & business logic)
  8. Rendering starts for user
  9. Each component makes an AJAX query to get its own content (if necessary)
  10. User presented with fully-rendered page without a complete page refresh

Using this kind of approach means that the JSON object returned in step 6 to represent the page shown earlier may look something like this:

{
  "pageTemplate": "standard",
  "content": [
    {
      "component": {
        "template": "title_subtitle",
        "contentId": 12875342567,
        "sortOrder": 1
      }
    },
    {
      "component": {
        "template": "email_signup",
        "contentId": 12875618367,
        "sortOrder": 2
      }
    },
    {
      "component": {
        "template": "calendar_events",
        "contentId": "129UY54A7",
        "sortOrder": 3
      }
    }
  ]
}

This should be simple to process, even in that un-typed crap show that is JavaScript.
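To back up that claim, here is a minimal client-side sketch in TypeScript.  The endpoint paths (/api/layout, /api/content), the page-root container, and the renderer functions are all hypothetical; a real implementation would live inside whatever client framework you prefer.

// Hypothetical shape of the CMS response shown above.
interface ComponentEntry {
  template: string;
  contentId: number | string;
  sortOrder: number;
}

interface PageLayout {
  pageTemplate: string;
  content: { component: ComponentEntry }[];
}

// Map each CMS template name to a client-side renderer.  These renderers are
// placeholders; real ones would build each component's actual markup.
const renderers: Record<string, (content: unknown) => HTMLElement> = {
  title_subtitle: (content) => {
    const section = document.createElement("section");
    section.textContent = JSON.stringify(content); // placeholder rendering
    return section;
  },
  email_signup: (_content) => document.createElement("form"),
  calendar_events: (_content) => document.createElement("div"),
};

// Steps 2 through 9 of the flow above: fetch the page layout, then let each
// component fetch its own content by ID and render into the page container.
async function renderRoute(path: string): Promise<void> {
  const layoutResponse = await fetch(`/api/layout?path=${encodeURIComponent(path)}`);
  const layout: PageLayout = await layoutResponse.json();

  const container = document.getElementById("page-root");
  if (!container) return;
  container.innerHTML = "";

  const ordered = [...layout.content].sort(
    (a, b) => a.component.sortOrder - b.component.sortOrder,
  );

  for (const { component } of ordered) {
    const contentResponse = await fetch(`/api/content/${component.contentId}`);
    const content: unknown = await contentResponse.json();
    const render = renderers[component.template];
    if (render) container.appendChild(render(content));
  }
}

Each renderer owns one component template, which keeps the CMS in charge of what appears on the page while the client stays in charge of how it is rendered.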

I know what you are thinking: "Hey Bill, if it's so simple, why isn't everyone doing it?"  That is actually easy to answer.  There are many reasons why a company would choose NOT to build a SPA-like façade over their CMS.  In no particular order:

  • Complexity – It is easier and cheaper to write HTML-based components where the CMS interlaces them for you.  That is what CMSs do – they manage the relationship of content.  Abstracting that out to a client system adds complexity.
  • Editor support – Many CMS systems use WYSIWYG editors, so there is a need to have a defined HTML structure in a component so authors can see everything as they build it.  Since that structure has to be there already, any move to client-side rendering is all new work.
  • Requires simple graphical design – the more complex the graphical layout of a site, the more complex a SPA wrapper would need to be.  The example that was used above is simple, one column that contains several components.  What if there are multiple columns in a page?  What if one of the components is a multi-column control that allows the author to put multiple columns of content into a single page column?  As the components get more complex, so will the work you have to do on the client to process them correctly.
  • Search-engine optimization – The whole point of a CMS is to have fine control over messaging and content.  A SPA-like approach makes SEO problematic, as search robots still have problems understanding the interaction between page changes and content changes and so cannot properly index the content.
  • Analytics – Most analytics tools will just work when looking at sites where new pages load in the browser.  They will not "just work" in a SPA environment; instead, you will generally need to add client-side logic to ensure that clicks are properly understood, with the correct context, by the analytics package (see the sketch after this list).
  • Initial load – the first visit to a SPA application is long, generally because all of the templates need to be downloaded before the first page can render.  There is a lot of downloading that would need to happen in an enterprise-level website with 10 page templates and 100 different components.  Yes, it could be shortened by using background or just-in-time loading, but are you willing to take a slow-down in an ecommerce environment where the first few seconds of the interaction between visitor and site are critical?
  • 3rd party add-ins – Analytics was mentioned earlier as being a potential issue.  Similar problems will happen with common third-party add-ins like Mousify or Optimizely; the expectations those tools have about working within a well-understood DOM will affect their usefulness.
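For the analytics bullet above, here is the promised sketch: a hypothetical TypeScript example of firing a virtual page view when the SPA changes routes.  It assumes a gtag-style analytics function is already on the page; check your analytics vendor's own SPA guidance for the exact call.

// Assume a gtag-style analytics function has already been loaded on the page.
declare function gtag(command: "event", eventName: string, params: Record<string, string>): void;

// Send a "virtual" page view, since a SPA never triggers a full page load.
function trackVirtualPageView(path: string, title: string): void {
  gtag("event", "page_view", {
    page_path: path,
    page_title: title,
  });
}

// Hypothetical hook points: call this wherever your SPA finishes a route change,
// and also when the user navigates with the browser's back/forward buttons.
window.addEventListener("popstate", () => {
  trackVirtualPageView(window.location.pathname, document.title);
});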

I know, I spent the first part of this post talking about how it is possible to do this, and then spent the second part seeming to say why it should not be done.  That is not what the second section is saying.  Other than the first three bullets, all of these points are issues that anyone considering a SPA site should understand and account for in their evaluation, regardless of whether a CMS is involved.

What it really comes down to is this: if, after understanding the implicit limitations imposed by a SPA approach, you still feel that it is the best way to design your website, then rest assured that your CMS can support it.  You simply need to change the way that your CMS delivers content to match the new design paradigm.  The easiest approach to understanding this is to build a simple POC.  There are two "web pages" below.  Each is made up of one page template and four content components: a block of text with title and sub-title, an email sign-up form, a calendar, and an image.  Note that the second page has two instances of the block-of-text component with different content.

If you can make this work in your client-side framework du jour, then you can do it with your CMS.

A discussion on data validation

Validation is a critical consideration when working with sourced data, that is, data over which the system designer does not have complete control.  Validation is also an essential part of interpersonal relationships, but that is out of scope for this article…  When looking at sourced data (and interpersonal relationships, actually), we can see that the data can be made up of historical data, integration-related data, and\or user-entered data.  This data is called out because it was created by systems that are not necessarily making the same assumptions about data as your system.  Since there are different assumptions, there is a chance that this provided data will not work within your system.  This is where validation comes in: a way of examining data at run-time to ensure "correctness."  This may range from something as simple as checking the length of a field to something more complicated and businessy, such as a verified email address or a post-office-approved address.  This examination for "correctness" is validation.

In the grand scheme of things, validation is simply the detection, measurement, correction, and prevention of defects caused directly, or indirectly, by poor-quality data.  There are three different types of validation: internal validation, external validation, and process validation.  Internal validation methods are embedded in the production system at design time to ensure the internal validity of data by eliminating any inconsistencies.  External validation, on the other hand, makes use of knowledge about the real world; when looking at a data model, the external validity of that model is determined by comparing it against the real world.  Finally, process validation focuses on the actual data processing operations.  This focus on operations means that all data defects are synonymous with process defects, and thus, correction of process defects effectively leads to the prevention of future data defects.  Efforts around process correction are based upon the fact that once a piece of incomplete data is in a database, it will be challenging to fix.

lotsa words needs a picture of ducks!

Validation and correctness are somewhat vague words, so let us add some context around them.  The easiest way to do that is to talk about the different types of validation that may need to be done.  The most common forms of validation are range and constraint checking, cross-reference checking, and consistency checking; a short code sketch of all three follows the list below.

  • Range and constraint check – this approach is where the validation logic examines data for consistency with a minimum/maximum range, such as spatial coordinates that have a real boundary, or for consistency with a test that evaluates a sequence of characters, such as evaluating whether a US phone number has ten digits or a US postal code has either 5 or 9 numeric values.  A presence check, where a value is validated based on the expectation that it is populated, is a special case of a range and constraint check.
  • Cross-reference check – this type of check is when a value is validated by determining whether it exists elsewhere.  This is the kind of validation where a user-entered postal code would be checked against a database of postal codes to determine validity.  This is different from the constraint check, which was simply checking whether the characters in the entered string were appropriate, not making any determination as to whether those values are real.  Another example of a cross-reference check would be a uniqueness test, such as evaluating whether or not an email address has already been used within the system.  You can depend upon cross-reference checks being more expensive than range and constraint checks.
  • Consistency check – where the system determines whether the data entered is consistent with other data in the system.  An example of this would be checking that the date an order shipped is not before the date the order was placed.
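Here is the promised sketch: a minimal TypeScript illustration of the three check types.  The Order shape, the postal-code lookup, and the error messages are all hypothetical; a real system would plug checks like these into its own validation framework.

// Hypothetical order record used to demonstrate the three check types.
interface Order {
  phone: string;
  postalCode: string;
  orderedOn: Date;
  shippedOn?: Date;
}

// Range and constraint check: the value must match a structural rule.
function checkUsPhone(phone: string): string | null {
  const digits = phone.replace(/\D/g, "");
  return digits.length === 10 ? null : "US phone numbers must have 10 digits";
}

// Cross-reference check: the value must exist somewhere else.  The lookup set
// here stands in for a real postal-code database or service.
const knownPostalCodes = new Set(["30301", "98052", "10001"]);
function checkPostalCodeExists(postalCode: string): string | null {
  return knownPostalCodes.has(postalCode) ? null : "Unknown postal code";
}

// Consistency check: two values must make sense together.
function checkShipDate(order: Order): string | null {
  if (order.shippedOn && order.shippedOn < order.orderedOn) {
    return "An order cannot ship before it was placed";
  }
  return null;
}

// Run every check and collect the failures.
function validateOrder(order: Order): string[] {
  return [
    checkUsPhone(order.phone),
    checkPostalCodeExists(order.postalCode),
    checkShipDate(order),
  ].filter((error): error is string => error !== null);
}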

The last thing for consideration as part of validation is post-validation actions, the steps to be taken after data validation.  The easiest decision is when validation is successful, as the business process can continue.  The more significant decision is what to do when validation fails.  Three different actions can happen in the event of a validation failure: an enforcement action, an advisory action, or a verification action (a small sketch of handling these follows the list below).  I guess there is always the option of taking no action at all, but that kind of makes the work performed to do the validation a big ole' waste of time…

  • Enforcement action – When taking an enforcement action, the expectation is that data that failed validation will be put into a correct state.  There are typically two different ways that could happen: either the user corrects the information based upon feedback from the validator, such as by providing a missing value, OR the system automatically fixes the data, such as fixing the capitalization of a LastName field that was entered in all caps.
  • Advisory action – this approach is when the data is either accepted or rejected by the system, with an advisory issued about the failed validation.  This approach tends to be taken when there is no user involvement, and thus no ability to perform an enforcement action, and could be represented as an item being loaded with a status that indicates the validation problem.
  • Verification action – This type of approach could be considered a special type of advisory action.  Also, by design, it is not a strict validation approach.  An example of this approach is when a user is asked to verify that the entered data is what they wanted to enter.  Sometimes there may be a suggestion to the contrary.  This approach is relatively common in sites that validate addresses, for instance.  In that case, the address validation may indicate that the street name is incorrectly spelled, so you end up giving the user the option of accepting the recommendation and replacing the value with the corrected value or keeping their version.
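And the sketch promised above: a hypothetical TypeScript shape for post-validation outcomes, showing how the three action types might be represented and handled.  The names here are illustrative, not taken from any specific framework.

// Hypothetical discriminated union describing what to do after a failed validation.
type PostValidationAction =
  | { kind: "enforcement"; field: string; correctedValue: string } // fix it, or have the user fix it
  | { kind: "advisory"; field: string; message: string }           // accept or reject, but record the issue
  | { kind: "verification"; field: string; suggestion: string };   // ask the user to confirm or take a suggestion

function handleAction(action: PostValidationAction): void {
  switch (action.kind) {
    case "enforcement":
      console.log(`Replacing ${action.field} with "${action.correctedValue}"`);
      break;
    case "advisory":
      console.log(`Flagging ${action.field}: ${action.message}`);
      break;
    case "verification":
      console.log(`Asking the user: did you mean "${action.suggestion}" for ${action.field}?`);
      break;
  }
}

// Example: the address validator thinks the street name is misspelled.
handleAction({ kind: "verification", field: "street", suggestion: "Peachtree St NE" });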

Validation is an important consideration when building any application that interacts with data.  There are various types of validation: internal validation, external validation, and process validation.  There are several different types of checking that could be performed as part of validation, including range and constraint checks, cross-reference checks, and consistency checks.  Lastly, there are three major approaches for managing a validation failure: an enforcement action, an advisory action, and a verification action.

The next posts in the Validation series will show how to implement differing validation strategies.  Stay tuned!

Configuring the AWS Toolkit in JetBrains’ Rider

JetBrains just announced the release of their Amazon Web Services (AWS) Toolkit for Rider.  For those that aren't familiar with it, Rider is JetBrains' cross-platform .NET IDE.  Rider is based on JetBrains' IntelliJ platform and ReSharper, and can be considered a direct competitor to Visual Studio.  It is also considerably less expensive than Visual Studio.  If you don't already have Rider, you can download a 30-day trial version at their site.

NOTE:  There is actually more work that needs to be done to fully configure your system to support the AWS Toolkit in Rider.  The prerequisites include:

  • An AWS account configured with the correct permissions. Create a free AWS account here.
  • The AWS CLI (Command Line Interface) and AWS SAM (Serverless Application Model) CLI tools, which the plugin uses to build, deploy and invoke your code. SAM Instructions are several steps long and not trivial.  The regular CLI install steps, shown here, are also not trivial.
  • .NET Core 2.0 or 2.1. It’s not enough to just have .NET Core 3.0 or above installed. Make sure you’re running tests on the same version you’re targeting. Instructions
  • The SAM CLI will download and create a container to invoke your Lambda locally. Instructions (Should be covered in the CLIs, but just in case)

Install the Toolkit

Before we can walk through the new toolkit, we have to get it installed.  Installing the toolkit is simple once all the prerequisites are completed.  The directions are available from the AWS site here.  I have created my own version of these directions below because the sample on the AWS site is based upon the IntelliJ IDE rather than the Rider IDE, so some of the steps are slightly unclear or look different.

  1. Open Rider. You do the installation of toolkits through the IDE rather than installing them outside of the context of the IDE (like a lot of the Visual Studio toolkits).
  2. Select Configure on the splash screen as shown below.
    Rider-Configure
  3. Select Plugins on the drop-down menu. This will bring up the Plugins screen.
  4. Search for “AWS”. You should get a list of plugins, including the AWS Toolkit.
    Rider-AWSToolkit
  5. Install it.
  6. After installing, you will be asked to restart the IDE. You should probably go ahead and follow those directions.  I didn’t try NOT doing it, but I am a firm believer in the usefulness of “turning it off and on”; so I am all over it when the manufacturer tells me to take that step.
  7. Once the application is restarted, selecting New Solution will show you that there is a new solution template available, the AWS Serverless Application, as shown below.
    Rider-CREATEAWSSolution
  8. Go ahead and Create the solution named HelloWorld. It may be a little predictable, but HelloWorld is still the gold standard of initial applications…  Rider will go ahead and load the solution once it is completed and then open the README.md file.

Configure your connection to AWS

If you have already connected to AWS through the CLI or another toolkit or IDE, you should see the following in the bottom left corner of the IDE:

Rider-CredentialsReloaded

If not, follow the instructions for Configuring the AWS CLI here.  This will set up your default profile.  You will probably have to restart Rider after setting up your credentials.

Once Rider is restarted, you will see a notification on the bottom-right of the window that says “No credentials selected.”  Before you do what I did and roll your eyes, muttering “OMG” under your breath, click on that message (which is really a button)

Rider-NoCredentialsSelected

And find out that it is actually a menu.  Select the credential from the drop-down (it is probably called Default)

Rider-SelectProfile

And you’ll see that the it no longer says “No credentials selected.”  Congratulations, you are connected!