Integrated Development Environment (IDE) Toolkits for AWS

As a developer myself, I have spent significant time working in integrated development environments (IDEs) and source code editors. Many a time I’ve heard the expression “don’t make me leave my IDE” in relation to the need to perform some management task! For those of you with similar feelings, AWS offers integrations, known as “toolkits”, for the IDEs and source code editors most popular in the .NET community today – Microsoft Visual Studio, JetBrains Rider, and Visual Studio Code.

The toolkits vary in their levels of functionality and the areas of development they target. All three, however, share a common ability: making it easy to package up and deploy your application code to a variety of AWS services.

AWS Toolkit for Visual Studio

Ask any longtime developer working with .NET, and it’s almost certain they will have used Visual Studio. For many .NET developers, it could well be the only IDE they have ever worked with in a professional environment. It’s estimated that almost 90% of .NET developers use Visual Studio for their .NET work, which is why AWS has supported an integration with Visual Studio since Visual Studio 2008. Currently, the AWS toolkit is available for the Community, Professional, and Enterprise editions of Visual Studio 2017 and Visual Studio 2019, and can be installed from the Visual Studio Marketplace.

The AWS Toolkit for Visual Studio, as the most established of the AWS integrations, is also the most functionally rich and supports working with features of multiple AWS services from within the IDE environment. First, a tool window – the AWS Explorer – surfaces credential and region selection, along with a tree of services commonly used by developers as they learn, experiment, and work with AWS. You can open the explorer window using an entry on the IDE’s View menu. Figure 1 shows a typical view of the explorer, with its tree of services and controls for credential profile and region selection.

Figure 1. The AWS Explorer

The combination of selected credential profile and region scopes the tree of services and service resources within the explorer, and resource views opened from the explorer carry that combination of credential and region scoping with them. In other words, if the explorer is bound to (say) US East (N. Virginia) and you open a view onto EC2 instances, that view shows only the instances in the US East (N. Virginia) region owned by the user represented by the selected credential profile. If the selected region or credential profile in the explorer then changes, the instances shown in the document view window do not – the instances view remains bound to the original credential and region selection.

Expanding a tree node (service) in the explorer displays a list of resources or resource types, depending on the service. In both cases, double-clicking a resource or resource type node, or using the Open command on the node’s context menu, opens a view onto that resource or resource type. Consider the Amazon DynamoDB, Amazon EC2, and Amazon S3 nodes shown in Figure 2.

Figure 2. Explorer tree node types

AWS Toolkit for JetBrains Rider

Rider is a relatively new, and increasingly popular, cross-platform IDE from JetBrains. JetBrains are the creators of the popular ReSharper plugin, and the IntelliJ IDE for Java development, among other tools. Whereas Visual Studio runs solely on Windows (excluding the “special” Visual Studio for Mac), Rider runs on Windows, Linux, and macOS.

Unlike the toolkit for Visual Studio, which is more established and has a much broader range of supported services and features, the toolkit for Rider focuses on features to support the development of serverless and container-based modern applications. You can install the toolkit from the JetBrains marketplace by selecting Plugins from the Configure link in Rider’s startup dialog.

To use the toolkit’s features, you need to install a couple of additional dependencies. The first is the AWS Serverless Application Model (SAM) CLI, because the Rider toolkit uses the SAM CLI to build, debug, package, and deploy serverless applications. In turn, the SAM CLI needs Docker to provide the Lambda-like debug environment. Of course, if you are already working on container-based applications, you’ll likely have Docker installed already.
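As a quick sanity check before opening a project, a short sketch using only the Python standard library can confirm both tools are reachable. The command names `sam` and `docker` are what the official installers register on PATH; everything else here is illustrative:

```python
# Illustrative sketch: verify the Rider toolkit's external dependencies
# (the SAM CLI and Docker) are installed and discoverable on PATH.
import shutil


def missing_dependencies(tools=("sam", "docker")):
    """Return the required command-line tools that are not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]


if __name__ == "__main__":
    missing = missing_dependencies()
    if missing:
        print("Install before debugging serverless projects:", ", ".join(missing))
    else:
        print("All toolkit dependencies found.")
```

The same check applies to the VS Code toolkit later in this article, since it shares the SAM CLI and Docker dependencies.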

With the toolkit and dependencies installed, let’s first examine the AWS Explorer window, to compare it to the Visual Studio toolkit. Figure 3 shows the explorer, with some service nodes expanded.

Figure 3. The AWS Explorer in JetBrains Rider

We can see immediately that the explorer gives access to fewer services than its counterpart in Visual Studio; this reflects the Rider toolkit’s focus on serverless and container development. However, it follows a familiar pattern, noting your currently active credential profile and region in the explorer toolbar.

Controls in the IDE’s status bar link to the credential and region fields in the explorer. This enables you to see at a glance which profile and region are active without needing to keep the explorer visible (this isn’t possible in Visual Studio, where you need to open the explorer to see the credential and region context). Figure 4 shows the status bar control in action to change region. Notice that the toolkit also keeps track of your most recently used profile and region, to make changing back-and-forth super quick.

Figure 4. Changing region or credentials using the IDE’s status bar

AWS Toolkit for Visual Studio Code

Visual Studio Code (VS Code) is an editor with plugin extensions, rather than a full-fledged IDE in the style of Visual Studio or Rider. However, the sheer range of available extensions makes it a more than capable development environment for multiple languages, including C# / .NET development on Windows, Linux, and macOS systems.

Like the toolkit for Rider, the VS Code toolkit focuses on the development of modern serverless and container-based applications. The toolkit offers an explorer pane with the capability to list resources across multiple regions, unlike the single-region explorers available in the Visual Studio and Rider toolkits. The VS Code toolkit also offers local debugging of Lambda functions in a Lambda-like environment. As with Rider, the toolkit uses the AWS SAM CLI to support debugging and deployment of serverless applications, so you need to install that dependency, and Docker as well, to take advantage of the debugging support.

Credentials are, once again, handled using profiles, and the toolkit offers a command palette item that walks you through setting up a new profile if no profiles already exist on your machine. If you have existing profiles, the command simply loads the credential file into the editor, where you can paste the keys to create a new profile.
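For reference, the file that command opens is the standard shared credentials file (by default at `~/.aws/credentials`), which stores one or more named profiles in an INI-style format. The profile name `dev-sandbox` and the key values below are placeholders (the keys are AWS’s documented example values), purely for illustration:

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[dev-sandbox]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
```

Each bracketed section is a profile you can then select from the toolkit’s credential controls.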

Figure 5 shows some of the available commands for the toolkit in the command palette.

Figure 5. Toolkit commands in the command palette

Figure 6 highlights the active credential profile and an explorer bound to multiple regions. Clicking the status bar indicator enables you to change the bound credentials. You can show or hide additional regions in the explorer using a command, or the button on the explorer’s toolbar.

Figure 6. Active profile indicator and multi-region explorer

I have not touched on any of the toolkits in much detail here. I will do that in future articles, where I go much deeper into the capabilities, strengths, and weaknesses of the various toolkits and how they may affect your ability to interact with AWS services directly from within your IDE. Know now, however, that if you are a .NET developer who uses one of these common IDEs (yes, there are still some devs who do development in Notepad), there is an AWS toolkit that will help you as you develop.

Refactor vs Replacement: a Journey

Journeys around the refactoring of application and system architectures are dangerous. Every experienced developer probably has a horror story about an effort to evolve an application that went wrong. Because of this, any consideration of refactoring an application or system needs to start with a frank evaluation of whether that application has reached the end of its lifecycle and should be retired. You should only consider a refactor if you are convinced that the return on investment for that refactor is higher than the return on a rewrite or replacement. Even then, it should not be an automatic decision.

First, let us define a refactor, because the multiple ways in which this word is used can lead to confusion. The first usage is best described by Martin Fowler, who defined refactoring as “a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.” That is not the appropriate definition in this case. Instead, refactor here refers to a process that evolves the software through multiple iterations, with each iteration getting closer to the desired end state.

One of the common misconceptions about an application refactor is that it will successfully reduce problematic areas. However, studies have shown that refactoring has a significant chance of introducing new problem areas into the code (Cedrim et al., 2017). This risk increases if there is other work going on within the application, so an application that requires ongoing feature or defect-resolution work is likely to have a higher number of defects caused by the refactoring process (Ferreira et al., 2018). Thus, the analysis of refactoring an application has to include both an evaluation of the refactor work and of the integration work for new functionality that may span both refactored and unrefactored code. Unanticipated consequences when changing or refactoring applications are so common that many estimation processes try to account for them. Unfortunately, these processes only try to allow for the time necessary to understand and remediate those consequences, rather than trying to determine where they will occur.

Analyzing an application for replacement

Understanding the replacement cost of an application is generally easier than analyzing the refactor cost because there tends to be a deeper understanding of the development process as it pertains to new software as opposed to altering software; it is much easier to predict side effects in a system as you build it from scratch.  This approach also allows for “correction of error” where previous design decisions may no longer be appropriate.  These “errors” could be the result of changes in technology, changes in business, or simply changes in the way that you apply software design patterns to the business need; it is not a judgement that the previous approach was wrong, just that it is not currently the most appropriate approach.

Understanding the cost of replacing an application then becomes a combination of new software development cost plus some level of opportunity cost. Using opportunity cost is appropriate because the developers working on the replacement application are no longer able to work on the current, to-be-replaced, version of the application. Obviously, when considering an application for replacement, the size and breadth of scope of the application is an important driver: the more use cases a system has to satisfy, the more expensive it will be to replace. Lastly, the implementation approach matters as well. An organization that can support two applications running in parallel, phasing processing from one to the other, will incur less expense than an organization that turns the old system off while turning on the new system. Being able to phase the processing in this way typically also means that you can revert to the previous version at any point before the final phase completes. A hard switchover makes it far less likely that you can simply “turn the old system back on.”

Analyzing the Analysis

A rough understanding of the two approaches, and the costs and benefits of each, will generally point to a clear “winner” where one approach is demonstrably superior to the other. Where it does not, you should give the advantage to the approach that demonstrates closer adherence to modern architectural design – most likely the replacement application. This becomes the deciding factor because of its impact on scalability, reliability, security, and cost, all of which define a modern cloud-based architecture. This circles back to my earlier point that the return on investment for a refactor must be higher than the return on investment for a rewrite/replacement, as there is always a larger error in estimating the effort of a refactor than that of green-field development. You need to consider this fact when you evaluate the ROI of refactoring your application. Otherwise, you will mistakenly refactor when you should have replaced, and that will lead to another horror story. Our job is already scary enough without adding to the horror…
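To make that estimation-error point concrete, here is a small sketch with purely hypothetical numbers (the figures and error margins are invented for illustration): a refactor that looks cheaper on paper can become the more expensive option once its wider estimation error is priced in.

```python
# Hypothetical comparison: price estimation risk into each option's cost.
def risk_adjusted_cost(base_estimate, error_margin):
    """Inflate a base cost estimate by its expected estimation error."""
    return base_estimate * (1 + error_margin)


# Invented figures: the refactor looks cheaper up front, but refactor
# estimates carry a much wider error margin than green-field estimates.
refactor = risk_adjusted_cost(400_000, 0.5)    # -> 600000.0
rewrite = risk_adjusted_cost(500_000, 0.125)   # -> 562500.0

print("Refactor:", refactor, "Rewrite:", rewrite)
# Once risk is priced in, the nominally "cheaper" refactor costs more.
```

The specific margins are not the point; the asymmetry between them is what tips the comparison.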

References

Cedrim, D., Gheyi, R., Fonseca, B., Garcia, A., Sousa, L., Ribeiro, M., Mongiovi, M., de Mello, R., Chávez, A. Understanding the Impact of Refactoring on Smells: A Longitudinal Study of 23 Software Projects. Proceedings of the 2017 11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE ’17), Paderborn, Germany, September 4–8, 2017, 11 pages. https://doi.org/10.1145/3106237.3106259

Ferreira, I., Fernandes, E., Cedrim, D., Uchoa, A., Bibiano, A., Garcia, A., Correia, J., Santos, F., Nunes, G., Barbosa, C., Fonseca, B., de Mello, R. Poster: The buggy side of code refactoring: Understanding the relationship between refactoring and bugs. 40th International Conference on Software Engineering (ICSE), Posters Track, Gothenburg, Sweden. https://doi.org/10.1145/3183440.3195030