AWS Serverless Application Model (SAM) Command Line Interface – Build, Test, and Debug Serverless Apps Locally

Decades ago, I wrote page after page of code in 6502 assembly language. After assembling and linking the code, I would load it into memory, set breakpoints at strategic locations, and step through to make sure that everything worked as intended. These days, I no longer have the opportunity to write or debug any non-trivial code, so I was a bit apprehensive when it came time to write this blog post (truth be told, I have been procrastinating for several weeks).

SAM CLI
I want to tell you about the new Serverless Application Model (SAM) Command Line Interface, and to gain some confidence in my ability to build something using AWS Lambda as I do so! Let’s review some terms to get started:

AWS SAM, short for Serverless Application Model, is an open source framework you can use to build serverless applications on AWS. It provides a shorthand syntax you can use to describe your application (Lambda functions, API endpoints, DynamoDB tables, and other resources) using a simple YAML template. During deployment, SAM transforms and expands the shorthand SAM syntax into an AWS CloudFormation template. Then, CloudFormation provisions your resources in a reliable and repeatable fashion.

The AWS SAM CLI, formerly known as SAM Local, is a command-line interface that supports building SAM-based applications. It supports local development and testing, and is also an active open source project. The CLI lets you choose from Python, Node.js, Java, Go, and .NET, and includes a healthy collection of templates to help get you started.

The sam local command in the SAM CLI delivers support for local invocation and testing of Lambda functions and SAM-based serverless applications, while running your function code locally in a Lambda-like execution environment. You can also use the sam local command to generate sample payloads locally, start a local endpoint to test your APIs, or automate testing of your Lambda functions.

Installation and Setup
Before I can show you how to use the SAM CLI, I need to install a couple of packages. The functions provided by sam local make use of Docker, so I need to work in a non-virtualized environment for a change! Here’s an overview of the setup process:

Docker – I install the Community Edition of Docker for Windows (a 512 MB download), and run docker ps to verify that it is working:
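
The check is a single command; an empty list is fine, since the point is just to confirm that the Docker daemon is reachable:

docker ps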

Python – I install Python 3.6 and make sure that it is on my Windows PATH:
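
A quick check from the command prompt confirms the version and the PATH entry (your minor version may differ):

python --version
where python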

Visual Studio Code – I install VS Code and the accompanying Python Extension.

AWS CLI – I install the AWS CLI:
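
I used pip for this as well; the bundled MSI installer is another perfectly good option:

pip install awscli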

And configure my credentials:
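
aws configure prompts for the access key ID, secret access key, default region, and output format:

aws configure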

SAM – I install the AWS SAM CLI using pip:
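
The install and a quick sanity check look like this (the version you get will be newer than the one I used):

pip install --user aws-sam-cli
sam --version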

Now that I have all of the moving parts installed, I can start to explore SAM.

Using SAM CLI
I create a directory (sam_apps) for my projects, and then I run sam init to create my first project:
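
I pass the runtime that matches the template that will be generated; sam-app is the default project name:

sam init --runtime python3.6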

This creates a sub-directory (sam-app) with all of the necessary source and configuration files inside:
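
The layout looks roughly like this (a README and a tests directory are also generated, and the exact contents depend on your SAM CLI version):

sam-app/
    template.yaml
    hello_world/
        __init__.py
        app.py
        requirements.txt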

I create a build directory inside of hello_world, and then I install the packages defined in requirements.txt. The build directory contains the source code and the Python packages that are loaded by SAM Local:
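
Run from inside hello_world, this boils down to two commands; the -t flag tells pip to install the packages into the build directory:

mkdir build
pip install -r requirements.txt -t build/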

And one final step! I need to copy the source files to the build directory in order to deploy them:
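
On Windows this is a simple copy into the build directory (cp does the same job on Linux and macOS):

copy app.py build\
copy __init__.py build\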

My app (app.py and an empty __init__.py) is ready to go, so I start up a local endpoint:
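
The endpoint starts with one command, run from the directory that contains template.yaml:

sam local start-api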

At this point, the endpoint is listening on port 3000 for an HTTP connection, and a Docker container will launch when the connection is made. The build directory is made available to the container so that the Python packages can be loaded and the code in app.py can be run.

When I open http://127.0.0.1:3000/hello in my browser, the container image is downloaded if necessary, the code is run, and the output appears in my browser:

Here’s what happens on the other side. You can see all of the important steps here, including the invocation of the code, download of the image, mounting the build directory in the container, and the request logging:

I can modify the code, refresh the browser tab, and the new version is run:

The edit/deploy/test cycle is incredibly fast, and you will be more productive than ever!

There is one really important thing to remember here. The initial app.py file was created in the hello_world directory, and I copied it to the build directory a few steps ago. I can do this deployment step each time, or I can simply decide that the code in the build directory is the real deal and edit it directly. This will affect my source code control plan once I start to build and version my code.

What’s Going On
Now that the sample code is running, let’s take a look at the SAM template (imaginatively called template.yaml). In the interest of space, I’ll skip ahead to the Resources section:

Resources:

    HelloWorldFunction:
        Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
        Properties:
            CodeUri: hello_world/build/
            Handler: app.lambda_handler
            Runtime: python3.6
            Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
                Variables:
                    PARAM1: VALUE
            Events:
                HelloWorld:
                    Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
                    Properties:
                        Path: /hello
                        Method: get

This section defines the HelloWorldFunction, indicates where it can be found (hello_world/build/), how to run it (python3.6), and allows environment variables to be defined and set. Then it indicates that the function can process the HelloWorld event, which is generated by a GET on the indicated path (/hello).

This template is not reloaded automatically; if I change it I will need to restart SAM Local. I recommend that you spend some time altering the names and paths here and watching the errors that arise. This will give you a good understanding of what is happening behind the scenes, and will improve your productivity later.

The remainder of the template describes the outputs from the template (the API Gateway endpoint, the function’s ARN, and the function’s IAM Role). These values do not affect local execution, but are crucial to a successful cloud deployment.

Outputs:

    HelloWorldApi:
      Description: "API Gateway endpoint URL for Prod stage for Hello World function"
      Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"

    HelloWorldFunction:
      Description: "Hello World Lambda Function ARN"
      Value: !GetAtt HelloWorldFunction.Arn

    HelloWorldFunctionIamRole:
      Description: "Implicit IAM Role created for Hello World function"
      Value: !GetAtt HelloWorldFunctionRole.Arn

You can leave all of these as-is until you have a good understanding of what’s going on.

Debugging with SAM CLI and VS Code
Ok, now let’s get set up to do some interactive debugging! This took me a while to figure out and I hope that you can benefit from my experience. The first step is to install the ptvsd package:
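
The install itself is quick:

pip install ptvsd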

Then I edit requirements.txt to indicate that my app requires ptvsd (I copied the version number from the package name above):

requests==2.18.4
ptvsd==4.1.4

Next, I rerun pip to install this new requirement in my build directory:
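
This is the same command as before, run from the hello_world directory:

pip install -r requirements.txt -t build/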

Now I need to modify my code so that it can be debugged. I add this code after the existing imports:

import ptvsd
ptvsd.enable_attach(address=('0.0.0.0', 5858), redirect_output=True)
ptvsd.wait_for_attach()

The call to enable_attach tells the app that the debugger will attach to it on port 5858; the call to wait_for_attach pauses the code until the debugger is attached (you could make this conditional).

Next, I launch VS Code and select the root folder of my application:

Now I need to configure VS Code for debugging. I select the debug icon, click the white triangle next to DEBUG, and select Add Configuration:

I select the Python configuration, replace the entire contents of the file (launch.json) with the following text, and save the file (File:Save).

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [

        {
            "name": "Debug with SAM CLI (Remote Debug)",
            "type": "python",
            "request": "attach",
            "port": 5858,
            "host":  "localhost",
            "pathMappings": [
                {
                "localRoot": "${workspaceFolder}/hello_world/build",
                "remoteRoot" : "/var/task"
                }
            ]
        }
    ]
}

Now I choose this debug configuration from the DEBUG menu:

Still with me? We’re almost there!

I start SAM Local again, and tell it to listen on the debug port:
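
The port matches the one passed to enable_attach; -d is short for --debug-port:

sam local start-api -d 5858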

I return to VS Code and set a breakpoint (good old F9) in my code:

One thing to remember — be sure to open app.py in the build directory and set the breakpoint there.

Now I return to my web browser and visit the local address (http://127.0.0.1:3000/hello) again. The container starts up to handle the request and it runs app.py. The code runs until it hits the call to wait_for_attach, and now I hit F5 in VS Code to start debugging.

The breakpoint is hit, I single-step across the requests.get call, and inspect the ip variable:

Then I hit F5 to continue, and the web request completes. As you can see, I can use the full power of the VS Code debugger to build and debug my Lambda functions. I’ve barely scratched the surface here, and encourage you to follow along and pick up where I left off. To learn more, read Test Your Serverless Applications Locally Using SAM CLI.

Cloud Deployment
The SAM CLI also helps me to package my finished code, upload it to S3, and run it. I start with an S3 bucket (jbarr-sam) and run sam package. This creates a deployment package and uploads it to S3:
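
Here's the shape of the command; the name of the output template file is my choice, and the bucket must already exist:

sam package --template-file template.yaml --s3-bucket jbarr-sam --output-template-file packaged.yaml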

This takes a few seconds. Then I run sam deploy to create a CloudFormation stack:
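
The deploy references the packaged template; the stack name is my choice, and CAPABILITY_IAM is required because the template creates an IAM role:

sam deploy --template-file packaged.yaml --stack-name sam-app --capabilities CAPABILITY_IAM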

If the stack already exists, SAM CLI will create a Change Set and use it to update the stack. My stack is ready in a minute or two, and includes the Lambda function, an API Gateway, and all of the supporting resources:

I can locate the API Gateway endpoint in the stack outputs:
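
The console shows them, and I can also pull them from the command line (using the stack name from the deploy step):

aws cloudformation describe-stacks --stack-name sam-app --query "Stacks[0].Outputs"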

And access it with my browser, just like I did when the code was running locally:

I can also access the CloudWatch logs for my stack and function using sam logs:
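
A typical invocation names the function and the stack; --tail streams new log lines as they arrive:

sam logs -n HelloWorldFunction --stack-name sam-app --tail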

My SAM apps are now visible in the Lambda Console (this is a relatively new feature):

I can see the template and the app’s resources at a glance:

And I can see the relationship between resources:

There’s also a monitoring dashboard:

I can customize the dashboard by adding an Amazon CloudWatch dashboard to my template (read Managing Applications in the AWS Lambda Console to learn more).

That’s Not All
Believe it or not, I have given you just a taste of what you can do with SAM, SAM CLI, and the sam local command. Here are a couple of other cool things that you should know about:

Local Function Invocation – I can directly invoke Lambda functions:
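
For a function that does not need a payload, --no-event does the trick; otherwise -e passes a JSON event file (event.json here is just a placeholder):

sam local invoke HelloWorldFunction --no-event
sam local invoke HelloWorldFunction -e event.json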

Sample Event Source Generation – If I am writing Lambda functions that respond to triggers from other AWS services (S3 PUTs and so forth), I can generate sample events and use them to invoke my functions:
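
Here's the S3 flavor as an example; the available event sources and subcommands vary by SAM CLI version, and sam local generate-event --help lists them all:

sam local generate-event s3 put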

In a real-world situation I would redirect the output to a file, make some additional customization if necessary, and then use it to invoke my function.

Cookiecutter Templates – The SAM CLI can use Cookiecutter templates to create projects and we have created several examples to get you started. Take a look at Cookiecutter AWS Sam S3 Rekognition Dynamodb Python and Cookiecutter for AWS SAM and .NET to learn more.

CloudFormation Extensions – AWS SAM extends CloudFormation and lets you benefit from the power of infrastructure as code. You get reliable and repeatable deployments and the power to use the full suite of CloudFormation resource types, intrinsic functions, and other template features.

Built-In Best Practices – In addition to the benefits that come with an infrastructure as code model, you can easily take advantage of other best practices including code reviews, safe deployments through AWS CodePipeline, and tracing using AWS X-Ray.

Deep Integration with Development Tools – You can use AWS SAM with a suite of AWS tools for building serverless applications. You can discover new applications in the AWS Serverless Application Repository. For authoring, testing, and debugging SAM-based serverless applications, you can use the AWS Cloud9 IDE. To build a deployment pipeline for your serverless applications, you can use AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. You can also use AWS CodeStar to get started with a project structure, code repository, and a CI/CD pipeline that’s automatically configured for you. To deploy your serverless application you can use the AWS SAM Jenkins plugin, and you can use Stackery.io’s toolkit to build production-ready applications.

Check it Out
I hope that you have enjoyed this tour, and that you can make good use of SAM in your next serverless project!

Jeff;

 

In the Works – AWS Region in South Africa

Last year we launched new AWS Regions in France and China (Ningxia), and announced that we are working on regions in Bahrain, Hong Kong SAR, Sweden, and a second GovCloud Region in the United States.

South Africa in Early 2020
Today, I am happy to announce that we will be opening an AWS Region in South Africa in the first half of 2020. The new Region will be based in Cape Town, will consist of three Availability Zones, and will give AWS customers and partners the ability to run their workloads and store their data in South Africa. The addition of the AWS Africa (Cape Town) Region will also enable organizations to provide lower latency to end users across Sub-Saharan Africa and will enable more African organizations to leverage advanced technologies such as Artificial Intelligence, Machine Learning, Internet of Things (IoT), mobile services, and more to drive innovation.

AWS customers are already making use of 55 Availability Zones across 19 infrastructure regions worldwide. Today’s announcement brings the total number of global regions (operational and in the works) up to 23.

A Growing Presence
The new Region is the latest of a series of investments in South Africa, and is part of our commitment to support South Africa's transformation. In 2004, Amazon opened a Development Center in Cape Town that focuses on building pioneering networking technologies, next generation software for customer support, and the technology behind Amazon EC2. AWS has also added a number of teams including account managers, customer service reps, partner managers, solutions architects, and more, helping customers of all sizes as they move to the cloud.

In 2015, we continued our expansion, opening an office in Johannesburg, and in 2017 we brought the Amazon Global Network to Africa through AWS Direct Connect. Earlier this year we launched infrastructure on the African continent introducing Amazon CloudFront to South Africa, with two new edge locations in Cape Town and Johannesburg. We also support the growth of technology education with AWS Academy and AWS Educate and have supported the growth of new businesses through AWS Activate in the country for many years.

The addition of the AWS Region in South Africa will help builders and entrepreneurs in enterprises, the public sector, and startups across Sub-Saharan Africa to innovate and grow their organizations.

Talk to Us
As always, we are looking forward to serving new and existing customers in South Africa and working with partners across the region. Of course, the new Region will also be open to existing AWS customers who would like to serve users in South Africa and across the African continent.

To learn more about the AWS South Africa Region feel free to contact our team at aws-in-africa@amazon.com. If you are interested in joining the team and would like to learn more about AWS positions in South Africa, take a look at the Amazon Jobs site.

Jeff;

Check it Out – New AWS Pricing Calculator for EC2 and EBS

The blog post that we published over a decade ago to launch the Simple Monthly Calculator still shows up on our internal top-10 lists from time to time! Since that post was published, we have extended, redesigned, and even rebuilt the calculator a time or two.

New Calculator
Starting with a blank screen, an empty code repo, and plenty of customer feedback, we are building a brand-new AWS Pricing Calculator. The new calculator is designed to help you estimate and understand your eventual AWS costs. We did our best to avoid excessive jargon and to make the calculations obvious, transparent, and accessible. You can see the options that are available to you, explore the associated costs, and make high-quality data-driven decisions.

We’re starting out with support for EC2 instances, EBS volumes, and a very wide variety of purchasing models, with plans to add support for more services as quickly as possible.

A Quick Tour
The new calculator lives at https://calculator.aws. Each estimate consists of one or more groups and the first one is created automatically:

Each group has a name, and has pricing for services in a particular AWS region. I click Edit group to change the name and pick a region, and click Apply:

Back at the main page of the calculator, I click Add service and choose to configure some EC2 instances. The group can contain multiple types and configurations of instances; I click Configure to move ahead:

At this point I can make a Quick estimate (the default), or supply more details as part of an Advanced estimate. I’ll start with a Quick estimate:

Here are a couple of things to keep in mind when you make a quick estimate:

Instance Type – I have two options for choosing EC2 instance types; I can enter my resource requirements (vCPU count, memory size, and GPU count) and have the calculator choose the option with the lowest price, or I can pick an EC2 instance type by name.

Pricing Strategy – I can choose to use On-Demand Instances, Convertible Reserved Instances, or Standard Reserved Instances, and can choose payment terms and options for RIs.

EBS Volumes – I can choose the type and size of an EBS volume for the instance. Right now, the calculator allows you to associate one volume with each EC2 instance. If you need more than one, specify the total amount of storage you need across all volumes.

Details – I can expand the Show calculation section to see the math:

After I have made my choices, I click Add to my estimate to move ahead. My selections, along with their costs (annual, upfront, and monthly), are displayed:

I can go back and add another service, or create another group. I’ll add another EC2 instance, using an Advanced estimate this time around. Here’s where I start:

I have very fine-grained control over each aspect of my estimate. For example, I can characterize my workload in great detail. I click on Workload, and have the ability to select the graph that best represents my monthly workload:

I can even model workloads that have two or more independent daily (in this case) spike patterns. As I refine my model, the calculator figures out the most economical combination of On-Demand and Reserved Instances, and shows me the results:

The calculations are driven by the selection in the Pricing strategy. The default value, and the one that I used for the previous screen shot, is Cost optimized. I have other choices as well:

I can also model my data transfer in, out, and to other AWS regions:

Once I am happy with the results I click Add to my estimate, and take a look at my selections and their prices:

I can click Export to capture my estimate in spreadsheet form:

Here’s the data (I hid a few columns for clarity):

As you can see, the new calculator will quickly become a useful part of your planning and decision-making process.

One important thing to keep in mind: your estimates are stored in state that is local to the browser tab, and will be lost if you close the tab. The team is already hard at work on features that will allow you to save and even share your estimates, for launch in early 2019.

Stay Tuned
We will be adding more services and more features to the calculator in the months to come, and I’ll share some updates with you from time to time, either in this blog or via Twitter. If you have ideas, complaints, or other feedback, don’t hesitate to click on the Feedback link at the top of the page.

Jeff;

 

An Engineer and a tiger walk into a Clinic at the PASS Summit 2018

Sounds like there should be a punchline, right? Well, there is. Microsoft will be in full force at PASS Summit 2018, and this is the third blog post in a series describing the involvement of the Microsoft SQL Server engineering team there. You can read more about our keynotes and our Modernizing pre-conference seminar from this series. In this post, I'll talk more about the SQL Server engineering team sessions and participation in the famous Data Clinic (aka the SQL Clinic), covering everything from SQL Server 2019 to the practical and the modern.

SQL Server 2019

You may have seen the buzz about the launch of SQL Server 2019 preview a few weeks ago. Our team will immerse you in the strategy and the details of SQL Server 2019 with the following sessions:

The Tiger team

Many people have asked me whether the Tiger team is some special ops unit of SQL Server. Well in a way, yes: they are all members of the SQL Server engineering team but with very special skills and customer experience, and the team always puts on amazing presentations. Check out the lineup:

Don’t miss these

SQL Server is no longer just a relational database engine; it is a modern data platform. Innovative new parts of that platform include technologies like graph, Machine Learning Services, Java language support, in-memory columnstore, and containers. I promise you will walk away from these sessions having learned something you didn't know.

The Data Clinic

In 2003, Ken Henderson approached me with the idea that we should not just speak at the PASS Summit, but also do something more interactive. That year we started something called the Service Center with just the two of us in a room answering questions. The first day, almost no one came. The next day it was announced at the keynote that we would both be in a room answering questions. Later that day there were so many people in line to talk that the fire marshal had to kick people out of the room. Thus was born a unique opportunity to interact with Microsoft experts with no special add-on fees. Years later, in 2009, Mark Souza approached me and asked if we could combine forces with the CAT team to create something even bigger. We called it the SQL Clinic. It grew into something legendary. Each year we had to find a bigger space to accommodate the crowds. CSS and CAT together would offer troubleshooting and architecture advice, ad hoc with no special prep. We would help customers solve real-world problems or questions live, on laptops or whiteboards. Later, the rest of the SQL Engineering team joined the clinic to make it something special. Now at any time during the day, you can walk into the clinic area (a room is too small) and interact with SQL Engineering developers and program managers, architects, the Tiger Team, and the CAT team. And as the SQL Server product expanded into Azure, we renamed it the Data Clinic to account for BI and Azure services such as Azure SQL Database and CosmosDB. So whether you have a scratch, a broken leg, or need some serious surgery (OK, I'm just kidding, it is not a real clinic), come find us right across from the Expo entrance and get the answers you need or meet folks you have always wanted to meet from the engineering team. Or, drop by to chat during Data Clinic Happy Hour, 4-6 PM on Wednesday, November 7. Finally, you can read the history behind the clinic in the PASS v20 stories.

We will also staff an expo booth with demos and other opportunities to answer your questions. In addition, we will have short, 20-minute theater sessions in the booth, where we will demonstrate all the technologies of SQL Server 2019 and our tools, and you'll get a chance to take home some fun giveaways.

Finally, I want to extend an invitation to anyone attending the PASS Summit to come and meet our team, whether you find us at the clinic, after a session, at the booth, or just in the hallways. Please don't hesitate to stop and meet us and ask us a question. SQL Server is only successful because of the incredible community that uses and supports this product. I can speak for everyone on our engineering team when I say that we get it. We know this is your conference. And since this conference is for you, we don't want any question to go unanswered. This is my 16th PASS Summit and it is an event I look forward to every year. Our team hopes you walk away with the resources you need and enjoy all the content from Microsoft and our great community. See you in Seattle!


Amazon RDS Update – Console Update, RDS Recommendations, Performance Insights, M5 Instances, MySQL 8, MariaDB 10.3, and More

It is time for a quick Amazon RDS update. I’ve got lots of news to share:

Console Update – The RDS Console has a fresh, new look.

RDS Recommendations – You now get recommendations that will help you to configure your database instances per our best practices.

Performance Insights for MySQL – You can peer deep inside of MySQL and understand more about how your queries are processed.

M5 Instances – You can now use MySQL and MariaDB on M5 instances.

MySQL 8.0 – You can now use MySQL 8.0 in production form.

MariaDB 10.3 – You can now use MariaDB 10.3 in production form.

Let’s take a closer look…

Console Update
The RDS Console took on a fresh, new look earlier this year. We made it available to you in preview form during development, and it is now the standard experience for all AWS users. You can see an overview of your RDS resources at a glance, create a new database, access documentation, and more, all from the home page:

You also get direct access to Performance Insights and to the new RDS Recommendations.

RDS Recommendations
We want to make it easy for you to take our best practices into account when you configure your RDS database instances, even as those practices improve. The new RDS Recommendations feature will periodically check your configuration, usage, and performance data and display recommended changes and improvements, focusing on performance, stability, and security. It works with all of the database engines, and is very easy to use. Open the RDS Console and click Recommendations to get started:

I can see all of the recommendations at a glance:

I can open a recommendation to learn more:

I have four options that I can take with respect to this recommendation:

Fix Immediately – I select some database instances and click Apply now.

Fix Later – I select some database instances and click Schedule for the next maintenance window.

Dismiss – I select some database instances and click Dismiss to indicate that I do not want to make any changes, and to acknowledge that I have seen the recommendation.

Defer – If I do nothing, the recommendations remain active and I can revisit them at another time.

Other recommendations may include other options, or might require me to take some other actions. For example, the procedure for enabling encryption depends on the database engine:

RDS Recommendations are available today at no charge in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions. We plan to add additional recommendations over time, and also expect to make the recommendations available via an API.

Performance Insights for MySQL
I can now peek inside of MySQL to see which queries, hosts, and users are consuming the most time, and why:

You can identify expensive SQL queries and other bottlenecks with a couple of clicks, looking back across the timeframe of your choice: an hour, a day, a week, or even longer.

This feature was first made available for PostgreSQL (both RDS and Aurora) and is now available for MySQL (again, both RDS and Aurora). To learn more, read Using Amazon RDS Performance Insights.

M5 Instances
The M5 instances deliver improved price/performance compared to M4 instances, and offer up to 10 Gbps of dedicated network bandwidth for database storage.

You can now launch M5 instances (including the new high-end m5.24xlarge) when using RDS for MySQL and RDS for MariaDB. You can scale up to these new instance types by modifying your existing DB instances:
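
Using the CLI, the change is a single call; the instance identifier below is a placeholder, and --apply-immediately is optional if you would rather wait for the next maintenance window:

aws rds modify-db-instance --db-instance-identifier my-database --db-instance-class db.m5.large --apply-immediately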

MySQL 8
Version 8 of MySQL is now available on Amazon RDS. This version of MySQL offers better InnoDB performance, JSON improvements, better GIS support (new spatial datatypes, indexes, and functions), common table expressions to reduce query complexity, window functions, atomic DDLs for faster online schema modification, and much more (read the documentation to learn more).

MariaDB 10.3
Version 10.3 of MariaDB is now available on Amazon RDS. This version of MariaDB includes a new temporal data processing feature, improved Oracle compatibility, invisible columns, performance enhancements including instant ADD COLUMN operations & fast-fail DDL operations, and much more (read the documentation for a detailed list).

Available Now
All of the new features, engines, and instance types are available now and you can start using them today!

Jeff;

 

 

The October release of Azure Data Studio is now available

We are excited to announce the October release of Azure Data Studio (formerly known as SQL Operations Studio) is now available.

Download Azure Data Studio and review the Release Notes to get started.

Note: If you are currently using the preview version, SQL Operations Studio, and would like to retain your settings when you upgrade to the latest version, follow these instructions. When you download Azure Data Studio, remember to enable preview features by default on first launch; you can disable them later in settings if you don't need them. Otherwise you will be missing preview experiences like Query Plans, certain extension support, and more.

Azure Data Studio is a new cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms on Windows, MacOS, and Linux. To learn more, visit our GitHub.

Azure Data Studio was announced Generally Available last month at Microsoft Ignite. If you missed the GA announcement, you can see "Azure Data Studio for SQL Server" on the SQL Server blog. You won't want to miss the detailed matrix comparing SSMS and Azure Data Studio, along with answers to many of your questions.

Check out this video for a general overview of Azure Data Studio.

In this October edition of the monthly release blog, we will cover features shipped in the September GA release as well as what is new in the October release.

This includes:

For complete updates, refer to the Release Notes.

SQL Server 2019 Preview extension

As announced at Microsoft Ignite, one of the most exciting additions in our September GA release was the SQL Server 2019 Preview extension. As covered in the blog announcements, starting with the SQL Server 2019 preview, SQL Server big data clusters allow you to deploy scalable clusters of SQL Server, Spark, and HDFS Docker containers running on Kubernetes.

These components are running side by side to enable you to read, write, and process big data from Transact-SQL or Spark. SQL Server big data clusters allow you to easily combine and analyze your high-value relational data with high-volume big data. To learn about all the excitement of SQL Server Big Data Clusters, follow the documentation here.

These experiences are built as an extension to Azure Data Studio. We could go into full depth about all the great capabilities this extension includes, but a deep dive into any one of these features could be a full blog post in itself. Here is a high-level summary of the features, and then you can see a full demo of them below.

  • Support for SQL Server 2019 features including big data cluster support
    • Connect to the HDFS/Spark Gateway shipped with SQL Server 2019
    • Browse HDFS, upload files, save files, and launch useful actions such as Analyze in Notebook for CSV files
    • Submit Spark jobs from the dashboard or right-click on a HDFS/Spark connection in Object Explorer
  • Azure Data Studio Notebooks
    • Create or open Notebooks using an integrated Notebook viewer. In this release the Notebook viewer supports connecting to local kernels and the SQL Server 2019 big data cluster only.
    • Use the PROSE Code Accelerator libraries in your Notebook to learn file format and data types for fast data preparation.
  • SQL Server PolyBase Create External Table Wizard
    • Create an external table and its supporting metadata structures with an easy-to-use wizard. In this release, remote SQL Server and Oracle servers are supported.

Demo of SQL Server 2019 preview extension capabilities:

To download the extension, follow the instructions here.

Introducing the Azure Resource Explorer

As part of our goal to unify data management experiences, we have made it easier to manage your Azure subscriptions through the Azure Resource Explorer. Originally shipped as an extension, this feature is now built into the core product of Azure Data Studio.

After downloading the latest version, you will now see an Azure icon on the left bar, which you can click on to navigate to the Azure Resource Explorer.

With this feature, you can now easily manage your Azure SQL Server, Azure SQL Database, and the recently released Azure SQL Managed Instance resources. By clicking the filter icon to the right of the explorer, you can select which subscriptions you want to have displayed.

After drilling down to your target SQL instance through the explorer, you can then click on the plug icon next to each SQL instance to open up the connection dialog to directly connect to that resource and instantly start querying.

To learn more about the Azure Resource Explorer, check out our documentation.

SQL Server Agent extension improvements

One of our engineering focuses is to improve our first party extensions, which include SQL Server Agent, SQL Server Profiler, and SQL Server Import. As one of the first steps, we have brought a lot of UI and functionality fixes in SQL Server Agent, particularly in the Edit Job experience.

Now you can edit your Job steps, schedules, alerts, and notifications within the dialog.

If you are an avid user of SQL Server Agent, this is your chance to have a say in the new Agent experience in Azure Data Studio. You can report an issue directly on GitHub or go to Help->Report an issue to report directly from the product. Let us know your daily scenarios and how we can empower you to use SQL Agent in Azure Data Studio every day.

To learn more about SQL Agent or how to acquire the extension, check out our documentation.

Improve Object Explorer and Query Editor connectivity robustness

As part of addressing customer reported issues, we put an emphasis on improving connectivity robustness across Object Explorer and Query Editor. In particular, queries that lose a connection will automatically attempt to reconnect.

To see a full list of the connection investments, see below:

Custom connection name option to provide alternative name

As requested on our GitHub issues page, you can now provide friendly connection names for your connections. This is particularly useful if your connection instance is an IP address or is very long, or if you want to hide the name of the server in a public-facing demo or in documentation.

This shows up as the last input box in the connection dialog as you can see in the screenshot below:

This will then appear in your Servers pane.

VS Code refresh from 1.23 to 1.26.1

Because Azure Data Studio is a fork of Visual Studio Code, our team periodically refreshes Azure Data Studio with stable, mature VS Code releases. This directly benefits users, especially in the editor and configuration experience.

The latest refresh picks up the changes from the July release of Visual Studio Code. This was implemented in the September release, but it is still worth highlighting for those coming from SQL Operations Studio.

A summary of changes:

To see the full list of changes, you can view the updates at the Visual Studio Code updates page. Be sure to also review the May and June updates.

Thank you to contributors

If you would like to help make Azure Data Studio a great product, share any feedback or report issues through our Issues page. Our engineering team regularly goes through the untriaged issues and assigns them to monthly milestones so that you know we are working on them. Your votes on issues help us prioritize.

In addition to submitting issues, users can also contribute by submitting pull requests for potential quick fixes, and we welcome those submissions. Here is a shout-out to some of the customers who have submitted PRs that have been included in the product:

  • AlexFsmn for Feature: Ability to add connection name (#2332)
  • AlexFsmn for Disabled connection name input when connecting to a server (#2566)
  • philoushka for Center the icon (#2760)
  • anthonypants for Typo (#2775)
  • kstolte for Fix Invalid Configuration in Launch.json (#2789)
  • kstolte for Fixing a reference to SQL Ops Studio (#2788)

Contact us

If you have any feature requests or issues, please submit them to our GitHub issues page. For any questions, feel free to comment below, message us on Gitter, or tweet us @AzureDataStudio.

 


DBA essentials—SQL Server 2017 security, performance tuning, and more—inside out

Whether you're an experienced DBA with multiple certifications or just starting on your SQL Server DBA journey, you constantly face challenges in putting your organization's data to work. To turn the next-to-impossible into the practical, you need a comprehensive resource at your fingertips: SQL Server 2017 Administration Inside Out, by William Assaf, Randolph West, Sven Aelterman, and Mindy Curnutt. With this guide, you can get to know new and expanded SQL Server features like Columnstore, memory-optimized indexes, and Query Store, plus understand how to administer Microsoft Azure SQL Database or SQL Server 2017 virtual machines running on Azure.

Here's just some of what you'll find in the ultimate guide to SQL Server 2017 (spoiler alert: get free access to these chapters below):

Database server components

Even if you have a firm foundation in SQL Server configuration and administration, you want to keep the basics at hand so you can easily refresh your memory. SQL Server 2017 Administration Inside Out provides a solid foundation in the infrastructure that makes up a database and how it works. The guide also offers helpful tips and best practices every DBA should know, like:

  • Keep in mind that In-Memory OLTP requires RAM overhead that's two times the size of a memory-optimized object.
  • Disable power saving everywhere. Power-saving settings in your operating system (OS) can result in poor query performance. Turn on High Performance at the OS and BIOS levels.
  • Realize that Redundant Array of Independent Disks (RAID) shouldn't take the place of performing backups. It doesn't provide 100 percent protection from data loss.
  • Ensure all TCP/IP traffic to and from SQL Server is encrypted. If you're using the Shared Memory Protocol with applications that are all on the same server, this isn't required.

Securing the server and data

Recognizing that more frequent and complex cyberattacks are targeting your environment, SQL Server 2017 Administration Inside Out covers the security capabilities of SQL Server 2017 comprehensively. The guide helps you play defense to protect your data from attack and minimize the damage should you suffer a data breach. For example, you'll learn (or be reminded) that you can:

  • Prevent common SQL injection attacks by making sure that all data is escaped, sanitized, and validated before input and that all SQL Server queries are parameterized.
  • Use newer capabilities like Row-Level Security to restrict access through security policies based on group membership or execution context.
  • If an audit shuts down your SQL Server instance, start SQL Server in single-user mode or with minimal configuration and remove the offending audit.

Performance tuning SQL Server

Ensuring your database performs at top speed is one of your highest priorities. You want an accurate and complete understanding of performance tuning concepts, the objects typically associated with tuning SQL Server database performance, and the best practices that help your databases run at top speed. In SQL Server 2017 Administration Inside Out, you'll gain this critical knowledge, including:

  • The NOLOCK hint (or READ UNCOMMITTED isolation level) can return invalid data. Not only can uncommitted data be read, but committed data can be read twice or skipped. You also run the risk of returning corrupt data or finding the query has failed.
  • You can use the Query Store feature to help you analyze execution plan performance by looking at live Query Store data as it happens as well as at the history of statement execution. Query Store is one of the features that first appeared in Azure SQL Database and then moved to SQL Server on-premises.

All this info, all in one place

This is only a small sample of the in-depth information you can find in SQL Server 2017 Administration Inside Out. Take the book for a test drive by downloading a free custom excerpt, including the three full chapters previewed above.

With this excerpt, you'll also get an exclusive 50-percent discount code for the full 14-chapter e-book.

Download the free excerpt from SQL Server 2017 Administration Inside Out today.


New – Managed Databases for Amazon Lightsail

Amazon Lightsail makes it easy for you to get started with AWS. You choose the operating system (and optional application) that you want to run, pick an instance plan, and create an instance, all in a matter of minutes. Lightsail offers low, predictable pricing, with instance plans that include compute power, storage, and data transfer:

Managed Databases
Today we are making Lightsail even more useful by giving you the ability to create a managed database with a couple of clicks. This has been one of our top customer requests and I am happy to be able to share this news.

This feature is going to be of interest to a very wide range of current and future Lightsail users, including students, independent developers, entrepreneurs, and IT managers. We’ve addressed the most common and complex issues that arise when setting up and running a database. As you will soon see, we have simplified and fine-tuned the process of choosing, launching, securing, accessing, monitoring, and maintaining a database!

Each Lightsail database bundle has a fixed, monthly price that includes the database instance, a generous amount of SSD-backed storage, a terabyte or more of data transfer to the Internet and other AWS regions, and automatic backups that give you point-in-time recovery for a 7-day period. You can also create manual database snapshots that are billed separately.

Creating a Managed Database
Let’s walk through the process of creating a managed database and loading an existing MySQL backup into it. I log in to the Lightsail Console and click Databases to get started. Then I click Create database to move forward:

I can see and edit all of the options at a glance. I choose a location, a database engine and version, and a plan, enter a name, and click Create database (all of these options have good defaults; a single click often suffices):

We are launching with support for MySQL 5.6 and 5.7, and will add support for PostgreSQL 9.6 and 10 very soon. The Standard database plan creates a database in one Availability Zone with no redundancy; the High Availability plan also creates a presence in a second AZ, and is recommended for production use.

Database creation takes just a few minutes, the status turns to Available, and my database is ready to use:

I click on Database-Oregon-1, and I can see the connection details, and have access to other management information & tools:

I’m ready to connect! I create an SSH connection to my Lightsail instance, ensure that the mysql package is installed, and connect using the information above (read Connecting to Your MySQL Database to learn more):
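
The connection itself is a standard mysql client call; the endpoint, user name, and password placeholders below come from the connection details shown in the console:

# install the mysql client first if needed (for example, sudo apt-get install mysql-client on Ubuntu)
mysql -h <database-endpoint> -u <master-user-name> -p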

Now I want to import some existing data into my database. Lightsail lets me enable Data import mode in order to defer any backup or maintenance operations:

Enabling data import mode deletes any existing automatic snapshots; you may want to take a manual snapshot before starting your import if you are importing fresh data into an existing database.

I have a large (13 GB), ancient (2013-era) MySQL backup from a long-dead personal project; I download it from S3, uncompress it, and import it:
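
The import boils down to three familiar commands; the bucket, file, and database names below are placeholders:

aws s3 cp s3://my-bucket/old-project-backup.sql.gz .
gunzip old-project-backup.sql.gz
mysql -h <database-endpoint> -u <master-user-name> -p <database-name> < old-project-backup.sql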

I can watch the metrics while the import is underway:

After the import is complete I disable data import mode, and I can run queries against my tables:

To learn more, read Importing Data into Your Database.

Lightsail manages all routine database operations. If I make a mistake and mess up my data, I can use the Emergency Restore to create a fresh database instance from an earlier point in time:

I can rewind by up to 7 days, limited to when I last disabled data import mode.

I can also take snapshots, and use them later to create a fresh database instance:

Things to Know
Here are a couple of things to keep in mind when you use this new feature:

Engine Versions – We plan to support the two latest versions of MySQL, and will do the same for other database engines as we make them available.

High Availability – As is always the case for production AWS systems, you should use the High Availability option in order to maintain a database footprint that spans two Zones. You can switch between Standard and High Availability using snapshots.

Scaling Storage – You can scale to a larger database instance by creating and then restoring a snapshot.

Data Transfer – Data transfer to and from Lightsail instances in the same AWS Region does not count against the usage that is included in your plan.

Amazon RDS – This feature shares core technology with Amazon RDS, and benefits from our operational experience with that family of services.

Available Now
Managed databases are available today in all AWS Regions where Lightsail is available:

Jeff;