Amazon AppStream 2.0 – New Application Settings Persistence and a Quick Launch Recap

Amazon AppStream 2.0 gives you access to Windows desktop applications through a web browser. Thousands of AWS customers, including SOLIDWORKS, Siemens, and MathWorks, are already using AppStream 2.0 to deliver applications to their customers.

Today I would like to bring you up to date on some recent additions to AppStream 2.0, wrapping up with a closer look at a brand new feature that will automatically save application customizations (preferences, bookmarks, toolbar settings, connection profiles, and the like) and Windows settings between your sessions.

The recent additions to AppStream 2.0 can be divided into four categories:

User Enhancements – Support for time zone, locale, and language input, better copy/paste, and the new application persistence feature.

Admin Improvements – The ability to configure default application settings, control access to some system resources, copy images across AWS regions, establish custom branding, and share images between AWS accounts.

Storage Integration – Support for Microsoft OneDrive for Business and Google Drive for G Suite.

Regional Expansion – AppStream 2.0 recently became available in three additional AWS regions in Europe and Asia.

Let’s take a look at each item and then at application settings persistence….

User Enhancements
In June we gave AppStream 2.0 users control over the time zone, locale, and input methods. Once set, the values apply to future sessions in the same AWS region. This feature (formally known as Regional Settings) must be enabled by the AppStream 2.0 administrator as detailed in Enable Regional Settings for Your AppStream 2.0 Users.

In July we added keyboard shortcuts for copy/paste between your local device and your AppStream 2.0 sessions when using Google Chrome.

Admin Improvements
In February we gave AppStream 2.0 administrators the ability to copy AppStream 2.0 images to other AWS regions, simplifying the process of creating and managing global application deployments (to learn more, visit Tag and Copy an Image):

In March we gave AppStream 2.0 administrators additional control over the user experience, including the ability to customize the logo, color, text, and help links in the application catalog page. Read Add Your Custom Branding to AppStream 2.0 to learn more.

In May we added administrative control over the data that moves to and from the AppStream 2.0 streaming sessions. AppStream 2.0 administrators can control access to file upload, file download, printing, and copy/paste to and from local applications. Read Create AppStream 2.0 Fleets and Stacks to learn more.

In June we gave AppStream 2.0 administrators the power to configure default application settings (connection profiles, browser settings, and plugins) on behalf of their users. Read Enabling Default OS and Application Settings for Your Users to learn more.

In July we gave AppStream 2.0 administrators the ability to share AppStream 2.0 images between AWS accounts for use in the same AWS Region. To learn more, take a look at the UpdateImagePermissions API and the update-image-permissions command.
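
If you prefer to script image sharing, a minimal boto3 sketch might look like the following; the image name and account ID are placeholders:

import boto3

appstream = boto3.client("appstream", region_name="us-east-1")

# Share the image "my-base-image" with account 123456789012, allowing it
# to be used for fleets but not for image builders.
appstream.update_image_permissions(
    Name="my-base-image",
    SharedAccountId="123456789012",
    ImagePermissions={
        "allowFleet": True,
        "allowImageBuilder": False,
    },
)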

Storage Integration
Both of these launches provide AppStream 2.0 users with additional storage options for the documents that they access, edit, and create:

Launched in June, the Google Drive for G Suite support allows users to access files on a Google Drive from inside of their applications. Read Google Drive for G Suite is now enabled on Amazon AppStream 2.0 to learn how to enable this feature for an AppStream application stack.

Similarly, the Microsoft OneDrive for Business support that was launched in July allows users to access files stored in OneDrive for Business accounts. Read Amazon AppStream 2.0 adds support for OneDrive for Business to learn how to set this up.

 

Regional Expansion
In January we made AppStream 2.0 available in the Asia Pacific (Singapore) and Asia Pacific (Sydney) Regions.

In March we made AppStream 2.0 available in the Europe (Frankfurt) Region.

See the AWS Region Table for the full list of regions where AppStream 2.0 is available.

Application Settings Persistence
With the past out of the way, let’s take a look at today’s new feature, Application Settings Persistence!

As you can see from the launch recap above, AppStream 2.0 already saves several important application and system settings between sessions. Today we are adding support for the elements that make up the Windows Roaming Profile. This includes:

Windows Profile – The contents of C:\users\user_name\appdata.

Windows Profile Folder – The contents of C:\users\user_name.

Windows Registry – The tree of registry entries rooted at HKEY_CURRENT_USER.

This feature must be enabled by the AppStream 2.0 administrator. The contents of the Windows Roaming Profile are stored in an S3 bucket in the administrator’s AWS account, with an initial storage allowance (easily increased) of up to 1 GB per user. The S3 bucket is configured for Server Side Encryption with keys managed by S3. Data moves between AppStream 2.0 and S3 across a connection that is protected by SSL. The administrator can choose to enable S3 versioning to allow recovery from a corrupted profile.
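
If you want that extra safety net, enabling versioning on the settings bucket is a one-line API call. Here is a minimal boto3 sketch; the bucket name below is a placeholder for the bucket that AppStream 2.0 creates in your account:

import boto3

s3 = boto3.client("s3")

# Turn on versioning for the bucket that holds persisted application settings,
# so an earlier copy of a profile can be restored if the latest one is corrupted.
s3.put_bucket_versioning(
    Bucket="appstream-settings-bucket-example",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)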

Application Settings Persistence can be enabled for an existing stack, as long as it is running the latest version of the AppStream 2.0 Agent. Here’s how it is enabled when creating a new stack:

Putting multiple stacks in the same settings group allows them to share a common set of user settings. The settings are applied when the user logs in, and then persisted back to S3 when they log out.
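
For administrators who script their deployments, here is a hedged boto3 sketch of enabling application settings persistence at stack-creation time; the stack and settings group names are made up, and the exact parameter shape is documented in the CreateStack API reference:

import boto3

appstream = boto3.client("appstream", region_name="us-east-1")

# Create a stack with application settings persistence enabled. Stacks that
# share the same SettingsGroup also share the persisted user settings.
appstream.create_stack(
    Name="engineering-stack",
    ApplicationSettings={
        "Enabled": True,
        "SettingsGroup": "engineering-apps",
    },
)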

This feature is available now and AppStream 2.0 administrators can enable it today. The only cost is for the S3 storage consumed by the stored profiles, charged at the usual S3 prices.

Jeff;

PS – Follow the AWS Desktop and Application Streaming Blog to make sure that you know about new features as quickly as possible.

 

AWS X-Ray Now Supports Amazon API Gateway and New Sampling Rules API

My colleague Jeff first introduced us to AWS X-Ray almost 2 years ago in his post from AWS re:Invent. If you’re not already aware, AWS X-Ray helps developers analyze and debug everything from simple web apps to large and complex distributed microservices, both in production and in development. Since X-Ray became generally available in 2017, we’ve iterated rapidly on customer feedback and continued to make enhancements to the service like encryption with AWS Key Management Service (KMS), new SDKs and language support (Python!), open sourcing the daemon, and latency visualization tools. Today, we’re adding two new features:

    • Support for Amazon API Gateway, making it easier to trace and analyze requests as they travel through your APIs to the underlying services.
    • We also recently launched support for controlling sampling rules in the AWS X-Ray console and API.

Let me show you how to enable tracing for an API.

Enabling X-Ray Tracing

I’ll start with a simple API deployed to API Gateway. I’ll add two endpoints. One to push records into Amazon Kinesis Data Streams and one to invoke a simple AWS Lambda function. It looks something like this:

After deploying my API, I can go to the Stages sub console, and select a specific stage, like “dev” or “production”. From there, I can enable X-Ray tracing by navigating to the Logs/Tracing tab, selecting Enable X-Ray Tracing and clicking Save Changes.
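
The same toggle can be flipped programmatically. A minimal boto3 sketch (the REST API ID below is a placeholder) might look like this:

import boto3

apigateway = boto3.client("apigateway", region_name="us-east-1")

# Enable X-Ray tracing for the "dev" stage of an existing REST API.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    stageName="dev",
    patchOperations=[
        {"op": "replace", "path": "/tracingEnabled", "value": "true"},
    ],
)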

After tracing is enabled, I can hop over to the X-Ray console to look at my sampling rules in the new Sampling interface.

I can modify the rules in the console and, of course, with the CLI, SDKs, or API. Let’s take a brief interlude to talk about sampling rules.

Sampling Rules
The sampling rules allow me to customize, at a very granular level, the requests and traces I want to record. This allows me to control the amount of data that I record on-the-fly, across code running anywhere (AWS Lambda, Amazon ECS, Amazon Elastic Compute Cloud (EC2), or even on-prem) – all without having to rewrite any code or redeploy an application.

The default rule that is pictured above states that it will record the first request each second, and five percent of any additional requests. We talk about that one request each second as the reservoir, which ensures that at least one trace is recorded each second. The five percent of additional requests is what we refer to as the fixed rate. Both the reservoir and the fixed rate are configurable. If I set the reservoir size to 50 and the fixed rate to 10%, then if 100 requests per second match the rule, the total number of requests sampled is 55 requests per second.

Configuring my X-Ray recorders to read sampling rules from the X-Ray service allows the X-Ray service to maintain the sampling rate and reservoir across all of my distributed compute. If I want to enable this functionality, I just install the latest version of the X-Ray SDK and daemon on my instances. At the moment, only the GA SDKs are supported, with support for Ruby and Go on the way. With services like API Gateway and Lambda, I can configure everything right in the X-Ray console or API. There’s a lot more detail on this feature in the documentation, and I suggest taking the time to check it out.
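
To make the reservoir and fixed-rate math concrete, here is a hedged boto3 sketch of creating a rule like the one described above; the rule name and URL path are illustrative, and the full field list is in the CreateSamplingRule API reference:

import boto3

xray = boto3.client("xray", region_name="us-east-1")

# A rule with a reservoir of 50 traces per second plus a 10% fixed rate.
# If 100 matching requests arrive in a second, roughly 50 + 0.10 * 50 = 55
# of them are sampled, as described above.
xray.create_sampling_rule(
    SamplingRule={
        "RuleName": "checkout-api",  # illustrative rule name
        "Priority": 50,              # lower numbers are evaluated first
        "FixedRate": 0.10,
        "ReservoirSize": 50,
        "ServiceName": "*",
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "/checkout/*",    # only sample this part of the API
        "ResourceARN": "*",
        "Version": 1,
    }
)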

While I can, of course, use the sampling rules to control costs, the dynamic nature and the granularity of the rules is also extremely powerful for debugging production systems. If I know one particular URL or service is going to need extra monitoring I can specify that as part of the sampling rule. I can filter on individual stages of APIs, service types, service names, hosts, ARNs, HTTP methods, segment attributes, and more. This lets me quickly examine distributed microservices at 30,000 feet, identify issues, adjust some rules, and then dive deep into production requests. I can use this to develop insights about problems occurring in the 99th percentile of my traffic and deliver a better overall customer experience. I remember building and deploying a lot of ad-hoc instrumentation over the years, at various companies, to try to support something like this, and I don’t think I was ever particularly successful. Now that I can just deploy X-Ray and adjust sampling rules centrally, it feels like I have a debugging crystal ball. I really wish I’d had this tool 5 years ago.

Ok, enough reminiscing, let’s hop back to the walkthrough.

I’ll stick with the default sampling rule for now. Since we’ve enabled tracing and I’ve got some requests running, after about 30 seconds I can refresh my service map and look at the results. I can click on any node to view the traces directly or drop into the Traces sub console to look at all of the traces.

From there, I can see the individual URLs being triggered, the source IPs, and various other useful metrics.

If I want to dive deeper, I can write some filtering rules in the search bar and find a particular trace. An API Gateway segment has a few useful annotations that I can use to filter and group like the API ID and stage. This is what a typical API Gateway trace might look like.

Adding API Gateway support to X-Ray gives us end-to-end production traceability in serverless environments and sampling rules give us the ability to adjust our tracing in real time without redeploying any code. I had the pleasure of speaking with Ashley Sole from Skyscanner, about how they use AWS X-Ray at the AWS Summit in London last year, and these were both features he asked me about earlier that day. I hope this release makes it easier for Ashley and other developers to debug and analyze their production applications.

Available Now

Support for both of these features is available, today, in all public regions that have both API Gateway and X-Ray. In fact, X-Ray launched their new console and API last week so you may have already seen it! You can start using it right now. As always, let us know what you think on Twitter or in the comments below.

Randall

Extending AWS CloudFormation with AWS Lambda Powered Macros

Today I’m really excited to show you a powerful new feature of AWS CloudFormation called Macros. CloudFormation Macros allow developers to extend the native syntax of CloudFormation templates by calling out to AWS Lambda powered transformations. This is the same technology that powers the popular Serverless Application Model functionality, but the transforms run in your own accounts, on your own Lambda functions, and they’re completely customizable. CloudFormation, if you’re new to AWS, is an absolutely essential tool for modeling and defining your infrastructure as code (YAML or JSON). It is a core building block for all of AWS and many of our services depend on it.

There are two major steps for using macros. First, we need to define a macro, which, of course, we do with a CloudFormation template. Second, to use the created macro in our template we need to add it as a transform for the entire template or call it directly. Throughout this post, I use the terms macro and transform somewhat interchangeably. Ready to see how this works?

Creating a CloudFormation Macro

Creating a macro has two components: a definition and an implementation. To create the definition of a macro we create a CloudFormation resource of type AWS::CloudFormation::Macro that outlines which Lambda function to use and what the macro should be called.

Type: "AWS::CloudFormation::Macro"
Properties:
  Description: String
  FunctionName: String
  LogGroupName: String
  LogRoleARN: String
  Name: String

The Name of the macro must be unique throughout the region and the Lambda function referenced by FunctionName must be in the same region the macro is being created in. When you execute the macro template, it will make that macro available for other templates to use. The implementation of the macro is fulfilled by a Lambda function. Macros can be in their own templates or grouped with others, but you won’t be able to use a macro in the same template you’re registering it in. The Lambda function receives a JSON payload that looks like something like this:

{
    "region": "us-east-1",
    "accountId": "$ACCOUNT_ID",
    "fragment": { ... },
    "transformId": "$TRANSFORM_ID",
    "params": { ... },
    "requestId": "$REQUEST_ID",
    "templateParameterValues": { ... }
}

The fragment portion of the payload contains either the entire template or the relevant fragments of the template – depending on how the transform is invoked from the calling template. The fragment will always be in JSON, even if the template is in YAML.

The Lambda function is expected to return a simple JSON response:

{
    "requestId": "$REQUEST_ID",
    "status": "success",
    "fragment": { ... }
}

The requestId needs to be the same as the one received in the input payload, and if status contains any value other than success (case-insensitive), the changeset will fail to create. The fragment must contain valid CloudFormation JSON for the transformed template. Even if your function performed no action, it would still need to return the fragment for it to be included in the final template.

Using CloudFormation Macros


To use the macro we simply call out to Fn::Transform with the required parameters. If we want to have a macro parse the whole template we can include it in our list of transforms in the template the same way we would with SAM: Transform: [Echo]. When we go to execute this template the transforms will be collected into a changeset, by calling out to each macro’s specified function and returning the final template.

Let’s imagine we have a dummy Lambda function called EchoFunction, it just logs the data passed into it and returns the fragments unchanged. We define the macro as a normal CloudFormation resource, like this:

EchoMacro:
  Type: "AWS::CloudFormation::Macro"
  Properties:
    FunctionName: arn:aws:lambda:us-east-1:1234567:function:EchoFunction
    Name: EchoMacro

The code for the lambda function could be as simple as this:

def lambda_handler(event, context):
    print(event)
    return {
        "requestId": event['requestId'],
        "status": "success",
        "fragment": event["fragment"]
    }

Then, after deploying this function and executing the macro template, we can invoke the macro in a transform at the top level of any other template like this:

AWSTemplateFormatVersion: 2010-09-09
Transform: [EchoMacro, AWS::Serverless-2016-10-31]
Resources:
  FancyTable:
    Type: AWS::Serverless::SimpleTable

The CloudFormation service creates a changeset for the template by first calling the Echo macro we defined and then the AWS::Serverless transform. It will execute the macros listed in the transform in the order they’re listed.

We could also invoke the macro using the Fn::Transform intrinsic function which allows us to pass in additional parameters. For example:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MyS3Bucket:
    Type: 'AWS::S3::Bucket'
    Fn::Transform:
      Name: EchoMacro
      Parameters:
        Key: Value

The inline transform will have access to all of its sibling nodes and all of its child nodes. Transforms are processed from deepest to shallowest, which means top-level transforms are executed last. Since I know most of you are going to ask: no, you cannot include macros within macros – but nice try.

When you go to execute the CloudFormation template, it will ask you to create a changeset, and you can preview the output before deploying.
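
If you drive this from code rather than the console, the same flow applies: create a changeset, inspect the transformed template, then execute it. Here is a hedged boto3 sketch; the stack, changeset, and file names are placeholders, and CAPABILITY_AUTO_EXPAND acknowledges that the template will be transformed by macros:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("template-with-macro.yaml") as f:  # placeholder file name
    template_body = f.read()

# Templates that use macros are deployed through a changeset so the
# transformed output can be reviewed before anything is created.
cfn.create_change_set(
    StackName="echo-macro-demo",   # placeholder stack name
    ChangeSetName="initial",
    ChangeSetType="CREATE",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_AUTO_EXPAND"],
)

# After reviewing the changes (describe_change_set), execute the changeset:
# cfn.execute_change_set(StackName="echo-macro-demo", ChangeSetName="initial")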

Example Macros

We’re launching a number of reference macros to help developers get started and I expect many people will publish others. These four are the winners from a little internal hackathon we had prior to releasing this feature:

    • PyPlate – Allows you to inline Python in your templates (Jay McConnell – Partner SA)
    • ShortHand – Defines a short-hand syntax for common CloudFormation resources (Steve Engledow – Solutions Builder)
    • StackMetrics – Adds CloudWatch metrics to stacks (Steve Engledow and Jason Gregson – Global SA)
    • String Functions – Adds common string functions to your templates (Jay McConnell – Partner SA)

Here are a few ideas I thought of that might be fun for someone to implement:

If you end up building something cool I’m more than happy to tweet it out!

Available Now

CloudFormation Macros are available today, in all AWS regions that have AWS Lambda. There is no additional CloudFormation charge for macros; you are billed only the normal AWS Lambda function charges. The documentation has more information that may be helpful.

This is one of my favorite new features for CloudFormation and I’m excited to see some of the amazing things our customers will build with it. The real power here is that you can extend your existing infrastructure as code with code. The possibilities enabled by this new functionality are virtually unlimited.

Randall

SSMS 17.9 is now available

We are excited to announce the release of SQL Server Management Studio (SSMS) 17.9!

Download SSMS 17.9 and review the Release Notes to get started.

SSMS 17.9 provides support for almost all feature areas on SQL Server 2008 through the latest SQL Server 2017, which is now generally available.

In addition to enhancements and bug fixes, SSMS 17.9 comes with several new features:

  • ShowPlan improvements
  • Azure SQL support for vCore SKUs
  • Bug Fixes

View the Release Notes for more information.

ShowPlan improvements

Graphical Showplan now shows the new row mode memory grant feedback attributes when the feature is activated for a specific plan: IsMemoryGrantFeedbackAdjusted and LastRequestedMemory have been added to the MemoryGrantInfo query plan XML element.

For more on row mode memory grant feedback, view the Adaptive Query Processing documentation.

Image from Adaptive Query Processing

Azure SQL support for vCore SKUs

Added support for vCore SKUs in Azure DB creation. For more information on vCore, check out the full blog.

Image from Azure SQL DB blog

Bug fixes

In SSMS 17.9, there were many bug fixes.

  • Replication Monitor
    • Fixed an issue that was causing Replication Monitor (SqlMonitor.exe) not to start (User Voice item)
  • Import Flat File Wizard
    • Fixed the link to the help page for “Flat File Wizard” dialog
    • Fixed issue where the wizard did not allow changing the destination table when the table already existed: this allows users to retry without having to exit the wizard, delete the failed table, and then re-enter the information into the wizard (User Voice item)
  • Import/Export Data-Tier Application
    • Fixed an issue (in DacFx) that was causing the import of a .bacpac to fail with a message like “Error SQL72014: .Net SqlClient Data Provider: Msg 9108, Level 16, State 10, Line 1 This type of statistics is not supported to be incremental.” when dealing with tables with partitions defined and no indexes on the table
  • Intellisense
    • Fixed an issue where Intellisense completion was not working when using AAD with MFA.
  • Object Explorer
    • Fixed an issue where the “Filter Dialog” was displayed on random monitors instead of the monitor where SSMS was running (multi-monitor systems)
  • Azure SQL
    • Fixed an issue related to enumeration of databases in the “Available Databases” where “master” was not displayed in the dropdown when connected to a specific database.
    • Fixed an issue where trying to generate a script (“Data” or “Schema and Data”) was failing when connected to a SQL Azure DB using AAD with MFA.
    • Fixed an issue in the View Designer (Views) where it was not possible to select “Add Tables” from the UI when connected to a SQL Azure DB.
    • Fixed an issue where SSMS Query Editor was silently closing and reopening connections during MFA token renewal. This will prevent side effects unbeknownst to the user (like closing a transaction and never reopening again) from happening. The change adds the token expiration time to the properties window.
    • Fixed an issue where SSMS was not enforcing password prompts for imported MSA accounts for AAD with MFA login
  • Activity Monitor
    • Fixed an issue that was causing “Live Query Statistics” to hang when launched from Activity Monitor and SQL Authentication was used.
  • Microsoft Azure integration
    • Fixed an issue where SSMS only shows the first 50 subscriptions (Always Encrypted dialogs, Backup/Restore from URL dialogs, etc)
    • Fixed an issue where SSMS was throwing an exception (“Index out of range”) while trying to log on to a Microsoft Azure account which did not have any storage account (in Restore Backup from URL dialog)
  • Object Scripting
    • When scripting “Drop and Create”, SSMS now avoids generating dynamic T-SQL
    • When scripting a database object, SSMS now does not generate script to set database scoped configurations, if they are set to default values
  • Help
    • Fixed a long outstanding issue where “Help on Help” was not honoring the online/offline mode
    • When clicking on “Help | Community Projects and Samples” SSMS now opens the default browser that points to a Git page and shows no errors/warnings due to the old browser being used

To learn more about other bug fixes covered in this release, check the Release Notes.

Call to action

Try it out and let us know what you think! You can message us on Twitter at @SQLDataTools or reach out to Ken Van Hyning on Twitter at @SQLToolsGuy.

 

Chat with the Alexa Prize Finalists Today

The Alexa Prize is an annual competition designed to spur academic research and development in the field of conversational artificial intelligence. This year, students are working to build socialbots that can engage in a fun, high-quality conversation on popular societal topics for up to 20 minutes. In order to succeed at this task, the teams must innovate in a broad range of areas including knowledge acquisition, natural language understanding, natural language generation, context modeling, common-sense reasoning, and dialog planning. They use the Alexa Skills Kit (ASK) to construct their bot and to receive real-time feedback on its performance.

Last month the socialbots from Heriot-Watt University (Alana), Czech Technical University (Alquist), and UC Davis (Gunrock) were chosen as the finalists (watch the Twitch stream to learn more). The competition was tough, with points assigned for the potential scientific contribution to the field, the technical merit of the approach, the overall novelty of the idea, and the team’s ability to deliver on their vision.

Time to Chat
We’re now ready for the final round.

Step up to your nearest Alexa-powered device and say “Alexa, let’s chat!” You will be connected to one of the three socialbots (chosen at random) and can converse with it for as long as you would like. When you are through, say “Alexa stop,” and rate the socialbot when prompted. You can also provide additional feedback for the team. We’ll announce the winner at AWS re:Invent 2018 in Las Vegas.

Jeff;

PS – If you are ready to build your very own Alexa Skill, check out the Alexa Skills Kit Tutorials and subscribe to the Alexa Blogs.

 

The August release of SQL Operations Studio is now available

We are excited to announce the August release of SQL Operations Studio is now available.

Download SQL Operations Studio and review the Release Notes to get started.

SQL Operations Studio is a data management tool that enables you to work with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux. To learn more, visit our GitHub.

SQL Operations Studio was announced for Public Preview on November 15th at Connect(), and this August release is the ninth major update since the announcement. If you missed it, the July release announcement is available here.

Highlights for this release include the following.

  • Announcing the SQL Server Import extension
  • SQL Server Profiler Session management
  • New community extension: First responder kit
  • Quality of Life improvements: Connection strings
  • Bug bash galore

For complete updates, refer to the Release Notes.

Announcing the SQL Server Import extension

It all started from a simple idea: Take the #1 most used wizard in SSMS in the past year and bring this wizard to SQL Operations Studio. When we first released our Wizard and Dialog extensibility APIs in June, this was the perfect candidate to test our wizards and highlight to the community that these UI components are ready to incorporate in community extensions.

To provide some background, the Import Flat File Wizard was first released and announced in October 2017 in SSMS 17.3 (shameless plug alert: coincidentally my first SQL Server blog post and first project at Microsoft). Outside of being featured in a Channel9 video, the wizard did not receive any additional marketing. Fast forward a few months, and it was suddenly the #1 most used wizard in SSMS. How did this happen?

A very common scenario for SQL Server users is that they simply want to take a .txt or .csv file and import it into their SQL database as a table. As much as we love the ever-reliable Import and Export Wizard, for users unfamiliar with it there are many configuration options that can make the first experience difficult. If a user simply wants to import a text file, how can we make a simple scenario easier? By creating a whole new wizard, of course!

The Import Flat File Wizard utilizes a Microsoft Research framework known as Program Synthesis using Examples (PROSE) to import .txt and .csv files into a SQL table. It is a powerful framework for data wrangling, and it is the same technology that powers Flash Fill in Microsoft Excel and has been featured in many publications and demos led by Sumit Gulwani. This technology turns Import Flat File into a six-click experience, from selecting a file to importing it into your database. Clearly, incorporating PROSE into everyday database tasks delights our users, and we will continue to invest in creating PROSE experiences.

Logically, it made perfect sense to have an AI-powered feature be our first wizard experience in SQL Operations Studio, but our engineers were at full capacity, so we had to be a little creative to make this possible.

Every year since Satya Nadella became CEO, Microsoft has held a global, company-wide hackathon where employees spend 3 days working on any project, ranging from Hack for Good projects and VP-sponsored challenges to a random idea from the drive back home. This was the perfect opportunity to pitch bringing the Import Flat File Wizard to SQL Operations Studio, while also promoting cross-platform and open source development to our fellow co-workers.

By the time of the Hackathon, we had 4 interns and 7 full-time employees signed up for the project. More importantly, we asked why external team members chose our project, and we were blown away by the passion the interns and external team members had for SQL Server, and how they wanted to work on projects to improve SQL Server user experience.

Starting from mockups featured on PowerPoint slides, we shared the vision of the project while quickly onboarding new team members to our tech stack. It was not the most productive first day, but we did end up finding our rhythm. Using this momentum, we iterated quickly and were nailing our checkpoint sync-ups throughout the day, but there was still a lot to get done. However, with one hour to go before the Hackathon tents closed, we completed the first SQL Server Import experience in SQL Operations Studio end-to-end. Very proud of this team for getting a shippable deliverable within the allocated Hackathon time.

Our intern, Amir Omidi, worked on the fit and finish for the wizard for the remainder of his internship, and we are grateful for his hard work.

Now, we are ready to share this extension with the community. You can get this extension from the Extensions Manager. This feature not only brings the same simplicity as the SSMS wizard, but also brings this experience cross-platform to our macOS and Linux users. You can start the wizard with the same right click experience or press Ctrl + I.

Overall, this project taught us several things:

  • Our SQL Server users love AI-assisted features, and this is the first of many AI experiences in SQL Operations Studio.
  • Interns are very talented. Invest in their growth.
  • Keep things simple. Bring our users with us on our journey.

If you have ideas of what you would like to see in this extension, let us know through our community feedback. We look forward to bringing more PROSE experiences into SQL Operations Studio in the future.

SQL Server Profiler Session management

Since the June release, we have been making improvements with SQL Server Profiler. We are excited to announce the Profiler extension now supports session management. With session management, you can now configure your most popular sessions as you can in SSMS.

To try out this feature, you will need to make an active connection to a SQL Server instance. You can then launch Profiler by clicking on a server or database in the Object Explorer and pressing Alt + P.

This will pop up a New Session dialog as shown in the gif. Here you can give it an easy-to-remember name like Profiler and hit Create. If you don't want to create a new Profiler session, simply press Cancel.

To select the session you just created, simply click on the dropdown next to Start/Stop and select Profiler.

You can now start profiling your SQL Server events. Note that there is also a new Create button where you can pop up the Create Session dialog.

With template support released last month and session support in this release, we hope to continue to improve the Profiler extension for all the avid Profiler users in the SQL world. A big thank you to Madeline MacDonald for her hard work in shipping Profiler over the course of her internship.

New community extensions: First Responder Kit

Continuing our extensibility story, our marketplace now includes Brent Ozar's First Responder Kit. For those unfamiliar with the First Responder Kit, this toolkit helps users understand why their SQL Server is down or slow. Specifically, there are five main scripts featured:

  • sp_blitz: Overall health check
  • sp_blitzcache: Most resource-intensive queries
  • sp_blitzfirst: Why is server slow
  • sp_blitzindex: Indexes missing or slow
  • sp_blitzwho: Queries currently running

To leverage these features, you will first need an internet connection. Then, open the command palette with Ctrl+Shift+P and type > first responder kit: import to see a list of scripts to import. Then select the script with arrow keys and press enter.

Once the scripts are loaded to the database, you can run the scripts by again opening command palette and type > first responder kit: run to view the list of available scripts to run. Then select the script with arrow keys and press enter.

A big thank you to Drew Skwiers-Koballa for using our extensibility APIs to create a SQL Operations Studio extension. Also, thank you to the Brent Ozar Unlimited team for making these awesome scripts easily available to the community.

In addition to having a great new extension, Drew shared his story for creating a SQL Operations Studio extension through a detailed blog post. If you are interested in leveraging extensions APIs or have a great idea for an extension, we would highly recommend checking out his blog.

Quality of Life improvements: Connection Strings

As requested by the community, we have also made it easier for you to handle connection strings in SQL Operations Studio.

Generate Connection String

If you need to quickly generate a connection string, you can follow these three steps:

  1. Open a query editor with an active connection.
  2. Open Command Palette (Ctrl+Shift+P), and type Get Current Connection String and then press Enter.
  3. Copy connection string from notification pop-up.

Note: Password will be removed from the returned string.

You can now use or share the connection string.

Populate info from Connection String

If you have a valid connection string, such as one from the Azure Portal, you can now copy it and paste it into the connection dialog, and the fields will be auto-populated based on the connection string.

Bug bash galore

In addition to the new features, we dedicated time to fix many of the top user reported bugs.

To highlight high impact ones:

  • Cursor position no longer loses context when switching between tabs #1744
  • Script As now auto-connects to the server connection. #825
  • .sql files now are associated with SQL Operations Studio #1836

All fixed customer reported issues:

  • Parse SQL in a Query Editor window by using the Parse Syntax command
  • Save edit data scroll position when switching tabs #2129
  • View as Chart options are cut off at the bottom #1497
  • Cancel change connection disconnects current connection #1474
  • Bug: Error message when saving Excel file second (and subsequent) time #1748
  • Update document icon for Dashboard and Profiler documents #2107
  • SQL Tab DB Icon is red #387
  • Added more saveAsCsv options #2099
  • Feature Suggestion: Get Connection String for existing connection #1620
  • Agent: Enabled button to import queries from sql files #2042
  • Copy from query results grid is off by 1 column #1985
  • Add VS Code version to About dialog #1998
  • double-click not selecting @ in variable name #143
  • Typing N' autocompletes to N'' #1850
  • Results Grid Row Indicator Zero Based #2152
  • Fix the decimal separator #1317
  • SelectBox doesn’t change color when disabled #1624
  • Save as JSON/EXCEL/CSV not work #1728
  • Shell/Dashboard: Main viewlet icons are draggable and can crash the app #1524
  • Can’t use Ctrl+C shortcut to copy from result pane #2091
  • Updating causes application icon to be removed/replaced in Windows #1285
  • Not able to expand/collapse remote file browser folder by clicking name #1578
  • sqlops.desktop [Desktop Entry] – redundant value for Name & Comment #1278
  • Edit data: cell doesn’t revert to original value on hitting Escape key #1782

Contact us

If you have any feature requests or issues, please submit to our GitHub issues page. For any questions, feel free to comment below, message us on Gitter, or tweet us @SQLOpsStudio.

Cloud data and AI services training roundup August 2018

To help you stay up to date on online training opportunities, we're releasing a monthly list of the latest free Data and Artificial Intelligence (AI) sessions in one convenient post.

SQL Server

Build modern applications using the language of your choice, on-premises and in the cloud, now on Windows, Linux, and Docker containers.

  • Prepare for Windows Server 2008 and SQL Server 2008 End of Support
    Support for SQL Server 2008/2008 R2 and Windows Server 2008 will end in July 2019 and January 2020, respectively, which means you'll no longer receive security patches for these versions. When you join this session, you'll learn how to migrate your applications and data, avoid business disruptions, and adopt the most current security technologies. You will also receive guidance for your migration and find resources to help you move quickly.

Azure Database services for PostgreSQL and MySQL

Azure Database Services for PostgreSQL and MySQL provide fully managed, enterprise-ready community PostgreSQL/MySQL database as a service. These community editions help you easily lift and shift to the cloud, using languages and frameworks of your choice. On top of that, you get built-in high availability and capability to scale in seconds, helping you easily adjust to changes in customer demands.

  • How Open Source Database engines help you migrate to Azure
    Learn how to take advantage of fully managed, enterprise-ready PostgreSQL and MySQL community database engines. Join us as we cover how to use Azure Database Migration Service and what incentives are in place to help you in your migration journey.

Azure Cosmos DB

Azure Cosmos DB offers the first globally distributed, multi-model database service for building planet-scale apps.

  • Controlling your application experience with Azure Cosmos DB's consistency models
    The ability to control your application experience by changing your consistency model has been lacking until now. Azure Cosmos DB offers five well-defined and preconfigured consistency models, helping you navigate the tradeoffs between data consistency and app availability. In this session, learn the key differences between the five consistency models, which applications are best suited for each model, and how to configure the models to ensure high performance.

Big Data and analytics

Deliver better experiences and make better decisions by analyzing massive amounts of data in real time. Get the insight you need to deliver intelligent actions that improve customer engagement, increase revenue, and lower costs.

  • Making R-based analytics easier and more scalable
    R is an increasingly popular programming language for running predictive analytics workloads. If you are looking to scale out R-based advanced analytics to big data, Azure Databricks starts in seconds, integrates with RStudio, and automatically executes R workloads at unprecedented scale across single or multiple nodes. Join us to see how to get the ideal dataset for your needs and a detailed demonstration of the entire solution.

The July release of SQL Operations Studio is now available

We are excited to announce the July release of SQL Operations Studio is now available.

Download SQL Operations Studio and review the Release Notes to get started.

SQL Operations Studio is a data management tool that enables you to work with SQL Server, Azure SQL DB and SQL DW from Windows, macOS, and Linux. To learn more, visit our GitHub.

SQL Operations Studio was announced for Public Preview on November 15th at Connect(), and this July release is the eighth major update since the announcement. If you missed it, the June release announcement is available here.

Highlights for this release include the following.

  • SQL Server Agent preview extension – Job configuration support
  • SQL Server Profiler preview extension – Improvements
  • Combine Scripts Extension
  • Wizard and Dialog Extensibility
  • Social content
  • Fix GitHub Issues

For complete updates, refer to the Release Notes.

SQL Server Agent configuration

As part of our story of bringing over SSMS features and improving user experience, we are excited to introduce SQL Agent configuration support.

Summary of changes include:

  • Added view of Alerts, Operators, and Proxies and icons on left pane
  • Added dialogs for New Job, New Job Step, New Alert, and New Operator
  • Added Delete Job, Delete Alert, and Delete Operator (right-click)
  • Added Previous Runs visualization
  • Added Filters for each column name

In addition to jobs, users can now view Alerts, Operators, and Proxies through the icons on the left pane as demonstrated in the gif above.

We also made several improvements to the Job View. The Previous Runs visualization lets a user quickly see a job's history of past runs and whether they passed or failed.

This release also made it easier to find specific jobs in a large list of jobs. Imagine you have a list of 100+ jobs and you only want to see the failed ones. Now you can, using the column filter option shown in the gif below.

With all the improvements in Views, we have added new dialogs so that users can now add Jobs, Alerts, and Operators without having to go to SSMS. To open each dialog, click New Job above each respective view.

For all the SQL Agent enthusiasts out there, we would love for you to try out the new SQL Server Agent experience and let us know what you like and what is still missing for you to use Agent day to day. As part of doing our engineering out in the open, we need your feedback so that we can create experiences that empower you to do your job (pun intended).

To learn more about SQL Server Agent, check out the documentation.

SQL Server Profiler improvements

With the release of the SQL Server Profiler extension last month, our team has been working hard on improvements, especially making it quicker to launch Profiler.

Summary of changes include:

  • Added Hotkeys to quickly launch and start/stop Profiler
  • Added 5 Default Templates to view Extended Events
  • Added Server/Database connection name
  • Added support for Azure SQL Database instances
  • Added suggestion to exit Profiler when the tab is closed and Profiler is still running

As seen in this gif, you can quickly get Profiler open after making a server/database connection. With this release, we added Keyboard Shortcuts to Launch Profiler (Windows: Alt + P Mac: Ctrl+ALT+P) and Start/Stop Profiler (Windows: Alt + S Mac: Ctrl+ALT+S). From our user survey, the highest priority for users is to be able to start Profiling as quickly as possible. Now with two keyboard strokes, you can start Profiler.

In addition, Profiler now has added Default templates for five different views: Standard, TSQL, Tuning, TSQL_Locks, and TSQL_Duration. When you click on each one, a different list of columns will generate in your Profiler view so that you can focus on the areas that you are investigating. At the moment, it will reset the view each time.

In addition, each Profiler tab will show the server/database the Profiler instance is connected to. You can see the name in the top right of the above screenshot, which is localhost/Adventureworks2014.

Please let us know what you think and what you would like to see in Profiler.

Combine Scripts Extension

We have a new community extension published in our Extensions Manager. Created by Cobus Kruger, the Combine Scripts Extension for SQL Operations Studio is now available.

From the extension description: Ever needed to execute several scripts spread over several folders? Now you can select several files and folders, right click and click Combine Scripts, and generate a single combined file to execute or use any way you choose.

For those new to extensions, here are the instructions to access the Extensions Manager and download the Combine Scripts extension. For this extension, in particular, the install button will take you to a download link for the VSIX package. Download the VSIX, and then click File -> Install Extension from VSIX Package.

Dialog and Wizard extensibility

With this release, we are continuing to provide more opportunities for extension authors, and we highly encourage you to participate. The highlight for this release is that extension authors can now incorporate Dialogs and Wizards in their extensions.

The differences between using dialogs and wizards are very similar to SSMS. Generally, use Wizards for step-by-step scenarios, and use dialogs for most other cases.

Extension authors can see the full list of Dialog and Wizard APIs.

To see this in action, check out our sample extension that includes this code.

We are excited to see what our extension authors can come up with using these new extensibility points. If you aren't an extension author but have ideas in mind, please feel free to share them on Twitter or GitHub Issues.

Social content

Over the past month, we have seen a lot of great content about SQL Operations Studio as we monitored social media. We highly encourage the community that if you love this tool, consider using this tool in demos and blog posts. We will also make sure to share any of your content with the community through our Twitter handle (@sqlopsstudio).

If you would like to use SQL Operations Studio at sessions like SQL Saturdays or PASS Summit, feel free to reach out to our team and we can work with you. If there are any demo blockers, please submit an issue on our GitHub Issues. Our engineers will help unblock your scenarios.

With the launch of the Data Double-Click channel, our Principal PM Lead, Vicky Harp, discussed SQL Operations Studio with Scott Klein. Check out the conversation below.

In addition, Vicky was also interviewed by Joey D'Antoni for Redmond Mag, covering the current state of SQL Server Tools development.

SQL Ops Studio also had a presence at OSCON in Portland this year, where Shayne Boyer shared SQL Operations Studio and mssql-cli.

Fixed GitHub Issues

Here is a summary of issues addressed:

  • #728 No response to Add Connection on macOS
  • #1718 Unable to connect to any data source
  • #1713 Number of rows affected
  • #1843 Better Table organization
  • #1847 MFA Login to Azure SQL Databases
  • #1845 Bug Scroll change tab query
  • #1612 Results grid text display is messed up by international characters
  • #1749 BUG: HTML data in a column gets interpreted
  • #1830 Setting iconPath in ButtonComponent after component() is called does not change icon
  • #1789 Extensibility: if you add a connection provider uninstall will never remove it from the list
  • #1799 Top 10 DB Size chart does not work on case-sensitive instances
  • #1724 Extension dialogs have stopped working
  • #1719 TypeError when Connecting to Server
  • #1693 Backup dialog: File browser UI is broken
  • #1817 Error de Ortografia
  • #1791 Sqlops Extensions: queryeditor.connect() connects to the target database, but UI does not show the editor is connected
  • #1814 d.ts typo causing implicit ‘any’ type definition

Contact us

If you have any feature requests or issues, please submit to our GitHub issues page. For any questions, feel free to comment below, message us on Gitter, or tweet us.

 

Red Hat OpenStack Platform 13: five things you need to know about networking

Red Hat OpenStack Platform 13, based on the upstream Queens release, is now Generally Available. Of course this version brings in many improvements and enhancements across the stack, but in this blog post I’m going to focus on the five biggest and most exciting networking features found in this latest release.

Photo by Franck V. on Unsplash

ONE: Overlay network management – bringing consistency and better operational experience

Offering solid support for network virtualization was always a priority of ours. Like many other OpenStack components, the networking subsystem (Neutron) is pluggable so that customers can choose the solution that best fits their business and technological requirements. Red Hat OpenStack Platform 13 adds support for Open Virtual Network (OVN), a network virtualization solution which is built into the Open vSwitch (OVS) project. OVN supports the Neutron API, and offers a clean and distributed implementation of the most common networking capabilities such as bridging, routing, security groups, NAT, and floating IPs. In addition to OpenStack, OVN is also supported in Red Hat Virtualization (available with Red Hat Virtualization 4.2 which was announced earlier this year), with support for Red Hat OpenShift Container Platform expected down the road. This marks our efforts to create consistency and a more unified operational experience between Red Hat OpenStack Platform, Red Hat OpenShift, and Red Hat Virtualization.     

OVN was available as a technology preview feature with Red Hat OpenStack Platform 12, and is now fully supported with Red Hat OpenStack Platform 13. OVN must be enabled as the overcloud Neutron backend from Red Hat OpenStack Platform director at deployment time, as the default Neutron backend is still ML2/OVS. Also note that migration tooling from ML2/OVS to OVN is not supported with Red Hat OpenStack Platform 13; it is expected in a future release, so OVN is recommended only for new deployments.

TWO: Open source SDN Controller

OpenDaylight is a flexible, modular, and open software-defined networking (SDN) platform, which is now fully integrated and supported with Red Hat OpenStack Platform 13. The Red Hat offering combines carefully selected OpenDaylight components that are designed to enable the OpenDaylight SDN controller as a networking backend for OpenStack, giving it visibility into, and control over, OpenStack networking, utilization, and policies.

OpenDaylight is co-engineered and integrated with Red Hat OpenStack Platform, including Red Hat OpenStack Platform director for automated deployment, configuration and lifecycle management.

The key OpenDaylight project used in this solution is NetVirt, offering support for the OpenStack Neutron API on top of OVS. For telecommunication customers, this support extends to OVS-DPDK implementations. Also, as a technology preview, customers can leverage OpenDaylight with OVS hardware offload on capable network adapters to offload the virtual switch data path processing to the network card, further optimizing the server footprint.

 

Figure 1. OpenStack and OpenDaylight architecture

THREE: Cloud ready load balancing as a service

Load balancing is a fundamental service of any cloud. It is a key element for enabling automatic scaling and availability of applications hosted in the cloud, and is required both for “three tier” apps and for emerging cloud-native, microservices-based app architectures.

During the last few development cycles, the community has worked on a new load balancing as a service (LBaaS) solution based on the Octavia project. Octavia provides tenants with a load balancing API, as well as implements the delivery of load balancing services via a fleet of service virtual machine instances, which it spins up on demand. With Red Hat OpenStack Platform 13, customers can use the OpenStack Platform director to easily deploy and setup Octavia and expose it to the overcloud tenants, including setting up a pre-created, supported and secured Red Hat Enterprise Linux based service VM image.
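
Once Octavia is deployed and exposed to tenants, the load balancing API can be driven like any other OpenStack service. As a rough illustration using the openstacksdk client (the cloud name and subnet ID are placeholders, and details can vary by release), creating a simple HTTP load balancer might look like this:

import openstack

# Connect using credentials from clouds.yaml or environment variables.
conn = openstack.connect(cloud="overcloud")

# Create a load balancer with a VIP on a tenant subnet. Octavia spins up the
# service VM(s) that implement it behind the scenes.
lb = conn.load_balancer.create_load_balancer(
    name="web-lb", vip_subnet_id="PRIVATE_SUBNET_ID")  # placeholder subnet ID

# In a real script, wait for the load balancer to reach ACTIVE before
# adding the listener and pool below.
listener = conn.load_balancer.create_listener(
    name="web-listener", protocol="HTTP", protocol_port=80,
    load_balancer_id=lb.id)
pool = conn.load_balancer.create_pool(
    name="web-pool", protocol="HTTP", lb_algorithm="ROUND_ROBIN",
    listener_id=listener.id)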

Figure 2. Octavia HTTPS traffic flow through to a pool member

FOUR: Integrated networking for OpenStack and OpenShift

OpenShift Container Platform, Red Hat’s enterprise distribution of Kubernetes optimized for continuous application development, is infrastructure independent. You can run it on public cloud, virtualization, OpenStack or anything that can boot Red Hat Enterprise Linux. But in order to run Kubernetes and application containers, you need control and flexibility at scale on the infrastructure level. Many of our customers are looking into OpenStack as a platform to expose VM and bare metal resources for OpenShift to provide Kubernetes clusters to different parts of the organization – nicely aligning with the strong multi-tenancy and isolation capabilities of OpenStack as well as its rich APIs.     

As a key contributor to both OpenStack and Kubernetes, Red Hat is shaping this powerful combination so that enterprises can not only deploy OpenShift on top of OpenStack, but also take advantage of the underlying infrastructure services exposed by OpenStack. A good example of this is networking integration. Out of the box, OpenStack provides overlay networks managed by Neutron. However, OpenShift, based on Kubernetes and the Container Network Interface (CNI) project, also provides overlay networking between container pods. This results in two unrelated network virtualization stacks that run on top of each other, making the operational experience, as well as the overall performance of the solution, less than optimal. With Red Hat OpenStack Platform 13, Neutron was enhanced so that it can serve as the networking layer for both OpenStack and OpenShift, allowing a single network solution to serve both container and non-container workloads. This is done through project Kuryr and kuryr-kubernetes, a CNI plugin that provides OpenStack networking to Kubernetes objects.

Customers will be able to take advantage of Kuryr with an upcoming Red Hat OpenShift Container Platform release, where we will also release openshift-ansible support for automated deployment of Kuryr components (kuryr-controller, kuryr-cni) on OpenShift Master and Worker nodes.   

Figure 3. OpenShift and OpenStack

FIVE: Deployment on top of routed networks

As data center network architectures evolve, we are seeing a shift away from L2-based network designs towards fully L3 routed fabrics in an effort to create more efficient, predictable, and scalable communication between end-points in the network. One such trend is the adoption of leaf/spine (Clos) network topology where the fabric is composed of leaf and spine network switches: the leaf layer consists of access switches that connect to devices like servers, and the spine layer is the backbone of the network. In this architecture, every leaf switch is interconnected with each and every spine switch using routed links. Dynamic routing is typically enabled throughout the fabric and allows the best path to be determined and adjusted automatically. Modern routing protocol implementations also offer Equal-Cost Multipathing (ECMP) for load sharing of traffic between all available links simultaneously.

Originally, Red Hat OpenStack Platform director was designed to use shared L2 networks between nodes. This significantly reduces the complexity required to deploy OpenStack, since DHCP and PXE booting are simply done over a shared broadcast domain. This also makes the network switch configuration straightforward, since typically there is only a need to configure VLANs and ports, but no need to enable routing between all switches. This design, however, is not compatible with L3 routed network solutions such as the leaf/spine network architecture described above.

With Red Hat OpenStack Platform 13, director can now deploy OpenStack on top of fully routed topologies, utilizing its composable network and roles architecture, as well as a DHCP relay to support provisioning across multiple subnets. This provides customers with the flexibility to deploy on top of L2 or L3 routed networks from a single tool.


Learn more

Learn more about Red Hat OpenStack Platform:


For more information on Red Hat OpenStack Platform and Red Hat Virtualization contact your local Red Hat office today!

SQL Server 2008 end of support is the first step to tomorrow’s database

Today, Takeshi Numoto blogged about the upcoming SQL Server 2008 and 2008 R2 end of support. If you've been thinking about what to do with your SQL Server 2008 and 2008 R2 databases, you're not alone. These databases reach end of support on July 9, 2019, and many organizations have started planning in earnest for this milestone.

Which brings us to Will, a database admin at that development shop downtown. Bucking the job-hopping trends of today, Will's been a stalwart for his company, wrangling data for them since the mid-2000s. He remembers leading the installation of SQL Server 2008. Back then, connections were speedy, requests processed swiftly. Security, efficiency, and good clean design existed in abundance; there was a place for every byte, and every byte was in its place. App developers loved Will because he made sure their data was right where they needed it, when they needed it. What an idyllic, peaceful time!

A lot has happened in the decade since. Over the years, Will's information transfer network has grown and evolved, forking and pooling and spidering as it has had to make new, less-efficient references. The schema, once so clear, is now muddied and confusing, obscuring paths for formerly responsive queries. Latency abounds. And lurking in every shadow? The threat of unidentified, potentially insecure, rogue requests for information. Those developers aren't nearly so happy anymore. Ugh.

Fortunately, hope is not lost for Will. He can restore the organizational marvel that was yesterday's landscape, and layer on even more improvements. Yes, modernization to SQL Server 2017 or Azure SQL Database must be undertaken carefully, with planning and analysis and meticulous list-making. But it's a journey worth taking. And luckily for Will, and for you, there's a free database migration guide that provides step-by-step instructions for getting from here to there.

Plus, Will's options aren't limited to a single choice. The new SQL Server runs on numerous hosting environments and operating systems: on-premises or in the cloud, virtual machine or Azure data services, Windows or Linux. As soon as it's time to reroute those data streams, clear away the puddles, and brighten up the place, we're here to help.

End of support is coming soon

As mentioned, SQL Server 2008 and 2008 R2 reach the end of support on July 9, 2019. We know it can be difficult to upgrade everything before end-of-support deadlines like this. Thus, we're offering extended security updates for SQL Server 2008 when you rehost your database in Azure Virtual Machines, with no application code changes needed. You'll gain the critical patches you need to help keep your data safe for three more years after the end-of-support deadline, giving you time to plan and implement your next move. Find out all the details you need in the end-of-support blog post.

All this is to say, the future of data is bright for Will, his data-loving development team, and you.

Get started today