Red Hat OpenStack Platform 13: five things you need to know about networking

Red Hat OpenStack Platform 13, based on the upstream Queens release, is now Generally Available. Of course this version brings many improvements and enhancements across the stack, but in this blog post I’m going to focus on the five biggest and most exciting networking features found in this latest release.

Photo by Franck V. on Unsplash

ONE: Overlay network management – bringing consistency and better operational experience

Offering solid support for network virtualization was always a priority of ours. Like many other OpenStack components, the networking subsystem (Neutron) is pluggable so that customers can choose the solution that best fits their business and technological requirements. Red Hat OpenStack Platform 13 adds support for Open Virtual Network (OVN), a network virtualization solution which is built into the Open vSwitch (OVS) project. OVN supports the Neutron API, and offers a clean and distributed implementation of the most common networking capabilities such as bridging, routing, security groups, NAT, and floating IPs. In addition to OpenStack, OVN is also supported in Red Hat Virtualization (available with Red Hat Virtualization 4.2 which was announced earlier this year), with support for Red Hat OpenShift Container Platform expected down the road. This marks our efforts to create consistency and a more unified operational experience between Red Hat OpenStack Platform, Red Hat OpenShift, and Red Hat Virtualization.     

OVN was available as a technology preview feature with Red Hat OpenStack Platform 12, and is now fully supported with Red Hat OpenStack Platform 13. OVN must be enabled as the overcloud Neutron backend from Red Hat OpenStack Platform director at deployment time, as the default Neutron backend is still ML2/OVS. Also note that migration tooling from ML2/OVS to OVN is not supported in Red Hat OpenStack Platform 13 and is expected in a future release, so OVN is recommended for new deployments only.
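
As a rough sketch of what this looks like in practice, enabling OVN comes down to passing the OVN environment file shipped with the director templates to the overcloud deploy command. The path below reflects the OSP 13 template tree and the extra overrides file is purely illustrative; verify both against your installed templates and the product documentation:

# Deploy (or redeploy) the overcloud with OVN as the Neutron backend instead of ML2/OVS
openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovn-ha.yaml \
  -e ~/my-site-overrides.yaml   # hypothetical file holding your usual network/role parameters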

TWO: Open source SDN Controller

OpenDaylight is a flexible, modular, and open software-defined networking (SDN) platform, which is now fully integrated and supported with Red Hat OpenStack Platform 13. The Red Hat offering combines carefully selected OpenDaylight components that are designed to enable the OpenDaylight SDN controller as a networking backend for OpenStack, giving it visibility into, and control over, OpenStack networking, utilization, and policies.

OpenDaylight is co-engineered and integrated with Red Hat OpenStack Platform, including Red Hat OpenStack Platform director for automated deployment, configuration and lifecycle management.

The key OpenDaylight project used in this solution is NetVirt, offering support for the OpenStack Neutron API on top of OVS. For telecommunication customers this support extends to OVS-DPDK implementations. Also, as a technology preview, customers can leverage OpenDaylight with OVS hardware offload on capable network adapters to offload the virtual switch data path processing to the network card, further optimizing the server footprint.

 


THREE: Cloud ready load balancing as a service

Load balancing is a fundamental service of any cloud. It is essential for enabling automatic scaling and availability of applications hosted in the cloud, and is required both for traditional “three tier” apps and for emerging cloud-native, microservices-based application architectures.

During the last few development cycles, the community has worked on a new load balancing as a service (LBaaS) solution based on the Octavia project. Octavia provides tenants with a load balancing API, as well as implements the delivery of load balancing services via a fleet of service virtual machine instances, which it spins up on demand. With Red Hat OpenStack Platform 13, customers can use the OpenStack Platform director to easily deploy and set up Octavia and expose it to the overcloud tenants, including setting up a pre-created, supported, and secured Red Hat Enterprise Linux based service VM image.
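
Once Octavia is deployed, tenants consume it through the standard load balancer CLI. The following is a minimal sketch; the subnet name, addresses, and resource names are illustrative:

# Create a load balancer on an existing tenant subnet, then add an HTTP listener, pool, and member
openstack loadbalancer create --name web-lb --vip-subnet-id private-subnet
openstack loadbalancer listener create --name web-listener --protocol HTTP --protocol-port 80 web-lb
openstack loadbalancer pool create --name web-pool --lb-algorithm ROUND_ROBIN --listener web-listener --protocol HTTP
openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 web-pool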

Figure 2. Octavia HTTPS traffic flow through to a pool member

FOUR: Integrated networking for OpenStack and OpenShift

OpenShift Container Platform, Red Hat’s enterprise distribution of Kubernetes optimized for continuous application development, is infrastructure independent. You can run it on public cloud, virtualization, OpenStack or anything that can boot Red Hat Enterprise Linux. But in order to run Kubernetes and application containers, you need control and flexibility at scale on the infrastructure level. Many of our customers are looking into OpenStack as a platform to expose VM and bare metal resources for OpenShift to provide Kubernetes clusters to different parts of the organization – nicely aligning with the strong multi-tenancy and isolation capabilities of OpenStack as well as its rich APIs.     

As a key contributor to both OpenStack and Kubernetes, Red Hat is shaping this powerful combination so that enterprises can not only deploy OpenShift on top of OpenStack, but also take advantage of the underlying infrastructure services exposed by OpenStack. A good example of this is through networking integration. Out of the box, OpenStack provides overlay networks managed by Neutron. However, OpenShift, based on Kubernetes and the Container Network Interface (CNI) project, also provides overlay networking between container pods. This results in two unrelated network virtualization stacks running on top of each other, which is not optimal for the operational experience or the overall performance of the solution. With Red Hat OpenStack Platform 13, Neutron was enhanced so that it can serve as the networking layer for both OpenStack and OpenShift, allowing a single network solution to serve both container and non-container workloads. This is done through project Kuryr and kuryr-kubernetes, a CNI plugin that provides OpenStack networking to Kubernetes objects.

Customers will be able to take advantage of Kuryr with an upcoming Red Hat OpenShift Container Platform release, where we will also release openshift-ansible support for automated deployment of Kuryr components (kuryr-controller, kuryr-cni) on OpenShift Master and Worker nodes.   

Figure 3. OpenShift and OpenStack

FIVE: Deployment on top of routed networks

As data center network architectures evolve, we are seeing a shift away from L2-based network designs towards fully L3 routed fabrics in an effort to create more efficient, predictable, and scalable communication between end-points in the network. One such trend is the adoption of leaf/spine (Clos) network topology where the fabric is composed of leaf and spine network switches: the leaf layer consists of access switches that connect to devices like servers, and the spine layer is the backbone of the network. In this architecture, every leaf switch is interconnected with each and every spine switch using routed links. Dynamic routing is typically enabled throughout the fabric and allows the best path to be determined and adjusted automatically. Modern routing protocol implementations also offer Equal-Cost Multipathing (ECMP) for load sharing of traffic between all available links simultaneously.

Originally, Red Hat OpenStack Platform director was designed to use shared L2 networks between nodes. This significantly reduces the complexity required to deploy OpenStack, since DHCP and PXE booting are simply done over a shared broadcast domain. This also makes the network switch configuration straightforward, since typically there is only a need to configure VLANs and ports, but no need to enable routing between all switches. This design, however, is not compatible with L3 routed network solutions such as the leaf/spine network architecture described above.

With Red Hat OpenStack Platform 13, director can now deploy OpenStack on top of fully routed topologies, utilizing its composable network and roles architecture, as well as a DHCP relay to support provisioning across multiple subnets. This provides customers with the flexibility to deploy on top of L2 or L3 routed networks from a single tool.
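To give a feel for what this looks like, routed provisioning is described to director as a set of subnets in undercloud.conf, one per leaf. The excerpt below is a sketch only; all addresses are illustrative, and the exact parameter names should be verified against the Red Hat OpenStack Platform 13 spine/leaf networking documentation:

# undercloud.conf excerpt: one routed provisioning subnet per leaf
[DEFAULT]
subnets = leaf0,leaf1
local_subnet = leaf0

[leaf0]
cidr = 192.168.10.0/24
dhcp_start = 192.168.10.10
dhcp_end = 192.168.10.90
inspection_iprange = 192.168.10.100,192.168.10.190
gateway = 192.168.10.1

[leaf1]
cidr = 192.168.11.0/24
dhcp_start = 192.168.11.10
dhcp_end = 192.168.11.90
inspection_iprange = 192.168.11.100,192.168.11.190
gateway = 192.168.11.1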


Learn more

Learn more about Red Hat OpenStack Platform:


For more information on Red Hat OpenStack Platform and Red Hat Virtualization contact your local Red Hat office today!

Virtualize your OpenStack control plane with Red Hat Virtualization and Red Hat OpenStack Platform 13

With the release of Red Hat OpenStack Platform 13 (Queens) we’ve added support to Red Hat OpenStack Platform director to deploy the overcloud controllers as virtual machines in a Red Hat Virtualization cluster. This allows you to have your controllers, along with other supporting services such as Red Hat Satellite, Red Hat CloudForms, Red Hat Ansible Tower, DNS servers, monitoring servers, and of course, the undercloud node (which hosts director), all within a Red Hat Virtualization cluster. This can reduce the physical server footprint of your architecture and provide an extra layer of availability.

Please note: this is not using Red Hat Virtualization as an OpenStack hypervisor (i.e. the compute service, which is already nicely done with nova via libvirt and KVM) nor is this about hosting the OpenStack control plane on OpenStack compute nodes.

Video courtesy: Rhys Oxenham, Manager, Field & Customer Engagement

Benefits of virtualization

Red Hat Virtualization (RHV) is an open, software-defined platform built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) featuring advanced management tools.  RHV gives you a stable foundation for your virtualized OpenStack control plane.

By virtualizing the control plane you gain instant benefits, such as:

  • Dynamic resource allocation to the virtualized controllers: scale up and scale down as required, including CPU and memory hot-add and hot-remove to prevent downtime and allow for increased capacity as the platform grows.
  • Native high availability for Red Hat OpenStack Platform director and the control plane nodes.
  • Additional infrastructure services can be deployed as VMs on the same RHV cluster, minimizing the server footprint in the datacenter and making an efficient use of the physical nodes.
  • Ability to define more complex OpenStack control planes based on composable roles. This capability allows operators to allocate resources to specific components of the control plane, for example, an operator may decide to split out networking services (Neutron) and allocate more resources to them as required. 
  • Maintenance without service interruption: RHV supports VM live migration, which can be used to relocate the OSP control plane VMs to a different hypervisor during their maintenance.
  • Integration with third party and/or custom tools engineered to work specifically with RHV, such as backup solutions.

Benefits of subscription

There are many ways to purchase Red Hat Virtualization, but many Red Hat OpenStack Platform customers already have it since it’s included in our most popular OpenStack subscription bundles, Red Hat Cloud Infrastructure and Red Hat Cloud Suite. If you have purchased OpenStack through either of these, you already own RHV subscriptions!

Logical Architecture

This is how the architecture looks when the overcloud is split between Red Hat Virtualization, which hosts the control plane, and bare metal compute nodes that run the tenants’ workloads.


Installation workflow

A typical installation workflow looks like this:


Preparation of the Cluster/Host networks

In order to use multiple networks (referred to as “network isolation” in OpenStack deployments), each VLAN (Tenant, Internal, Storage, …) will be mapped to a separate logical network and allocated to the hosts’ physical NICs. Full details are in the official documentation.

Preparation of the VMs

The Red Hat OpenStack Platform control plane usually consists of one director node and (at least) three controller nodes. When these VMs are created in RHV, the same requirements we have for these nodes on bare metal apply.

The director VM should have a minimum of 8 cores (or vCPUs), 16 GB of RAM and 100 GB of storage. More information can be found in the official documentation.

The controllers should have at least 32 GB of RAM and 16 vCPUs. While the same amount of resources is required for virtualized controllers as for bare metal ones, using RHV gives us the ability to better optimize that resource consumption across the underlying hypervisors.

Red Hat Virtualization Considerations

Red Hat Virtualization needs to be configured with some specific settings to host the VMs for the controllers:

Anti-affinity for the controller VMs

We want to ensure there is only one OpenStack controller per hypervisor so that, in case of a hypervisor failure, the service-level disruption is limited to a single controller. This allows HA to be handled by the high availability mechanisms already built into the system. For this to work we use RHV to configure an affinity group with “soft negative affinity,” effectively giving us “anti-affinity!” Additionally, it provides the flexibility to override this rule in case of system constraints.

VM network configuration

One vNIC per VLAN

In order to use multiple networks (referred to as “network isolation” in OpenStack deployments), each VLAN (Tenant, Internal, Storage, …) will be mapped to a separate virtual NIC (vNIC) in the controller VMs and VLAN “untagging” will be done at the hypervisor (cluster) and VM level.

Full details can be found in the official documentation.


Allow MAC Spoofing

For the virtualized controllers to allow network traffic in and out correctly, the MAC spoofing filter must be disabled on the networks that are attached to the controller VMs. To do this we set the network filter to no_filter on the vNICs of the director and controller VMs, and then restart the VMs.

Important Note: If this is not done, DHCP and PXE booting of the VMs from director won’t work.

Implementation in director

Red Hat OpenStack Platform director (TripleO’s downstream release) uses the Ironic Bare Metal provisioning component of OpenStack to deploy the OpenStack components on physical nodes. In order to add support for deploying the controllers on Red Hat Virtualization VMs, we enabled support in Ironic with a new driver named staging-ovirt.

This new driver manages the VMs hosted in RHV similar to how other drivers manage physical nodes using BMCs supported by Ironic, such as iRMC, iDrac or iLO. For RHV this is done by interacting with the RHV manager directly to trigger power management actions on the VMs.

Enabling the staging-ovirt driver in director

Director needs to enable support for the new driver in Ironic. This is done as you would do it for any other Ironic driver by simply specifying it in the undercloud.conf configuration file:

enabled_hardware_types = ipmi,redfish,ilo,idrac,staging-ovirt

After adding the new entry and running openstack undercloud install we can see the staging-ovirt driver listed in the output:

(undercloud) [stack@undercloud-0 ~]$ openstack baremetal driver list
+---------------------+-----------------------+
| Supported driver(s) | Active host(s)        |
+---------------------+-----------------------+
| idrac               | localhost.localdomain |
| ilo                 | localhost.localdomain |
| ipmi                | localhost.localdomain |
| pxe_drac            | localhost.localdomain |
| pxe_ilo             | localhost.localdomain |
| pxe_ipmitool        | localhost.localdomain |
| redfish             | localhost.localdomain |
| staging-ovirt       | localhost.localdomain |
+---------------------+-----------------------+

Register the RHV-hosted VMs with director

When defining a RHV-hosted node in director’s instackenv.json file we simply set the power management type (pm_type) to the “staging-ovirt” driver, provide the relevant RHV manager host name, and include the username and password for the RHV account that can control power functions for the VMs.

{
    "nodes": [
        {
            "name":"osp13-controller-1",
            "pm_type":"staging-ovirt",
            "mac":[
                "00:1a:4a:16:01:39"
            ],
            "cpu":"2",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin@internal",
            "pm_password":"secretpassword",
            "pm_addr":"rhvm.lab.redhat.com",
            "pm_vm_name":"osp13-controller-1",
            "capabilities": "profile:control,boot_option:local"
        },
        {
            "name":"osp13-controller-2",
            "pm_type":"staging-ovirt",
            "mac":[
                "00:1a:4a:16:01:3a"
            ],
            "cpu":"2",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin@internal",
            "pm_password":"secretpassword",
            "pm_addr":"rhvm.lab.redhat.com",
            "pm_vm_name":"osp13-controller-2",
            "capabilities": "profile:control,boot_option:local"
        },
        {
            "name":"osp13-controller-3",
            "pm_type":"staging-ovirt",
            "mac":[
                "00:1a:4a:16:01:3b"
            ],
            "cpu":"2",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin@internal",
            "pm_password":"secretpassword",
            "pm_addr":"rhvm.lab.redhat.com",
            "pm_vm_name":"osp13-controller-3",
            "capabilities": "profile:control,boot_option:local"
        }
    ]
}

A summary of the relevant parameters required for RHV is as follows:

  • pm_user: RHV-M username.
  • pm_password: RHV-M password.
  • pm_addr: hostname or IP of the RHV-M server.
  • pm_vm_name: Name of the virtual machine in RHV-M that will host the controller.
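
With instackenv.json in place, registering and introspecting the RHV-hosted VMs follows the same director workflow used for physical nodes, for example:

# Import the node definitions, introspect them, and mark them available for deployment
openstack overcloud node import ~/instackenv.json
openstack overcloud node introspect --all-manageable --provide
openstack baremetal node list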

For more information on Red Hat OpenStack Platform and Red Hat Virtualization contact your local Red Hat office today!

Red Hat OpenStack Platform: Making innovation accessible for production

An OpenStack®️-based cloud environment can help you digitally transform to succeed in fast-paced, competitive markets. However, for many organizations, deploying open source software supported only by the community can be intimidating. Red Hat®️ OpenStack Platform combines community-powered innovation with enterprise-grade features and support to help your organization build a production-ready private cloud.

Through an open source development model, community leadership, and production-grade life-cycle options, Red Hat makes open source software more accessible for production use across industries and organizations of any size and type.

Photo by Omar Albeik on Unsplash

Open source development model

In order for open source technologies to be effective in production, they must provide stability and performance while also delivering the latest features and advances. Our open source development model combines fast-paced, cross-industry community innovation with production-grade hardening, integrations, support, and services. We take an upstream-first approach by contributing all developments back to the upstream community. This makes new features immediately available and helps to drive the interoperability of Red Hat products with upstream releases. Based on community OpenStack releases, Red Hat OpenStack Platform is intensively tested and hardened to meet the rigors of production environments. Ongoing patching, bug fixes, and certification keep your environment up and running.

Community leadership

We know that open source technologies can be of the highest quality and work with communities to deliver robust code. Red Hat is the top code contributor to the OpenStack community. We are responsible for 28% of the code in the Queens release and 18% of the code across all releases. We collaborate with our customers, partners, and industry organizations to identify the features they need to be successful. We then work to add that functionality into OpenStack. Over time, these efforts have resulted in enhancements in OpenStack’s availability, manageability, and performance, as well as industry-specific additions like OpenDaylight support for telecommunications.

Production-grade life-cycle options

The OpenStack community delivers new releases every six months, which can be challenging for many organizations looking to deploy OpenStack-based production environments. We provide stable branch releases of OpenStack that are supported for an enterprise production life cycle—beyond the six-month release cycle of the OpenStack community. With Red Hat OpenStack Platform, we give you two life-cycle options that let you choose when to upgrade and add new features to your cloud environment.

  • Standard release cadence. Upgrade every six to twelve months between standard releases to stay aligned with the latest features as they become available. Standard releases include one year of support.
  • Long-life release cadence. Standardize on long-life releases for up to five years. Long-life releases include three years of support, with the option to extend support for an additional two years with extended life-cycle support (ELS), for up to five years of support total. All new features are included with each long-life release.

Red Hat OpenStack Platform director—an integrated deployment and life-cycle management tool—streamlines upgrades between standard releases. And, the new fast forward upgrade feature in director lets you easily transition between long-life releases, without the need to upgrade to each in-between release. So, if you are currently using Red Hat OpenStack Platform 10, you now have an easy upgrade path to Red Hat OpenStack Platform 13—with fewer interruptions, no need for additional hardware, and simpler implementation of containerized OpenStack services.


Learn more

Red Hat OpenStack Platform can help you overcome the challenges of deploying OpenStack into production use. And, if you aren’t sure about how to build your cloud environment, don’t have the time or resources to do so, or just want some help on your cloud journey, we provide a variety of expert services and training.

Learn more about Red Hat OpenStack Platform:

Red Hat OpenStack Platform: Two life-cycle choices to fit your organization

OpenStack®️ is a powerful platform for building private cloud environments that support modern, digital business operations. However, the OpenStack community’s six-month release cadence can pose challenges for enterprise organizations that want to deploy OpenStack in production. Red Hat can help.

Photo by elizabeth lies on Unsplash

Red Hat®️ OpenStack Platform is an intensively tested, hardened, and supported distribution of OpenStack based on community releases. In addition to production-grade features and functionality, it gives you two life-cycle choices to align with the way your organization operates:

  • Standard releases. These releases follow the six-month community release cadence and include one year of support.
  • Long-life releases. Starting with Red Hat OpenStack Platform 10, every third release is a long-life release. These include three years of support, with the option to extend support for an additional two years with extended life-cycle support (ELS), for up to five years of support total.

Why does this matter? Different organizations have different needs when it comes to infrastructure life cycles and management. Some need to implement the latest innovations as soon as they are available, and have the processes in place to continuously upgrade and adapt their IT environment. For others, the ability to standardize and stabilize operations for long durations of time is paramount. These organizations may not need the newest features right away—periodic updates are fine.

Photo by Tristan Colangelo on Unsplash

Red Hat OpenStack Platform life-cycle options accommodate both of these approaches. Organizations that need constant innovation can upgrade to the latest Red Hat OpenStack Platform release every six months to take advantage of new features as they become available. Organizations that prefer to use a given release for a longer time can skip standard releases and simply upgrade between long-life releases every 18 to 60 months.

Here’s a deeper look into each option and why you might choose one over the other.

Standard upgrade path

With this approach, you upgrade every six to twelve months as a new release of Red Hat OpenStack Platform is made available. Red Hat OpenStack Platform director provides upgrade tooling to simplify the upgrade process. As a result, you can adopt the latest features and innovations as soon as possible. This keeps your cloud infrastructure aligned closely with the upstream community releases, so if you’re active in the OpenStack community, you’ll be able to take advantage of your contributions sooner.

This upgrade path typically requires organizations to have processes in place to efficiently manage continuously changing infrastructure. If you have mature, programmatic build and test processes, you’re in good shape.

The standard upgrade path is ideal for organizations involved in science and research, financial services, and other fields that innovate fast and change quickly.

Photo by Jordan Ladikos on Unsplash 

 

Long-life upgrade path

With this approach, you upgrade every 18 to 60 months between long-life releases of Red Hat OpenStack Platform, skipping two standard releases at a time. Starting with Red Hat OpenStack Platform 13, the fast forward upgrade feature in director simplifies the upgrade process by fully containerizing Red Hat OpenStack Platform deployment. This minimizes interruptions due to upgrading and eliminates the need for additional hardware to support the upgrade process. As a result, you can use a long-life release, like Red Hat OpenStack Platform 10 or 13, for an extended time to stabilize operations. Based on customer requests and feasibility reviews, select features in later standard releases may be backported to the last long-life release (Full Support phase only), so you can still gain access to some new features between upgrades.

The long-life upgrade path works well for organizations that are more familiar and comfortable with traditional virtualization and may still be adopting a programmatic approach to IT operations.

This path is ideal for organizations that prefer to standardize on infrastructure and don’t necessarily need access to the latest features right away. Organizations involved in telecommunications and other regulated fields often choose the long-life upgrade path.

Wrapping up

With two life-cycle options for Red Hat OpenStack Platform, Red Hat supports you no matter where you are in your cloud journey. If you have questions about which path is best for your organization, contact us and we’ll help you get started.

Learn more about Red Hat OpenStack Platform:

Red Hat OpenStack Platform 13 is here!

Accelerate. Innovate. Empower.

In the digital economy, IT organizations can be expected to deliver services anytime, anywhere, and to any device. IT speed, agility, and innovation can be critical to help stay ahead of your competition. Red Hat OpenStack Platform lets you build an on-premise cloud environment designed to accelerate your business, innovate faster, and empower your IT teams.


Accelerate. Red Hat OpenStack Platform can help you accelerate IT activities and speed time to market for new products and services. Red Hat OpenStack Platform helps simplify application and service delivery using an automated self-service IT operating model, so you can provide users with more rapid access to resources. Using Red Hat OpenStack Platform, you can build an on-premises cloud architecture that can provide resource elasticity, scalability, and increased efficiency to launch new offerings faster.

Innovate. Red Hat OpenStack Platform enables you to differentiate your business by helping to make new technologies more accessible without sacrificing current assets and operations. Red Hat’s open source development model combines faster-paced, cross-industry community innovation with production-grade hardening, integrations, support, and services. Red Hat OpenStack Platform is designed to provide an open and flexible cloud infrastructure ready for modern, containerized application operations while still supporting the traditional workloads your business relies on.

Empower. Red Hat OpenStack Platform helps your IT organization deliver new services with greater ease. Integrations with Red Hat’s open software stack let you build a more flexible and extensible foundation for modernization and digital operations. A large partner ecosystem helps you customize your environment with third-party products, with greater confidence that they will be interoperable and stable.

With Red Hat OpenStack Platform 13, Red Hat continues to bring together community-powered innovation with the stability, support, and services needed for production deployment. Red Hat OpenStack Platform 13 is a long-life release with up to three years of standard support and an additional, optional two years of extended life-cycle support (ELS). This release includes many features to help you adopt cloud technologies more easily and support digital transformation initiatives.

Fast forward upgrades

With both standard and long-life releases, Red Hat OpenStack Platform lets you choose when to implement new features in your cloud environment:

  • Upgrade every six months and benefit from one year of support on each release.
  • Upgrade every 18 months with long-life releases and benefit from three years of support on that release, with optional ELS extending it to up to five years total. Long-life releases include innovations from all previous releases.

Now, with the fast forward upgrade feature, you can skip between long-life releases on an 18-month upgrade cadence. Fast forward upgrades fully containerize Red Hat OpenStack Platform deployment to simplify the process of upgrading between long-life releases. This means that customers who are currently using Red Hat OpenStack Platform 10 have an easier upgrade path to Red Hat OpenStack Platform 13—with fewer interruptions and no need for additional hardware.

Red Hat OpenStack Platform life cycle by version

Containerized OpenStack services

Red Hat OpenStack Platform now supports containerization of all OpenStack services. This means that OpenStack services can be independently managed, scaled, and maintained throughout their life cycle, giving you more control and flexibility. As a result, you can simplify service deployment and upgrades and allocate resources more quickly, efficiently, and at scale.
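
On a Red Hat OpenStack Platform 13 node you can see this directly with the container runtime. A small illustrative check (OSP 13 services run under Docker; container names vary by role and service):

# List the containerized OpenStack services running on this node
sudo docker ps --format "table {{.Names}}\t{{.Status}}"

# Restart a single service container without touching the rest of the node (name is illustrative)
sudo docker restart keystone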

Red Hat stack integrations

The combination of Red Hat OpenStack Platform with Red Hat OpenShift provides a modern, container-based application development and deployment platform with a scalable hybrid cloud foundation. Kubernetes-based orchestration simplifies application portability across scalable hybrid environments, designed to provide a consistent, more seamless experience for developers, operations, and users.

Red Hat OpenStack Platform 13 delivers several new integrations with Red Hat OpenShift Container Platform:

  • Integration of openshift-ansible into Red Hat OpenStack Platform director eases troubleshooting and deployment.
  • Network integration using the Kuryr OpenStack project unifies network services between the two platforms, designed to eliminate the need for multiple network overlays and reduce performance and interoperability issues.  
  • Load Balancing-as-a-Service with Octavia provides highly available cloud-scale load balancing for traditional or containerized workloads.

Additionally, support for the Open Virtual Network (OVN) networking stack supplies consistency between Red Hat OpenStack Platform, Red Hat OpenShift, and Red Hat Virtualization.

Security features and compliance focus

Security and compliance are top concerns for organizations deploying clouds. Red Hat OpenStack Platform includes integrated security features to help protect your cloud environment. It encrypts control flows and, optionally, data stores and flows, enhancing the privacy and integrity of your data both at rest and in motion.

Red Hat OpenStack Platform 13 introduces several new, hardened security services designed to help further safeguard enterprise workloads:

  • Programmatic, API-driven secrets management through Barbican
  • Encrypted communications between OpenStack services using Transport Layer Security (TLS) and Secure Sockets Layer (SSL)
  • Cinder volume encryption and Glance image signing and verification

Additionally, Red Hat OpenStack Platform 13 can help your organization meet relevant technical and operational controls found in risk management frameworks globally. Red Hat can help support compliance guidance provided by government standards organizations, including:

  • The Federal Risk and Authorization Management Program (FedRAMP) is a U.S. government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.
  • Agence nationale de la sécurité des systèmes d’information (ANSSI) is the French national authority for cyber-defense and network and information security (NIS).

An updated security guide is also available to help you when deploying a cloud environment.

Storage and hyperconverged infrastructure options

Red Hat Ceph Storage provides unified, highly scalable, software-defined block, object, and file storage for Red Hat OpenStack Platform deployments and services. Integration between the two enables you to deploy, scale, and manage your storage back end just like your cloud infrastructure. New storage integrations included in Red Hat OpenStack Platform 13 give you more choice and flexibility. With support for the OpenStack Manila project, you can provide CephFS file shares over NFS as a service to better support applications that use file storage. As a result, you can choose the type of storage for each workload, from a unified storage platform.
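
From a tenant’s point of view, consuming a CephFS-backed NFS share through Manila looks roughly like the following; the names and client network are illustrative, and the backend must already be configured by director:

# Create a 10 GB NFS share, allow a client network, then look up the export path to mount
manila create NFS 10 --name app-share
manila access-allow app-share ip 192.0.2.0/24
manila show app-share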

Red Hat Hyperconverged Infrastructure for Cloud combines Red Hat OpenStack Platform and Red Hat Ceph Storage into a single offering with a common life cycle and support. Both Red Hat OpenStack Platform compute and Red Hat Ceph Storage functions are run on the same host, enabling consolidation and efficiency gains. NFV use cases for Red Hat Hyperconverged Infrastructure for Cloud include:

  • Core datacenters
  • Central office datacenters
  • Edge and remote point of presence (POP) environments
  • Virtual radio access networks (vRAN)
  • Content delivery networks (CDN)

You can also add hyperconverged capabilities to your current Red Hat OpenStack Platform subscriptions using an add-on SKU.

Red Hat Hyperconverged Infrastructure for Cloud use cases

Telecommunications optimizations

Red Hat OpenStack Platform 13 delivers new telecommunications-specific features that allow communications service providers (CSPs) to build innovative, cloud-based network infrastructure more easily:

  • OpenDaylight integration lets you connect your OpenStack environment with the OpenDaylight software-defined networking (SDN) controller, giving it greater visibility into and control over OpenStack networking, utilization, and policies.
  • Real-time Kernel-based Virtual Machine (KVM) support designed to deliver ultra-low latency for performance-sensitive environments.
  • Open vSwitch (OVS) offload support (tech preview) lets you implement single root input/output virtualization (SR-IOV) to help reduce the performance impact of virtualization and deliver better performance for high IOPS applications.
Red Hat OpenStack Platform and OpenDaylight cooperation

Learn more

Red Hat OpenStack Platform combines community-powered innovation with enterprise-grade features and support to help your organization build a production-ready private cloud. With it, you can accelerate application and service delivery, innovate faster to differentiate your business, and empower your IT teams to support digital initiatives.

Learn more about Red Hat OpenStack Platform:

Red Hat Certified Cloud Architect – An OpenStack Perspective – Part Two

Previously we learned about what the Red Hat Certified Architect certification is and what exams are included in the “OpenStack-focused” version of the certification. This week we want to focus on personal experience and benefits from achieving this milestone.

Let’s be honest, even for the most skilled engineers the path to becoming an RHCA can be quite challenging and even a little bit intimidating!  Not only do the exams test your ability to perform specific tasks based on the certification requirements, but they also test your ability to repurpose that knowledge and combine it with the knowledge of other technologies while solving extremely complex scenarios.  This can make achieving the RHCA even more difficult; however, it also makes achieving the RHCA extremely validating and rewarding.

Photo by Samuel Clara on Unsplash

Many busy professionals decide to prepare for the exams with Red Hat Online Learning (ROLE), which allows students to access the same robust course content and hands-on lab experience delivered in classroom training from the comfort of their own computer and at their own pace. This is made even easier through the Red Hat Learning Subscription (RHLS).

RHLS provides access to the entire Red Hat courseware catalog, including video classrooms, for a single, convenient price per year. This kind of access can help you prepare for all the certifications. We found that before sitting an exam, it was important to be able to perform 100 percent of the respective ROLE lab without referring back to any documentation for help; with RHLS this is much easier to do!  

While documentation and man pages are available during an exam, they should be used as a resource and not a replacement for deep knowledge. Indeed, it’s much better to make sure you know it by heart without needing to look! We also found that applying the comprehensive reviews found at the end of each ROLE course to real world scenarios helped us better understand how what we learned in the course applied to what we would do on a day-to-day basis.  

For example, when taking the Ansible ROLE course DO407, which uses a comprehensive virtual environment and a video classroom version, we were easily able to spawn instances in our own physical OpenStack environment and apply what we had learned in the course to the real world.  By putting the courseware into action in the real world it better allowed us to align the objectives of the course to real-life scenarios, making the knowledge more practical and easier to retain.

What about formal training?

Photo by Nathan Dumlao on Unsplash

We wouldn’t recommend for anyone to just show up at the examination room without taking any formal training. Even if you feel that your level of proficiency in any of these technologies is advanced, keep in mind that Red Hat exams go very deep, covering large portions of the technology. For example, you might be an ‘Ansible Ninja’ writing playbooks for a living. But how often do you work with dynamic inventories or take advantage of delegation, vaults or parallelism? The same applies for any other technology you want to test yourself in, there is a good chance it will cover aspects you are not familiar with.

The value comes from having the combination of skills.  Take the example of an auto mechanic who is great at rebuilding a transmission, but may not know how to operate a manual transmission!  You can’t be an expert at one without knowing a lot about the other.

For us, this is where Red Hat training has been invaluable. With every exam there is a corresponding class provided. These classes not only cover each aspect of the technology (and beyond) that you will be tested on, but also provide self-paced lab modules and access to lab environments. They are usually offered with either a live instructor or via an online option so you can juggle the education activities with your ‘day job’ requirements!

More information about the classes for these exams can be found on the Red Hat Training site. 

How long does it take?

It doesn’t have to take long at all. If you already have an RHCE in Red Hat Enterprise Linux and OpenStack is not a new subject to you, the training will serve as an excellent reminder rather than something that you have to learn from scratch. Some people may even be able to complete all 5 exams in less than a month.

But does everyone want to go that fast? Probably not.

Photo by Estée Janssens on Unsplash

When our customers ask us about what we recommend to achieve these certifications in a realistic timeframe we suggest the Red Hat Learning Subscription to them. As mentioned, it gives you amazing access to Red Hat courseware.

But it is more than that.

The Red Hat Learning Subscription is a program for individuals and organizations that not only provides the educational content to prepare you for the exams (including videos and lab access), but also, in some cases, may include actual exams (and some retakes) at many Red Hat certified facilities. It is valid for one year, which is plenty of time to work through all the courses and exams.

This kind of flexibility can help to shape an individual learning path.

For instance, imagine doing it like this:

With the Red Hat Learning Subscription you could schedule all the exams in advance at two-month intervals. These exams then become your milestones and give you a predictable path for studying. You can always reschedule them if something urgent comes up. Sign up for the corresponding classes, but don’t take them too far ahead of your exam. Then retake all the self-paced labs a week before your exam, without reading the guided instructions. After that you should be in a position to assess your readiness for the exams and reach the ultimate goal of an RHCA.

Don’t get discouraged if you don’t pass on the first try; it’s not unusual even for subject experts to fail at first! Simply close the knowledge gaps and retake the exam. And with RHLS, you’ve got the access and time to do so!

The benefits of becoming RHCA can be substantial. Outside of gaining open source “street cred”, the most important aspect is, of course, for your career – it’s simple: you can get better at your job.

Photo by Clark Tibbs on Unsplash

And of course, being better at your job can translate to being more competitive in the job market, which can lead to being more efficient in your current role and potentially even bring additional financial compensation!

But becoming an RHCA is so much more. It helps to broaden your horizons. You can learn more ways to tackle real life business problems, including how to become more capable of taking leadership roles through translating problems into technology solutions.

As a proud Red Hat Certified Architect you will have the tools to help make the IT world a better place!

So what are you waiting for … go get it!


Ready to start your certification journey? Get in touch with the friendly Red Hatters at Red Hat Training in your local area today to find all the ways you can master the skills you need to accelerate your career and run your enterprise cloud!


About the authors:

Chris Janiszewski is a Red Hat OpenStack Solutions Architect. He is proud to help his clients validate their business and technical use cases on OpenStack and supporting components like storage, networking, or cloud automation and management. He is the father of two little kids and enjoys the majority of his free time playing with them. When the kids are asleep he gets to put the “geek hat” on and build OpenStack labs to hack crazy use cases!


Ken Holden is a Senior Solution Architect with Red Hat. He has spent the past 3 years on the OpenStack Tiger Team with the primary responsibility of deploying Red Hat OpenStack Platform Proof-Of-Concept IaaS Clouds for Strategic Enterprise Customers across North America. Throughout his 20 year career in Enterprise IT, Ken has focused on Linux, Unix, Storage, Networking, and Security, with the past 5 years primarily focused on Cloud Solutions. Ken has achieved Red Hat Certified Architect status (RHCA 110-009-776) and holds Certified OpenStack Administrator status (COA-1700-0387-0100) with the OpenStack Foundation. Outside of work, Ken spends the majority of his time with his wife and two daughters, but also aspires to be the world’s most OK Guitar Player when time permits!

Red Hat OpenStack Platform fast forward upgrades: A step-by-step overview

New in Red Hat®️ OpenStack®️ Platform 13, the fast forward upgrade feature lets you easily move between long-life releases, without the need to upgrade to each in-between release. Fast forward upgrades fully containerize Red Hat OpenStack Platform deployment to simplify and speed the upgrade process while reducing interruptions and eliminating the need for additional hardware. Today, we’ll take a look at what the fast forward upgrade process from Red Hat OpenStack Platform 10 to Red Hat OpenStack Platform 13 looks like in practice.


There are six main steps in the process:

  1. Cloud backup. Back up your existing cloud.
  2. Minor update. Update to the latest minor release.
  3. Undercloud upgrade. Upgrade your undercloud.
  4. Overcloud preparation. Prepare your overcloud.
  5. Overcloud upgrade. Upgrade your overcloud.
  6. Convergence. Converge your environment.

Step 1: Back up your existing cloud

First, you need to back up everything in your existing Red Hat OpenStack Platform 10 cloud, including your undercloud, overcloud, and any supporting services. It’s likely that you already have these procedures in place, but Red Hat also provides comprehensive Ansible playbooks to simplify the fast forward process even more.

Manual backup procedures are likewise supported by Red Hat’s Customer Experience and Engagement (CEE) group.

A typical OpenStack backup process may involve the following steps:

  1. Notify your users.
  2. Purge your databases, including any unnecessary data stored by Heat or other OpenStack services. This will help to streamline the backup and upgrade process.
  3. Run undercloud and overcloud backups. This will preserve an initial backup of the cloud – it may take some time if you don’t have an earlier backup to reference at this point.

By performing a backup before starting the upgrade, you can speed the overall upgrade process by only requiring smaller backups later on.

Step 2: Update to the latest minor release

Photo by Lucas Davies on Unsplash

Next, update your Red Hat OpenStack Platform environment to the latest minor release using the standard minor update processes. This step consolidates all undercloud and overcloud node reboots required for moving to Red Hat OpenStack Platform 13. This simplifies the overall upgrade, as no reboots are needed in later steps. For example, an upgrade from Red Hat OpenStack Platform 10 to the latest, fast forward-ready minor release will update Open vSwitch (OVS) to version 2.9, Red Hat Enterprise Linux to version 7.5, and Red Hat Ceph®️ Storage to version 2.5 in your overcloud. These steps do require node reboots, so you can live-migrate workloads prior to rebooting nodes to avoid downtime.
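
For orientation, on Red Hat OpenStack Platform 10 the minor update itself is driven from director and looks roughly like the following; this is a sketch only, and the full, supported procedure (including repository setup and reboots) is in the OSP 10 documentation:

# On the undercloud: update director packages to the latest OSP 10 minor release
sudo yum update -y python-tripleoclient
openstack undercloud upgrade

# Then roll the minor update across the overcloud nodes
openstack overcloud update stack -i overcloud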

Step 3: Upgrade your undercloud

In this step, you’ll upgrade Red Hat OpenStack Platform director, known as the undercloud, to the new long-life release. This requires manual rolling updates from Red Hat OpenStack Platform 10 to 11 to 12 to 13, but does not require any reboots, as they were completed in the previous minor update. The same action pattern is repeated for each release: enable the new repository, stop main OpenStack Platform services, upgrade director’s main packages, and upgrade the undercloud. Note that Red Hat OpenStack Platform director will not be able to manage the version 10 overcloud during or after these upgrades.
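
The per-release pattern looks roughly like this, shown for the 10 to 11 hop; the same commands are repeated for 11 to 12 and 12 to 13 (repository IDs are the standard RHOSP repos):

# Switch repositories to the next release
sudo subscription-manager repos --disable=rhel-7-server-openstack-10-rpms
sudo subscription-manager repos --enable=rhel-7-server-openstack-11-rpms

# Stop the main undercloud services, update the client, and upgrade the undercloud
sudo systemctl stop 'openstack-*' 'neutron-*' httpd
sudo yum update -y python-tripleoclient
openstack undercloud upgrade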

Step 4: Prepare your overcloud

Red Hat OpenStack Platform 13 introduces containerized OpenStack services to the long-life release cadence. This step goes through the process of creating the container registry needed to support the deployment of these new services during the fast forward procedure.

Photo by Arnel Hasanovic on Unsplash

The first part of this step is to prepare the container images:

  1. Upload Red Hat OpenStack Platform 13 container images to your cloud environment. These can be stored on the director node or on additional hardware. If you choose to store them on your director node, ensure that the node has enough space available for the images. Note that during this part, your undercloud will be unable to scale your overcloud.
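
A sketch of how the image environment file is typically generated; the namespace, tags, and output path vary with your registry setup, so consult the OSP 13 documentation for the exact options:

# Generate an environment file that points the overcloud at the OSP 13 container images
openstack overcloud container image prepare \
  --namespace registry.access.redhat.com/rhosp13 \
  --prefix openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file ~/container-images.yaml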

Next, you’ll prepare your overcloud for features introduced in Red Hat OpenStack Platform 11 and 12, including composable networks and roles:

  1. Include new services in any custom roles_data files.
  2. Edit any custom roles_data files to add composable networks (new for Red Hat OpenStack Platform 13) to each role.
  3. Remove deprecated services from any custom roles_data files and update deprecated parameters in custom environment files.

If you have a Red Hat OpenStack Platform director-managed Red Hat Ceph Storage cluster or storage backends, you’ll also need to prepare your storage nodes for new, containerized configuration methods.

  1. Install the ceph-ansible package of playbooks in your undercloud and check that you are using the latest resources and configurations in your storage environment file.
  2. Update custom storage backend environment files to include new parameters and resources for composable services. This applies to NetApp, Dell EMC, and Dell EqualLogic block storage backends using cinder.

Finally, if your undercloud uses SSL/TLS for its Public API, you’ll need to allow your overcloud to access your undercloud’s OpenStack Object Storage (swift) Public API during the upgrade process.

  1. Add your undercloud’s certificate authority to each overcloud node using an Ansible playbook.
  2. Perform one last backup. This is the final opportunity for backups before starting the overcloud upgrade.

Step 5: Upgrade your overcloud

Photo by eberhard grossgasteiger on Unsplash

This step is the core of the fast forward upgrade procedure. Remember that director is unable to manage your overcloud until this step is completed. During this step you’ll upgrade all overcloud roles and services from version 10 to version 13 using a fully managed series of commands. Let’s take a look at the process for each role.

Controller nodes

First, you’ll upgrade your control plane. This is performed on a single controller node, but does require your entire control plane to be down. Even so, it does not affect currently running workloads. Upgrade the chosen controller node sequentially through Red Hat OpenStack Platform releases to version 13. Once the database on the upgraded controller has been updated, containerized Red Hat OpenStack Platform 13 services can be deployed to all other controllers.
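
The command sequence for this phase looks roughly like the following; the environment file list is illustrative and must match the set used to deploy your cloud:

# Prepare the overcloud plan for the fast forward upgrade (pass all of your usual -e files)
openstack overcloud ffwd-upgrade prepare --templates -e ~/container-images.yaml -e ~/my-site-overrides.yaml

# Step the control plane through the 10 -> 11 -> 12 upgrade tasks
openstack overcloud ffwd-upgrade run

# Upgrade the Controller role to the containerized OSP 13 services
openstack overcloud upgrade run --roles Controller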

Compute nodes

Next, you’ll upgrade your compute nodes. As with your controller nodes, only OpenStack services are upgraded—not the underlying operating system. Node reboots are not required and workloads are unaffected by the process. The upgrade process is very fast, as it adds containerized services alongside RPM-based services and then simply switches over each service. During the process, however, compute users will not be able to create new instances. Some network services may also be affected.

To get familiar with the process and ensure compatibility with your environment, we recommend starting with a single, low-risk compute node.
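
In practice that can mean upgrading one node, verifying workloads, and then continuing with the rest of the role (node names are illustrative):

# Upgrade a single compute node first, then the remainder of the Compute role
openstack overcloud upgrade run --nodes overcloud-compute-0
openstack overcloud upgrade run --roles Compute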

Storage (Red Hat Ceph Storage) nodes

Finally, you’ll upgrade your Red Hat Ceph Storage nodes. While this upgrade is slightly different than the controller and compute node upgrades, it is not disruptive to services and your data plane remains available throughout the procedure. Director uses the ceph-ansible installer, which makes upgrading your storage nodes simpler. It uses a rolling upgrade process that first upgrades your bare-metal services to Ceph 3.0 and then containerizes the Ceph services.
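
The storage upgrade is driven by a single director command, again passed your full environment file list (illustrative here):

# Upgrade the director-managed Ceph Storage cluster via ceph-ansible
openstack overcloud ceph-upgrade run --templates -e ~/container-images.yaml -e ~/my-site-overrides.yaml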

Photo by Steve Johnson on Unsplash

Step 6: Converge your environment

At this point, you’re almost done with the fast forward process. The final step is to converge all components in your new Red Hat OpenStack Platform 13 environment. As mentioned previously, until all overcloud components are upgraded to the same version as your Red Hat OpenStack Platform director, you have only limited overcloud management capabilities. While your workloads are unaffected, you’ll definitely want to regain full control over your environment.

This step finishes the fast forward upgrade process. You’ll update your overcloud stack within your undercloud. This ensures that your undercloud has the current view of your overcloud and resets your overcloud for ongoing operation. Finally, you’ll be able to operate your Red Hat OpenStack Platform environment as normal: add nodes, upgrade components, scale services, and manage everything from director.
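
The convergence itself is one final director command, run with the same environment files as the earlier steps (illustrative list below):

# Re-align the overcloud stack in the undercloud with the upgraded OSP 13 environment
openstack overcloud ffwd-upgrade converge --templates -e ~/container-images.yaml -e ~/my-site-overrides.yaml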

Conclusion

Fast forward upgrades simplify the process of moving between long-life releases of Red Hat OpenStack Platform. However, upgrading from Red Hat OpenStack Platform 10 to containerized architecture of Red Hat OpenStack Platform 13 is still a significant change. As always, Red Hat is ready to help you succeed with detailed documentation, subscription support, and consulting services.

Watch the OpenStack Upgrades Strategy: The Fast Forward Upgrade video from OpenStack Summit Vancouver 2018 to learn more about the fast forward upgrade approach.

Learn more about Red Hat OpenStack Platform:

Red Hat Certified Cloud Architect – An OpenStack Perspective – Part One

The Red Hat Certified Architect (RHCA) is the highest certification provided by Red Hat. To many, it can be looked at as a “holy grail” of sorts in open source software certifications. It’s not easy to get. In order to receive it, you not only need to already be a Red Hat Certified Engineer (RHCE) for Red Hat Enterprise Linux (with the Red Hat Certified System Administrator (RHCSA) as a prerequisite) but also pass additional exams from various technology categories.

Photo by Vasily Koloda on Unsplash

There are roughly 20 exams to choose from that qualify towards the RHCA. Each exam is valid for 3 years, so as long as you complete 5 exams within a 3 year period, you will qualify for the RHCA. With that said, you must keep these exams up to date if you don’t want to lose your RHCA status.

An RHCA for OpenStack!

Ok, the subtitle might be misleading – there is no OpenStack specific RHCA certification! However you can select exams that will test your knowledge in technologies needed to successfully build and run OpenStack private clouds. We feel the following certifications demonstrate skills that are crucial for OpenStack:

Let’s take a deeper look at each one.

The first two are strictly OpenStack-based. To become a Red Hat Certified System Administrator in Red Hat OpenStack, you need to know how to deploy and operate an OpenStack private cloud. It is also required that you have a good knowledge of Red Hat OpenStack Platform features and how to take advantage of them.


A Red Hat Certified Engineer in Red Hat OpenStack is expected to be able to deploy and work with Red Hat Storage as well as have strong troubleshooting skills, especially around networking. The EX310 exam has recently been refreshed with a strong emphasis on Network Functions Virtualization (NFV) and advanced networking – which can be considered ‘must have’ skills in many OpenStack Telco use cases in the real world.

Since Red Hat OpenStack Platform comes with Red Hat CloudForms, knowledge of it can be as crucial as knowledge of OpenStack itself. Some folks even go as far as saying CloudForms is OpenStack's missing brother. The next certification on the list, the Red Hat Certified Specialist in Hybrid Cloud Management, focuses on managing infrastructure using Red Hat CloudForms. Where OpenStack focuses on abstracting compute, network, and storage, CloudForms takes care of the business side of the house: compliance, policies, chargeback, service catalogs, integration with public clouds, legacy virtualization, containers, and automation platforms. CloudForms really can do a lot, so you can see why it earns a place on this list.

But what about … Ansible?!

Photo by Jess Watters on Unsplash

For workload orchestration in OpenStack you can, of course, natively use Heat. However, if you want to become a truly advanced OpenStack user, you should consider exploring Ansible for these tasks. Ansible's biggest advantages are its simplicity and its flexibility across platforms (not just OpenStack). It is also popular within DevOps teams for on- and off-premises workload deployments. In fact, Ansible is a core technology behind Red Hat OpenStack Platform director, CloudForms, and Red Hat OpenShift Container Platform. It's literally everywhere in the Red Hat product suite!


One of the reasons for Ansible’s popularity is the amazing functionality it provides through many reusable modules and playbooks. The Red Hat Certified Specialist in Ansible Automation deeply tests your knowledge of writing Ansible playbooks for automation of workload deployments and system operation tasks.
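To give a flavor of the skills involved, here is a minimal playbook sketch that launches an instance on OpenStack using the os_server module; the cloud name, image, flavor, key, and network below are all placeholders:

    ---
    - name: Provision a web server on OpenStack
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Launch an instance
          os_server:
            cloud: mycloud        # entry in clouds.yaml (placeholder)
            name: web01
            image: rhel-7.5       # placeholder image name
            flavor: m1.small
            key_name: mykey
            network: private
            state: present

Run it with ansible-playbook and the instance appears in your project; change state to absent and the same playbook tears it down, which illustrates the idempotent style Ansible encourages.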

Virtualization of the nation

The last three certifications on this list (the Specialist certifications in Virtualization, Configuration Management, and OpenShift Administration), although not as closely related to OpenStack as the other certifications described here, extend the capability of your OpenStack skill set.

Many OpenStack deployments are complemented by standalone virtualization solutions such as Red Hat Virtualization. This is often useful for workloads not yet ready for a cloud platform. And with CloudForms, Red Hat Virtualization (RHV) and Red Hat OpenStack Platform can both be managed from one place, so having a solid understanding of Red Hat Virtualization can be very beneficial. This is why being a Red Hat Certified Specialist in Virtualization can be so crucial. Being able to run and manage both cloud native workloads and traditional virtualization is essential to your OpenStack skillset.

Photo by 贝莉儿 NG on Unsplash

Puppets and Containers

To round things off, since Red Hat OpenStack Platform utilizes Puppet, we recommend earning the Red Hat Certified Specialist in Configuration Management certification for a true OpenStack-focused RHCA. Through it you demonstrate skills in the underlying deployment mechanism, which gives you a much deeper understanding of how the platform is configured.

Finally, a popular use case for OpenStack is running containerized applications on top of it. Earning the Red Hat Certified Specialist in OpenShift Administration shows you know how to install and manage Red Hat’s enterprise container platform, Red Hat OpenShift Container Platform!

Reach for the stars!

Whether you are already an OpenStack expert or looking to become one, the Red Hat Certified Architect track from Red Hat Certification offers a framework for proving those skills through an industry-recognized, premier certification program. And if you follow our advice here, you will not only be perfecting your OpenStack skills but also mastering other highly important supporting technologies, including CloudForms, Ansible, Red Hat Virtualization, OpenShift, and Puppet, on your journey to the RHCA.

Photo by Greg Rakozy on Unsplash

So what is it like to actually GET these certifications? In the next part of our blog we share our accounts of achieving the RHCA! Check back soon and bookmark so you don’t miss it!


Ready to start your certification journey now? Get in touch with the friendly Red Hatters at Red Hat Training in your local area today to find all the ways you can master the skills you need to accelerate your career and run your enterprise cloud!


About our authors:

Chris Janiszewski is a Red Hat OpenStack Solutions Architect. He is proud to help his clients validate their business and technical use cases on OpenStack and supporting components like storage, networking, or cloud automation and management. He is the father of two little kids and enjoys the majority of his free time playing with them. When the kids are asleep he gets to put the "geek hat" on and build OpenStack labs to hack crazy use cases!



Ken Holden is a Senior Solution Architect with Red Hat. He has spent the past 3 years on the OpenStack Tiger Team with the primary responsibility of deploying Red Hat OpenStack Platform proof-of-concept IaaS clouds for strategic enterprise customers across North America. Throughout his 20-year career in enterprise IT, Ken has focused on Linux, Unix, storage, networking, and security, with the past 5 years primarily focused on cloud solutions. Ken has achieved Red Hat Certified Architect status (RHCA 110-009-776) and holds Certified OpenStack Administrator status (COA-1700-0387-0100) with the OpenStack Foundation. Outside of work, Ken spends the majority of his time with his wife and two daughters, but also aspires to be the world's most OK guitar player when time permits!


 

“Ultimate Private Cloud” Demo, Under The Hood!

At the recent Red Hat Summit in San Francisco, and more recently the OpenStack Summit in Vancouver, the OpenStack engineering team worked on some interesting demos for the keynote talks.

I've been directly involved with the deployment of Red Hat OpenShift Container Platform on bare metal using the Red Hat OpenStack Platform director deployment/management tool, integrated with openshift-ansible. I'll give some details of this demo, the upstream TripleO features related to this work, and insight into the potential use cases.

TripleO & Ansible, a Powerful Combination!

If you've used Red Hat OpenStack Platform director (or the upstream TripleO project, upon which it is based), you're familiar with the model of deploying a management node (the "undercloud" in TripleO terminology), then deploying and managing your OpenStack nodes on bare metal. However, TripleO also provides a very flexible and powerful combination of planning, deployment, and day-2 operations features. For instance, director allows us to manage and provision bare metal nodes, then deploy virtually any application onto those nodes via Ansible!

The "undercloud" management node makes use of several existing OpenStack services, including Ironic for discovery/introspection and provisioning of bare metal nodes; Heat, a declarative orchestration tool; and Mistral, a workflow engine. It also provides a convenient UI, showcased in the demo, along with flexible CLI interfaces and standard OpenStack REST APIs for automation.

As described in the demo, director has many useful features for managing your hardware inventory – you can either register or auto-discover your nodes, then do introspection (with optional benchmarking tests) to discover the hardware characteristics via the OpenStack ironic-inspector service.  Nodes can then be matched to a particular profile either manually or via rules implemented through the OpenStack Mistral workflow API. You are then ready to deploy an Operating System image onto the nodes using the OpenStack Ironic “bare metal-as-a-service” API.
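In practice, that hardware-management workflow maps to a handful of director CLI calls. A rough sketch (the inventory file name and node UUID are placeholders):

    # Register nodes described in an instackenv.json inventory file
    openstack overcloud node import ~/instackenv.json

    # Introspect all manageable nodes, then mark them available for deployment
    openstack overcloud node introspect --all-manageable --provide

    # Manually tag a node into the "control" profile
    openstack baremetal node set \
      --property capabilities='profile:control,boot_option:local' <node-uuid>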

When deciding what will be deployed onto your nodes, director has the concept of a "deployment plan," which ties together the nodes/profiles to be used and the configuration, known as "roles" in TripleO terminology, to be applied to them.

This is a pretty flexible system, enabling a high degree of operator customization and extension through custom roles where needed, as well as supporting network isolation and custom networks (isolated networks for different types of traffic), declarative configuration of network interfaces, and much more!
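At deploy time those choices surface as arguments to the overcloud deploy command, for example a custom roles file plus the network isolation environment shipped with the templates. A sketch, with the user-provided file names as placeholders:

    openstack overcloud deploy --templates \
      -r ~/templates/roles_data.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e ~/templates/network-environment.yaml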

Deploying Red Hat OpenShift Container Platform on bare metal

What was new in the Summit demo was deploying OpenShift alongside OpenStack, both on bare metal, and both managed by Red Hat OpenStack Platform director. Over the last few releases we've made good progress on Ansible integration in TripleO, including enabling integration with "external" installers. We've made use of that capability here to deploy OpenShift via TripleO, combining the powerful bare-metal management capabilities of TripleO with the existing openshift-ansible management of configuration.
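Conceptually, this is just another overcloud deployment: you point director at OpenShift-specific roles and an environment file that wires TripleO's external deploy steps to openshift-ansible. A very rough sketch, noting that the exact roles file and environment file names vary by release and are assumptions here:

    openstack overcloud deploy --templates \
      -r ~/openshift_roles_data.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/openshift.yaml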

Integration between Red Hat OpenStack Platform and Red Hat OpenShift Container Platform

Something we didn't have time to cover in great detail during the demo was the potential for integration between OpenStack and OpenShift. If you have an existing Red Hat OpenStack Platform deployment, you can choose to deploy OpenShift with persistent volumes backed by Cinder (the OpenStack block storage service). For networking integration, the Kuryr project, combined with OVN from Open vSwitch, enables the sharing of a common overlay network between both platforms, without the overhead of double encapsulation.

This makes it easy to add OpenShift managed containers to your infrastructure, while almost seamlessly integrating them with VM workloads running on OpenStack. You can also take advantage of existing OpenStack capacity and vendor support while using the container management capabilities of OpenShift.
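On the OpenShift side, Cinder-backed storage shows up as an ordinary StorageClass and PersistentVolumeClaim. A minimal sketch using the in-tree Cinder provisioner (names and sizes are placeholders):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: cinder-standard
    provisioner: kubernetes.io/cinder
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: app-data
    spec:
      storageClassName: cinder-standard
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi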

Container-native virtualization

After we deployed OpenShift, we saw some exciting demos focused on workloads running on OpenShift, including a preview of the new container-native virtualization (CNV) feature. CNV uses the upstream KubeVirt project to run virtual machine (VM) workloads directly on OpenShift.

Unlike the OpenShift and OpenStack combination described above, here OpenShift manages the VM workloads, providing an easier way to transition your VM workloads where no existing virtualization solution is in place. The bare-metal deployment capabilities outlined earlier are particularly relevant here, as you may want to run OpenShift worker nodes that host VMs on bare metal for improved performance. As the demo has shown, the combination of director and openshift-ansible makes deploying, managing, and running OpenShift and OpenStack easier to achieve!
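With CNV, a virtual machine becomes just another Kubernetes object that OpenShift schedules and manages. A minimal sketch of a KubeVirt VirtualMachine definition (the API version and field names vary across KubeVirt releases, and the container disk image is a demo placeholder):

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: demo-vm
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 1Gi
          volumes:
            - name: rootdisk
              containerDisk:
                image: kubevirt/cirros-container-disk-demo  # demo image (placeholder)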

 

A modern hybrid cloud platform for innovation: Containers on Cloud with OpenShift on OpenStack

Market trends show that due to long application life-cycles and the high cost of change, enterprises will be dealing with a mix of bare-metal, virtualized, and containerized applications for many years to come. This is true even as greenfield investment moves to a more container-focused approach.

Red Hat® OpenStack® Platform provides a solution to the problem of managing large-scale infrastructure, a problem that is not immediately solved by containers or the systems that orchestrate them.

In the OpenStack world, everything can be automated. Provisioning a VM, a storage volume, a new subnet, or a firewall rule can all be done through an easy-to-use UI or from the command line, leveraging the OpenStack APIs. In a traditional environment, those same infrastructure requests might require a ticket and some internal processing, and could take weeks. Now such provisioning can be done with a script or a playbook, and can be completely automated.
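For example, the kind of request that once took a ticket and a long wait can be a few CLI calls (the names, address range, and sizes below are placeholders):

    # Network, subnet, volume, VM, and a firewall rule, all from the command line
    openstack network create app-net
    openstack subnet create app-subnet --network app-net --subnet-range 192.168.10.0/24
    openstack volume create --size 20 app-data
    openstack server create --flavor m1.small --image rhel-7.5 --network app-net web01
    openstack security group rule create --protocol tcp --dst-port 443 default

Wrap those same calls in a script, an Ansible playbook, or a Heat template and the whole request becomes repeatable and reviewable.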

The applications and workloads can specify cloud resources to be provisioned and spun up from a definition file. This enables new levels of provision-as-you-need-it: as demand increases, the infrastructure resources can be easily scaled, and operational data and meters can trigger and orchestrate new infrastructure provisioning automatically when needed.
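In OpenStack, that definition file is typically a Heat Orchestration Template (HOT). A minimal sketch that declares a server as a resource (the parameter defaults and network name are placeholders):

    heat_template_version: 2016-10-14
    description: Minimal example of declaring infrastructure in a definition file
    parameters:
      flavor:
        type: string
        default: m1.small
      image:
        type: string
        default: rhel-7.5
    resources:
      app_server:
        type: OS::Nova::Server
        properties:
          flavor: { get_param: flavor }
          image: { get_param: image }
          networks:
            - network: app-net

Creating the stack with openstack stack create -t app.yaml app-stack provisions everything the template declares, and scaling becomes a matter of changing the template rather than filing tickets.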

On the consumption side, it is no longer a developer ssh'ing into a server and manually deploying an application server. Now it's a matter of running a few OpenShift commands: select from a list of predefined applications, language runtimes, and databases, and have those resources provisioned on top of the target infrastructure that was automatically provisioned and configured.

Red Hat OpenShift Container Platform gives you the ability to define an application from a single YAML file. This makes it convenient for a developer to share with other developers, allowing them to launch an exact copy of that application, make code changes, and share it back. This capability is only possible when you have automation at this level.
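In practice, "a few OpenShift commands" can look like the sketch below, and the resulting application objects can be exported as YAML to share with other developers (the project name is a placeholder; the sample application is one of the public OpenShift example repositories):

    # Deploy an application from a Git repository
    oc new-project demo
    oc new-app https://github.com/openshift/ruby-hello-world
    oc expose service ruby-hello-world

    # Export the application's objects as a single YAML file to share
    oc get all -o yaml > ruby-hello-world-app.yaml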

Infrastructure and application platform resources are now exposed in an easy, consumable way, and the days when you needed to buy a server, manually connect it to the network, and install runtimes and applications by hand are very much a thing of the past.

With Red Hat OpenShift Container Platform on Red Hat OpenStack Platform you get:

A WORKLOAD-DRIVEN IT PLATFORM: The underlying infrastructure doesn't matter from a developer perspective. Container platforms exist to ensure the apps are the main focus. As a developer, I only care about the apps and I want a consistent experience, regardless of the underlying infrastructure platform. OpenStack provides this to OpenShift.

DEEP PLATFORM INTEGRATION: Networking (Kuryr), services (Ironic, Barbican, Octavia), storage (Cinder, Ceph), and installation (openshift-ansible) are all engineered to work together to provide the tightest integration across the stack, right down to bare metal. All are based on Linux® and engineered in the open source community for exceptional performance.

PROGRAMMATIC SCALE-OUT: OpenStack is 100% API-driven across the infrastructure software stack. Storage, networking, compute VMs, and even bare metal can all be scaled out rapidly and programmatically. With scale under your workloads, growth is easy.

ACROSS ANY TYPE OF INFRASTRUCTURE: OpenStack can use bare metal for virtualization or for direct consumption. It can interact with network switches and storage directly to ensure hardware is put to work for the workloads it supports.

FULLY MANAGED: Red Hat CloudForms and Red Hat Ansible Automation provide common tooling across multiple providers. Ansible is Red Hat's automation engine for everything, and it's present under the hood in Red Hat CloudForms. With Red Hat OpenStack Platform, Red Hat CloudForms is deeply integrated into the overcloud, the undercloud, and the container platform on top. Full stack awareness means total control. And our Red Hat Cloud Suite bundle provides access to OpenStack and OpenShift, as well as an array of supporting technologies: Red Hat Satellite, Red Hat Virtualization, Red Hat Insights, and even Red Hat CloudForms are all included!

A SOLID FOUNDATION: All Red Hat products are co-engineered with Red Hat Enterprise Linux at their core. Fixes happen quickly and accurately because all components of the stack are developed and tested in unison. Whether an issue lies at the PaaS, IaaS, or underlying Linux layer, Red Hat will support you all the way!

Red Hat Services can help you accelerate your journey to hybrid cloud adoption and realize the most value from best-of-breed open source technology platforms such as OpenShift on top of OpenStack. Want to learn more about how we can help? Feel free to reach out to me directly with any questions at slefrere@redhat.com, or download our Containers on Cloud datasheet.