Azure Virtual Networking

Azure Virtual Network is a construct that allows you to connect virtual network interface cards to a virtual network, enabling TCP/IP-based communications between network-enabled devices. Azure Virtual Machines connected to an Azure Virtual Network can communicate with devices on the same Azure Virtual Network, on different Azure Virtual Networks, on the Internet, or even on your own on-premises networks.

Azure Network security best practices

1. Logically segment subnets

The idea behind an Azure Virtual Network is that you create a single private IP address space-based network on which you can place all your Azure Virtual Machines.

You segment the larger address space into subnets using CIDR-based subnetting principles. Routing between subnets happens automatically, so you do not need to manually configure routing tables. To create network access controls between subnets, you place a Network Security Group (NSG) between them.

NSGs are simple stateful packet inspection devices that use the 5-tuple (the source IP, source port, destination IP, destination port, and layer 4 protocol) approach to create allow/deny rules for network traffic.

For example, think of a simple 3-tier application that has a web tier, an application logic tier and a database tier. You put virtual machines that belong to each of these tiers into their own subnets. Then you use NSGs to control traffic between the subnets:

  • Web tier virtual machines can only initiate connections to the application logic machines and can only accept connections from the Internet
  • Application logic virtual machines can only initiate connections with the database tier and can only accept connections from the web tier
  • Database tier virtual machines cannot initiate connections with anything outside of their own subnet and can only accept connections from the application logic tier
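
As a rough sketch of how the web-tier rules might be expressed, the following Azure PowerShell (AzureRM module) snippet creates an NSG that allows inbound HTTPS from the Internet, denies everything else inbound, and binds the NSG to the web subnet. The resource group, virtual network, subnet name, and address prefixes are placeholder assumptions, not values from this article, and it assumes you are already signed in with Login-AzureRmAccount.

    # Allow HTTPS from the Internet to the web tier
    $webIn = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-Https-From-Internet" `
        -Direction Inbound -Access Allow -Priority 100 -Protocol Tcp `
        -SourceAddressPrefix Internet -SourcePortRange * `
        -DestinationAddressPrefix "10.0.1.0/24" -DestinationPortRange 443

    # Deny all other inbound traffic to the web tier
    $webDeny = New-AzureRmNetworkSecurityRuleConfig -Name "Deny-All-Inbound" `
        -Direction Inbound -Access Deny -Priority 4096 -Protocol * `
        -SourceAddressPrefix * -SourcePortRange * `
        -DestinationAddressPrefix * -DestinationPortRange *

    # Create the NSG and associate it with the web subnet
    $nsg  = New-AzureRmNetworkSecurityGroup -ResourceGroupName "MyRg" -Location "westus" `
        -Name "Web-Tier-Nsg" -SecurityRules $webIn, $webDeny
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRg" -Name "MyVnet"
    Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "WebSubnet" `
        -AddressPrefix "10.0.1.0/24" -NetworkSecurityGroup $nsg
    Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

The application logic and database subnets would get their own NSGs with analogous rules for the flows listed above.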

2. Control routing behavior

When you put a virtual machine on an Azure Virtual Network, you’ll notice that the virtual machine can connect to any other virtual machine on the same Azure Virtual Network, even if the other virtual machines are on different subnets. This is possible because a collection of system routes, enabled by default, allows this type of communication. If the default routes don’t fit your needs, you can override them by creating user-defined routes (UDRs) and assigning them to specific subnets, for example to force traffic through a virtual network security appliance.
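
To make that concrete, here is a minimal, hedged AzureRM sketch of a user-defined route that sends all traffic leaving a subnet through a network virtual appliance; the appliance IP (10.0.2.4), names, and prefixes are assumptions for illustration.

    # Route all traffic leaving the subnet through a virtual appliance at 10.0.2.4
    $route = New-AzureRmRouteConfig -Name "To-Nva" -AddressPrefix "0.0.0.0/0" `
        -NextHopType VirtualAppliance -NextHopIpAddress "10.0.2.4"
    $routeTable = New-AzureRmRouteTable -ResourceGroupName "MyRg" -Location "westus" `
        -Name "Web-Tier-Routes" -Route $route

    # Associating the route table with a subnet overrides the default system routes
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRg" -Name "MyVnet"
    Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "WebSubnet" `
        -AddressPrefix "10.0.1.0/24" -RouteTable $routeTable
    Set-AzureRmVirtualNetwork -VirtualNetwork $vnet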

3. Enable Forced Tunneling

Imagine that you establish a VPN connection from your hotel room to your corporate network. This connection allows you to access corporate resources, and all communications to your corporate network go through the VPN tunnel. If you also connect to resources on the Internet, the behavior depends on split tunneling: when split tunneling is enabled, those Internet-bound connections go directly to the Internet and not through the VPN tunnel. Some security experts consider this a potential risk and therefore recommend that split tunneling be disabled, so that all connections, both those destined for the Internet and those destined for corporate resources, go through the VPN tunnel.

The default routes for an Azure Virtual Network allow virtual machines to initiate traffic to the Internet. This too can represent a security risk, as these outbound connections could increase the attack surface of a virtual machine and be leveraged by attackers. For this reason, we recommend that you enable forced tunneling on your virtual machines when you have cross-premises connectivity between your Azure Virtual Network and your on-premises network.
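
Forced tunneling is typically expressed as a default route (0.0.0.0/0) whose next hop is the virtual network gateway, plus a default site on that gateway. The sketch below assumes an existing route-based VPN gateway and local network gateway with the placeholder names shown; treat it as an outline rather than a complete configuration.

    # Default route that sends Internet-bound traffic back through the VPN gateway
    $default = New-AzureRmRouteConfig -Name "Force-Tunnel" -AddressPrefix "0.0.0.0/0" `
        -NextHopType VirtualNetworkGateway
    $rt = New-AzureRmRouteTable -ResourceGroupName "MyRg" -Location "westus" `
        -Name "Forced-Tunnel-Routes" -Route $default
    # (Associate $rt with the relevant subnets as in the earlier routing sketch.)

    # Tell the gateway which on-premises site should receive the tunneled traffic
    $gateway    = Get-AzureRmVirtualNetworkGateway -ResourceGroupName "MyRg" -Name "MyVnetGateway"
    $onPremSite = Get-AzureRmLocalNetworkGateway -ResourceGroupName "MyRg" -Name "OnPremSite"
    Set-AzureRmVirtualNetworkGatewayDefaultSite -VirtualNetworkGateway $gateway -GatewayDefaultSite $onPremSite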

If you do not have a cross-premises connection, make sure you take advantage of Network Security Groups or Azure virtual network security appliances to prevent outbound connections to the Internet from your Azure Virtual Machines.

4. Use virtual network appliances

While Network Security Groups and User Defined Routing can provide a certain measure of network security at the network and transport layers of the OSI model, there are going to be situations where you’ll want or need to enable security at higher levels of the stack by using virtual network security appliances provided by Azure partners.

Some of the network security capabilities provided by virtual network security appliances include:

  • Firewalling
  • Intrusion detection/Intrusion Prevention
  • Vulnerability management
  • Application control
  • Network-based anomaly detection
  • Web filtering
  • Antivirus
  • Botnet protection

5. Deploy DMZs for security zoning

A DMZ or “perimeter network” is a physical or logical network segment that is designed to provide an additional layer of security between your assets and the Internet. The intent of the DMZ is to place specialized network access control devices on the edge of the DMZ network so that only desired traffic is allowed past the network security device and into your Azure Virtual Network.

DMZs are useful because you can focus your network access control management, monitoring, logging and reporting on the devices at the edge of your Azure Virtual Network. Here you would typically enable DDoS prevention, Intrusion Detection/Intrusion Prevention systems (IDS/IPS), firewall rules and policies, web filtering, network antimalware and more.

In the hybrid IT scenario, there is usually some type of cross-premises connectivity. This cross-premises connectivity allows the company to connect their on-premises networks to Azure Virtual Networks. There are two cross-premises connectivity solutions available:

  • Site-to-site VPN
  • ExpressRoute

Site-to-site VPN represents a virtual private connection between your on-premises network and an Azure Virtual Network. This connection takes place over the Internet and allows you to “tunnel” information inside an encrypted link between your network and Azure. While site-to-site VPN is a trusted, reliable, and established technology, traffic within the tunnel does traverse the Internet. In addition, bandwidth is relatively constrained to a maximum of about 200 Mbps.
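
For orientation, here is a condensed sketch of the pieces involved in a site-to-site connection with the AzureRM module. The gateway subnet, public IP, on-premises VPN device address (203.0.113.10), address prefixes, and shared key are placeholder assumptions, and gateway creation itself can take a long time to complete.

    # Represent the on-premises network and its VPN device
    $onPrem = New-AzureRmLocalNetworkGateway -ResourceGroupName "MyRg" -Location "westus" `
        -Name "OnPremSite" -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/16"

    # Build the Azure VPN gateway in the virtual network's GatewaySubnet
    $vnet   = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRg" -Name "MyVnet"
    $subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "GatewaySubnet"
    $pip    = New-AzureRmPublicIpAddress -ResourceGroupName "MyRg" -Location "westus" `
        -Name "VpnGatewayIp" -AllocationMethod Dynamic
    $ipCfg  = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwIpCfg" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
    $gw     = New-AzureRmVirtualNetworkGateway -ResourceGroupName "MyRg" -Location "westus" `
        -Name "MyVnetGateway" -IpConfigurations $ipCfg -GatewayType Vpn -VpnType RouteBased

    # Create the IPsec connection between the two gateways
    New-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName "MyRg" -Location "westus" `
        -Name "OnPremToAzure" -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $onPrem `
        -ConnectionType IPsec -SharedKey "ReplaceWithAStrongSharedKey"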

If you require an exceptional level of security or performance for your cross-premises connections, we recommend that you use Azure ExpressRoute for your cross-premises connectivity. ExpressRoute is a dedicated WAN link between your on-premises location (or an Exchange hosting provider) and Azure. Because this is a telco connection, your data doesn’t travel over the Internet and therefore is not exposed to the potential risks inherent in Internet communications.

Optimize uptime and performance

Load balancing is a method of distributing network traffic across servers that are part of a service. For example, if you have front-end web servers as part of your service, you can use load balancing to distribute the traffic across your multiple front-end web servers.

At the Azure Virtual Network level, Azure provides you with three primary load balancing options:

  • HTTP-based load balancing
  • External load balancing
  • Internal load balancing

HTTP-based Load Balancing

HTTP-based load balancing decides which server to send connections to based on characteristics of the HTTP protocol. Azure’s HTTP load balancer goes by the name Application Gateway. We recommend Application Gateway for:

  • Applications that require requests from the same user/client session to reach the same back-end virtual machine. Examples of this would be shopping cart apps and web mail servers.
  • Applications that want to free web server farms from SSL termination overhead by taking advantage of Application Gateway’s SSL offload feature.
  • Applications, such as a content delivery network, that require multiple HTTP requests on the same long-running TCP connection to be routed or load balanced to different back-end servers.
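
To illustrate the sticky-session case, the sketch below creates an Application Gateway with cookie-based affinity enabled using the AzureRM module. The dedicated gateway subnet, public IP, back-end addresses, and sizes are assumptions chosen for brevity.

    $vnet   = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRg" -Name "MyVnet"
    $subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AppGatewaySubnet"
    $pip    = New-AzureRmPublicIpAddress -ResourceGroupName "MyRg" -Location "westus" `
        -Name "AppGwIp" -AllocationMethod Dynamic

    $gwIp   = New-AzureRmApplicationGatewayIPConfiguration -Name "gwIpCfg" -Subnet $subnet
    $feIp   = New-AzureRmApplicationGatewayFrontendIPConfig -Name "feIp" -PublicIPAddress $pip
    $fePort = New-AzureRmApplicationGatewayFrontendPort -Name "fePort" -Port 80
    $pool   = New-AzureRmApplicationGatewayBackendAddressPool -Name "webFarm" -BackendIPAddresses "10.0.1.4", "10.0.1.5"

    # Cookie-based affinity pins a client session to the same back-end server
    $settings = New-AzureRmApplicationGatewayBackendHttpSettings -Name "httpSettings" `
        -Port 80 -Protocol Http -CookieBasedAffinity Enabled -RequestTimeout 30
    $listener = New-AzureRmApplicationGatewayHttpListener -Name "listener" -Protocol Http `
        -FrontendIPConfiguration $feIp -FrontendPort $fePort
    $rule     = New-AzureRmApplicationGatewayRequestRoutingRule -Name "rule1" -RuleType Basic `
        -HttpListener $listener -BackendAddressPool $pool -BackendHttpSettings $settings
    $sku      = New-AzureRmApplicationGatewaySku -Name Standard_Small -Tier Standard -Capacity 2

    New-AzureRmApplicationGateway -ResourceGroupName "MyRg" -Location "westus" -Name "MyAppGateway" `
        -GatewayIPConfigurations $gwIp -FrontendIPConfigurations $feIp -FrontendPorts $fePort `
        -BackendAddressPools $pool -BackendHttpSettingsCollection $settings `
        -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku

SSL offload would add a certificate and an HTTPS listener on top of the same structure.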

External Load Balancing

The Azure external load balancer distributes incoming connections from the Internet among your servers located in an Azure Virtual Network. We recommend it when you don’t require sticky sessions or SSL offload.
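
A minimal external load balancer sketch with the AzureRM module follows; the names, ports, and probe path are assumptions, and the virtual machines’ NICs would then be added to the back-end pool.

    $pip = New-AzureRmPublicIpAddress -ResourceGroupName "MyRg" -Location "westus" `
        -Name "LbPublicIp" -AllocationMethod Static

    $frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "frontend" -PublicIpAddress $pip
    $pool     = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "webPool"
    $probe    = New-AzureRmLoadBalancerProbeConfig -Name "httpProbe" -Protocol Http -Port 80 `
        -RequestPath "/" -IntervalInSeconds 15 -ProbeCount 2
    $rule     = New-AzureRmLoadBalancerRuleConfig -Name "httpRule" -Protocol Tcp -FrontendPort 80 -BackendPort 80 `
        -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe

    New-AzureRmLoadBalancer -ResourceGroupName "MyRg" -Location "westus" -Name "WebLb" `
        -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule

An internal load balancer (described next) is built the same way, except the frontend IP configuration uses a private address on a subnet (-Subnet and -PrivateIpAddress) instead of a public IP.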

Internal Load Balancing

An internal load balancer is similar to the external load balancer, but it accepts connections only from sources that are not on the Internet. Use internal load balancing for scenarios that benefit from this capability, such as when you need to load balance connections to SQL Servers or internal web servers.


Service Fabric Use case: E-commerce

To bring new services to their customers and optimize their operations, e-commerce and airline companies are adopting cloud computing as a key element in their long-term strategy. The systems supporting their websites enable people to add products to a cart or make reservations, look up product or travel information, manage their plans, get customer support, and more.

Customers provide personal information and make payments through the website, so e-commerce companies design their systems to be fault tolerant and extremely secure. One such architectural design is an API-driven strategy. APIs support reuse and enable these companies to scale features and services independently. APIs also provide a single point of business logic regardless of the device customers use to access services.

This use case looks at improving and modernizing such an implementation by porting discrete services into Azure. The process gets a lot easier with Service Fabric, a microservices platform.

Evolution from monolithic to microservices-based service delivery

Traditionally, development teams have been creating monolithic systems and hosting them on-premises for years. To ease reuse and increase the agility of specific business functions, teams have been gradually splitting many of these systems apart by porting aspects into discrete services. Microsoft development technologies and platforms, together with Microsoft Azure, provide an architectural approach that can make this possible.

The most straightforward approach, they decided, was to pull out common aspects of their back-end systems and turn them into ASP.NET Web APIs. Over time, supporting a large number of customers and emerging mobile-based delivery models becomes increasingly difficult: services can be lost to networking, hardware, or virtual machine issues, and performance suffers when usage has to be governed to maintain uptime requirements. To address these scale and capacity issues, the hosting model can be extended with several Azure services: notably, Azure App Service for hosting the ported APIs, Azure API Management for secured access to those hosted APIs, and Redis Cache for session management.

The next challenge

Azure App Service makes it easy to extend web apps to support mobile clients, but customer demand can outpace the system’s ability to scale the more demanding back-end services under high request volume for the API services. The underlying problem is a design flaw that prevents the system from recovering gracefully from failures, on top of the inconsistent performance of the APIs and their downstream dependencies. The immediate resolution is horizontal scaling.

For example, imagine a Shopping Cart API. It is an absolutely critical business component, so service disruptions aren’t acceptable. Shopping cart APIs are also often components that were not built on a Microsoft-based platform. Any downtime in the API means lost bookings, lost revenue, and unhappy customers. Peak usage times can require hundreds of virtual machines, or nodes, to host this service and ensure zero interruptions. Azure App Service does not support this level of horizontal scaling.

A possible solution

Azure Service Fabric has rapidly gained traction with a wide variety of customers, from financial services to healthcare, gaming, and especially IoT. Service Fabric is a container orchestrator for both Windows and Linux.

[Figure: ServiceFabric-1]

A Service Fabric cluster can be self-hosted on a local machine as well as hosted in Azure. Because Service Fabric is datacenter-agnostic, it provides a way forward for hosting the ported APIs. This architecture also offers the capabilities and flexibility needed for the new service delivery platform.

Returning to the example of the Shopping Cart API: say the API requires Internet Information Services (IIS) and is not built on .NET Core. That makes it incompatible with Service Fabric out of the box, because IIS isn’t supported as an application host; Service Fabric expects workloads to be self-hosted. This is yet another challenge.

The solution to this challenge involves yet another technology: Windows Server Containers. The combination of containers and Service Fabric provides a solution for the Shopping Cart API that doesn’t require refactoring or rewriting the service. Containers combined with Service Fabric’s upgrade and management capabilities resolve another issue as well: keeping the container images up to date in production.
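
As a hedged sketch of what that deployment step can look like, the Service Fabric PowerShell cmdlets below register and start a containerized application on a cluster. The cluster endpoint, package path, and application and type names are placeholders, and an unsecured endpoint is shown only for brevity; production clusters use certificate authentication.

    # Connect to the cluster's management endpoint
    Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000"

    # Copy the application package (whose service manifest references the container image) to the image store
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\packages\ShoppingCartApp" `
        -ImageStoreConnectionString "fabric:ImageStore" -ApplicationPackagePathInImageStore "ShoppingCartApp"

    # Register the application type and create a running instance of it
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore "ShoppingCartApp"
    New-ServiceFabricApplication -ApplicationName "fabric:/ShoppingCartApp" `
        -ApplicationTypeName "ShoppingCartAppType" -ApplicationTypeVersion "1.0.0"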

[Figure: container]

Instead of changing the core software or the platform (IIS) hosting the distribution, you can simply add a few steps to the VSTS build pipeline: a Docker image build activity (which packages the software together with the IIS distribution) and a push of that image to a private Azure Container Registry using PowerShell.
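
Expressed as PowerShell, those pipeline steps amount to roughly the following; the registry name (myregistry), resource group, and image tag are assumptions for illustration.

    # Build the image from a Dockerfile that layers the API package on top of an IIS base image
    docker build -t myregistry.azurecr.io/shoppingcart-api:1.0.0 .

    # Authenticate to the private Azure Container Registry and push the image
    $creds = Get-AzureRmContainerRegistryCredential -ResourceGroupName "MyRg" -Name "myregistry"
    docker login myregistry.azurecr.io -u $creds.Username -p $creds.Password
    docker push myregistry.azurecr.io/shoppingcart-api:1.0.0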

Logical architecture

[Figure: Logical architecture]

Improved performance and reliability

Reliability at all times is one of the most important factors in deciding on containers and Service Fabric; correcting errors manually is too slow to be acceptable. Service Fabric provides a highly available environment by monitoring each node and self-correcting when a node, or the services on it, fails. A Service Fabric cluster relies on virtual machine scale sets to handle availability and auto-scaling based on custom metrics. For example, virtual machine instances can be restored within a cluster during a failure event. Service Fabric also rebalances services across the cluster and ensures a node is operational before that node and its services are allowed to accept incoming requests.

Savings at scale

The Shopping Cart API isn’t necessarily CPU-intensive, so vertical scaling is less important than bandwidth for the many incoming connections and requests. Service Fabric can ensure that container services are deployed on all nodes in the cluster. Because scale sets are set up for the different types of nodes in the Service Fabric cluster, each type of node can be scaled out independently and managed easily, either by specific performance characteristics or by predicted peak load times.
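
For example, scaling out only the scale set that backs a front-end node type can be as simple as the sketch below; the resource group and scale set names are hypothetical, and in practice you would more likely attach an autoscale rule to the same scale set.

    # Scale out the scale set behind one node type without touching the others
    $vmss = Get-AzureRmVmss -ResourceGroupName "MySfClusterRg" -VMScaleSetName "FrontEndNodeType"
    $vmss.Sku.Capacity = 10
    Update-AzureRmVmss -ResourceGroupName "MySfClusterRg" -VMScaleSetName "FrontEndNodeType" -VirtualMachineScaleSet $vmss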

Use cases for Azure Container Service, Azure Service Fabric and Azure Functions

Azure Container Service (ACS) provides a bridge between Azure and the existing container ecosystem. It lets developers manage the underlying infrastructure (VMs, storage, load balancing, and so on) separately from the application. It lets developers choose from the three major container orchestrators available today: DC/OS, Swarm, and Kubernetes. It works with the container ecosystem to deliver on the promise of application virtualization. In short, it offers the glue between your infrastructure and your favorite container orchestrator.

ACS focuses on smaller services. You can start by containerizing a monolith and then gradually break it down into multiple smaller services. This process of breaking a large monolith into smaller services is entirely under the control of the application architects and developers; ACS gives the development team that flexibility. ACS is content with a containerized application regardless of what the application does or how it is written.

So, ACS is primarily about Azure Infrastructure and an orchestrator!

Azure Service Fabric, on the other hand, is about more than just Azure infrastructure and an orchestrator. It is neutral on infrastructure, which is why it can run on-premises as well as on any cloud (including AWS and Azure). This makes it a great option for teams looking to build an application with a microservices architecture on-premises; when the decisions on whether to move to the cloud, and which cloud, are finalized, they simply port the application from on-premises to the cloud. Unlike ACS, Service Fabric provides prescriptive guidance on how the application should be written. It provides a full-blown programming model, which proposes either Reliable Services or the Actor model for writing applications. Applications written with this programming model can be either stateful or stateless, and are more fault tolerant and easier to scale. Multiple such services form a microservices-based application. Service Fabric gives developers more control (and ownership!) of the underlying infrastructure.

Regardless of whether Azure Container Service or Azure Service Fabric is selected, as the application architecture evolves and these microservices are decomposed further and further, they are eventually reduced essentially to single functions. This is where Azure Functions comes into the picture.

Azure Functions evolved from WebJobs, which are part of App Service. Functions are great at executing logic either in response to a trigger or on a pre-defined schedule. They have a simple programming model that doesn’t need an elaborate infrastructure setup, which is why they fall into the serverless computing category.

So how do you make a decision?

Azure Container Service: If you are looking to deploy your application in a Linux environment and are comfortable with an orchestrator such as Swarm, Kubernetes, or DC/OS, use ACS. A typical multi-tier application (such as a web front end, a caching layer, an API layer, and a database layer) can easily be containerized with a single dockerfile (or docker-compose file) and then gradually decomposed into smaller services. This approach provides the immediate benefit of application portability. Containers are an open technology, and there is great community support around them.

Azure Service Fabric: If an application must have its state saved locally, then use Service Fabric. It is also a good choice if you are looking to deploy an application in the Windows Server ecosystem (Linux support is in the works as well!). Refer to common workloads on Service Fabric for more discussion of applications that can benefit from it. The biggest benefit is that Service Fabric applications can run on-premises, on Azure, or even on other cloud platforms.

Azure Functions: If an application needs an HTTP endpoint for a potentially long-running process without getting into an elaborate programming model, Azure Functions is a good option. Functions can be developed as an extension of an existing application. They support routing-based endpoints similar to an API, and they support AAD and other authentication, SSL, custom domains, RBAC, and so on. They have good CI/CD support as well, and .NET Core support is in the works. So if you are looking for a simple application model and don’t want to get into setting up and managing the underlying infrastructure, Azure Functions is a good choice.

OMS Architecture

Operations Management Suite (OMS) is a collection of cloud-based services for managing your on-premises and cloud environments. This article describes the different on-premises and cloud components of OMS and their high-level cloud computing architecture.

Components:

  1. Log Analytics
  2. Azure Automation
  3. Azure Backup
  4. Azure Site Recovery

Log Analytics

If you have no current monitoring in place for your Azure environment, you should start with Azure Monitor, which collects and analyzes monitoring data for your Azure resources. Log Analytics can collect data from Azure Monitor to correlate it with other data and provide additional analysis.

If you want to monitor your on-premises environment or you have existing monitoring using services such as Azure Monitor or System Center Operations Manager, then Log Analytics can add significant value. It can collect data directly from your agents and also from these other tools into a single repository. Analysis tools in Log Analytics such as log searches, views, and solutions work against all collected data, providing you with centralized analysis of your entire environment.

Architecture

The deployment requirements of Log Analytics are minimal since the central components are hosted in the Azure cloud. This includes the repository in addition to the services that allow you to correlate and analyze collected data.

[Figure: Log Analytics architecture]

All data collected by Log Analytics is stored in the OMS repository, which is hosted in Azure. Connected sources generate the data collected into the OMS repository. There are currently three types of connected sources supported:

  • An agent installed on a Windows or Linux computer connected directly to OMS.
  • A System Center Operations Manager (SCOM) management group connected to Log Analytics. SCOM agents continue to communicate with management servers, which forward events and performance data to Log Analytics.
  • An Azure storage account that collects Azure Diagnostics data from a worker role, web role, or virtual machine in Azure.
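
As a sketch of how the first type of connected source is onboarded (the workspace name, resource group, and pricing SKU are assumptions), you create a workspace and then give the agent its workspace ID and key:

    # Create an OMS/Log Analytics workspace
    $workspace = New-AzureRmOperationalInsightsWorkspace -ResourceGroupName "MyRg" `
        -Name "MyOmsWorkspace" -Location "eastus" -Sku "PerNode"

    # A directly connected Windows or Linux agent is configured with the workspace ID and key
    $keys = Get-AzureRmOperationalInsightsWorkspaceSharedKeys -ResourceGroupName "MyRg" -Name "MyOmsWorkspace"
    $workspace.CustomerId      # workspace ID
    $keys.PrimarySharedKey     # workspace key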

Azure Log Analytics can also be used to monitor Hadoop cluster operations in HDInsight.

Currently, you can use Azure Operations Management Suite with the following HDInsight cluster types:

  • Hadoop
  • HBase
  • Interactive Query
  • Kafka
  • Spark
  • Storm
