Securing Multi-tenant Azure App Services using Azure Private Link

Microsoft has recently released a public preview of Private Link for Azure App Service. The preview is available in limited regions for all PremiumV2 Windows and Linux web apps. Until now, securing App Services through virtual network isolation was only possible with App Service Environments (ASE), which are generally expensive and have long initial deployment cycles as a drawback.

Private Link exposes your app on an address in your VNet and removes it from public access. This not only secures the app but can also be combined with Network Security Groups to secure your network.

The feature is currently available in East US and West US 2. For the scope of this blog post, I will be creating the Azure resources in the West US 2 region.

Create a P1v2 App Service plan in the West US 2 region and create a Private Endpoint in the same region.
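The same setup can be sketched with the Azure CLI. This is a minimal sketch with hypothetical resource names (my-rg, my-plan, my-app, my-vnet), not the exact commands used for this post:

```shell
# Create the P1v2 plan and a web app in West US 2 (hypothetical names)
az appservice plan create --resource-group my-rg --name my-plan \
    --sku P1V2 --location westus2
az webapp create --resource-group my-rg --plan my-plan --name my-app

# Create a VNet and a subnet to host the private endpoint;
# private-endpoint network policies must be disabled on the subnet
az network vnet create --resource-group my-rg --name my-vnet \
    --address-prefixes --subnet-name pe-subnet \
    --subnet-prefixes
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
    --name pe-subnet --disable-private-endpoint-network-policies true

# Create the private endpoint against the web app ("sites" sub-resource)
webapp_id=$(az webapp show --resource-group my-rg --name my-app --query id -o tsv)
az network private-endpoint create --resource-group my-rg --name my-app-pe \
    --vnet-name my-vnet --subnet pe-subnet \
    --private-connection-resource-id "$webapp_id" \
    --group-id sites --connection-name my-app-pe-conn
```

These are cloud-provisioning commands and need an authenticated subscription, so treat them as a deployment sketch rather than a runnable script.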

Once the Private Link endpoint is created, you will see a network interface created, with the virtual network mapped to a subnet.

Public access to the App Service will be disabled.
A private DNS zone will be created, mapping the App Service host name to a private IP that cannot be reached from the Internet.

Typically, testing such a topology can be done by creating a VM in the same virtual network under a different subnet and updating its etc/hosts file. For production scenarios, however, I would suggest deploying an Application Gateway and mapping the private endpoint to the gateway's backend to provide a production-ready solution.

Create an Application Gateway (v1, Standard tier) in the virtual network used earlier to create the Private Link endpoint.

Create a public IP for the gateway's frontend, then map the private endpoint's private IP through the backend pools.
Add a backend pool using Hostname as the target type.

Create an HTTPS routing rule to map the HTTPS listener to the backend targets, attaching the .pfx certificate if your website needs secure HTTPS access.

While creating the HTTP/HTTPS settings, please ensure you configure:

  1. A request time-out of 120 seconds for the backend instances.
  2. The host name override set to the App Service host name, to avoid a 400 Invalid Hostname error.
  3. A custom probe for HTTP/HTTPS, mapped to the setting.

An important point while creating the HTTPS probe is to set "PickHostNameFromBackendHTTPSettings" to Yes, so that the host name is correctly picked from the HTTPS settings and multiple host name overrides are avoided.
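For reference, the probe and HTTP settings above can also be configured from the Azure CLI. A sketch with hypothetical gateway and web app names; the `--host-name-from-http-settings` flag corresponds to "PickHostNameFromBackendHTTPSettings" in the portal:

```shell
# Create an HTTPS probe that picks the host name from the HTTP settings
az network application-gateway probe create --resource-group my-rg \
    --gateway-name my-appgw --name https-probe \
    --protocol Https --path / \
    --host-name-from-http-settings true \
    --interval 30 --timeout 30 --threshold 3

# Point the HTTPS settings at the probe, set the 120 s timeout and
# override the host name with the App Service host name
az network application-gateway http-settings update --resource-group my-rg \
    --gateway-name my-appgw --name appGatewayBackendHttpsSettings \
    --probe https-probe --timeout 120 \
    --host-name my-app.azurewebsites.net
```

As these commands run against a live gateway, they are shown as a deployment sketch only.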

Once the above steps are completed, verify the backend health of the Application Gateway and make sure the Status column shows Healthy.

Once the backend status shows Healthy, you should be able to access the website through the Application Gateway's public IP. So even though the website is publicly available through a public endpoint, the actual App Service hosting the website is secured behind a private IP within the Azure network perimeter, and the traffic flows through a virtual network.

Hope this helps in securing multi-tenant App Service deployments on Azure PaaS.

Reducing latency through Proximity Placement groups in Azure

As a follow-up to my earlier blog post on Accelerated Networking on Azure, I am looking at other options available in Azure to reduce latency for Azure VMs.

When you place your Azure VMs in a single region, the physical distance between the VMs is reduced. Placing them within a single availability zone is another step you can take to deploy your virtual machines closer to each other. However, as the Azure footprint grows, a single availability zone may span multiple physical data centres, resulting in network latency that can impact your overall application performance. And if a region does not support availability zones, or your application does not use them, the latency between the application tiers may increase.

Enabling Accelerated Networking reduces latency for Azure VMs to a certain extent, but a Proximity Placement Group (PPG) provides a logical grouping capability for Azure virtual machines to further decrease inter-VM network latency.

As per the Microsoft documentation, a PPG can be used in the following cases:

  • Low latency between stand-alone VMs.
  • Low Latency between VMs in a single availability set or a virtual machine scale set.
  • Low latency between stand-alone VMs, VMs in multiple Availability Sets, or multiple scale sets. You can have multiple compute resources in a single placement group to bring together a multi-tiered application.
  • Low latency between multiple application tiers using different hardware types. 

For the scope of this blog post, we will look at the second case (PPG with a single availability set). In the case of availability sets and virtual machine scale sets, you should set the proximity placement group at the resource level rather than on the individual virtual machines.
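Setting the PPG at the availability set level can be sketched with the Azure CLI (hypothetical names; `az ppg create` and the `--ppg` flag are the relevant pieces):

```shell
# Create the proximity placement group
az ppg create --resource-group my-rg --name my-ppg --location westus2

# Create the availability set inside the PPG, then place the VMs in it
az vm availability-set create --resource-group my-rg --name my-avset \
    --ppg my-ppg
az vm create --resource-group my-rg --name test-vm1 \
    --image Ubuntu2204 --size Standard_D3_v2 \
    --availability-set my-avset --accelerated-networking true
```

These commands provision cloud resources and are shown as a deployment sketch, not the exact script from the repository.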

Please see the GitHub repository here for more detailed deployment steps.

When proximity placement groups are enabled along with Accelerated Networking and availability sets, the latency improves from ~72.9 microseconds to ~68.8 microseconds, averaged over 10 test runs on a set of 2 D3v2 VMs within a single region.

Proximity placement groups move VMs closer together to reduce latency, but this compromises high availability if the VMs are not placed in availability zones. To address this issue, proximity placement groups can also be configured such that the VMs are placed in separate availability zones to provide HA.

Some best practices to follow while creating PPGs for new or existing virtual machines are listed here.

Accelerated Networking in Azure

Azure Accelerated Networking is a network throughput performance improvement feature provided by Microsoft for Azure Linux and Windows VMs. It enables a high-performance path that bypasses the host from the datapath, reducing latency and CPU utilisation, for use with the most demanding network workloads on supported VM types.


Without Accelerated Networking, all network traffic in and out of the VM must traverse the host and the virtual switch. For the best results, enable this feature on at least two VMs connected to the same Azure virtual network. When communicating across VNets or connecting on-premises, this feature has minimal impact on overall latency.

Some of the limitations and constraints are explained here, along with a list of supported VM instances here.

How do we test this feature for Azure Windows VMs?

For the scope of this post, the below parameters will be tested within the same virtual network using Standard D3 v2 (4 vCPUs, 14 GiB memory) Azure VMs.

  1. VM-VM network performance throughput test – using iperf
  2. Install Linux on both VMs following the detailed steps here
  3. Install qperf
  4. VM-VM latency test – using qperf
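The actual test commands look roughly like the following (a sketch assuming iperf3 and qperf are installed on both VMs; the server IP is hypothetical):

```shell
# On test-vm2 (server side)
iperf3 -s &    # throughput server
qperf &        # latency/bandwidth server

# On test-vm1 (client side)
iperf3 -c -t 30 -P 4          # throughput test, 4 parallel streams
qperf tcp_lat tcp_bw    # TCP latency and bandwidth
```

These commands need two reachable hosts on the same VNet, so they are shown for reference rather than as a locally runnable script.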

Before starting the above tests, I deployed the VMs using the Azure CLI script in my GitHub repository here, excluding the last 2 lines of code (so that we capture the results of the latency and network performance tests before enabling Accelerated Networking).

The next few steps involve downloading a few tools and deploying them on both Azure VMs. For the scope of the testing here, I have made the below assumptions for the latency and performance tests.

  • test-vm1 will act as a client
  • test-vm2 will be used for server communication

How do we disable Accelerated Networking on your NIC?

Run the below Azure CLI commands:
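A sketch of the commands (hypothetical VM/NIC names; the NIC can only be updated while the VM is deallocated):

```shell
# Deallocate the VM, toggle accelerated networking off, start it again
az vm deallocate --resource-group my-rg --name test-vm1
az network nic update --resource-group my-rg --name test-vm1-nic \
    --accelerated-networking false
az vm start --resource-group my-rg --name test-vm1
```

Setting the flag to `true` instead re-enables the feature for the second round of tests.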


VM-VM Network performance Testing:

[Figure: iperf throughput results – without vs. with Accelerated Networking]

From the result, we can see the D3 v2 VM egress throughput is ~2.82 Gbps with Accelerated Networking enabled, while it is ~2.01 Gbps without the feature enabled.

VM-VM latency Test:

[Figure: qperf latency results – without vs. with Accelerated Networking]


From the result, we can see the D3 v2 VM network latency is ~72.9 microseconds with Accelerated Networking enabled, while it is ~156.5 microseconds without the feature enabled.

The high-level charts below show that enabling Accelerated Networking for VMs improves the overall network throughput and bandwidth.

Data validated without Accelerated Networking


Data validated with Accelerated Networking 



Securing Sitecore Topology deployed on Azure web apps(PAAS) using Application Gateway(WAF), Log analytics and Azure Monitor

The primary focus of this blog post is to set up an Application Gateway (WAF enabled) in front of a Sitecore content delivery PaaS web app and test the Azure WAF functionality against a SQL injection attack, using Log Analytics and the Azure Monitor (log alerts) feature.

Update: WAF support for Sitecore is officially available from Sitecore 9.1, as mentioned in the KB article.

Sitecore environments deployed using the standard Sitecore ARM templates on Azure PaaS run on Azure App Service in the multi-tenant model (using the Basic, Standard and Premium service tiers). The standard Azure App Service web app offering is a multi-tenant environment configured for public access (with a publicly accessible endpoint).

Some organizations (e.g. government, financial) have requirements that all access to applications come through a Trusted Internet Connection (TIC), which means that the web applications should not be accessible directly from the Internet through their public endpoints, and traffic is only routed through either:

  • An on-premises network-integrated VPN or ExpressRoute connection
  • User traffic routed to a virtual network which controls the inbound/outbound communication using a Network Security Group.

App Service Environments (ASE) provide network isolation as well as private access through ExpressRoute integration, by deploying the Azure web apps inside a virtual network using only the Isolated web app tier.

Since ASE is a very expensive Azure service and comes with some additional complexities that may not fit all customer requirements, the other available options to secure a web app are:

  • Restrict access to the web app using IP restrictions – this approach works for restricting access to the Sitecore content management web app to content authors within the organization.
  • Front the web app with an Azure Application Gateway and restrict access to the web app so that only connections from the gateway are allowed – this approach can secure the content delivery web app by making sure user traffic only reaches the gateway's public endpoint, protected by an Azure WAF, without directly reaching the content delivery web app.


For the purpose of this blog post we will focus on the second option using Application gateway(WAF) to secure the content delivery web app in Azure cloud.

Steps to create/configure Application Gateway(WAF) for Sitecore environment:

  1. Make sure the App Service plan for the web app to be fronted with the WAF is scaled up to the Standard tier.
  2. Create an Application Gateway in the same resource group and location as the web app.


In the second step, the virtual network and public IP are configured during the creation process; make sure the virtual network is created in the same resource group where the web app exists.
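For reference, an equivalent Azure CLI sketch for creating the WAF-enabled gateway (hypothetical names; assumes the VNet and a dedicated appgw-subnet already exist):

```shell
# Public IP for the gateway frontend
az network public-ip create --resource-group my-rg --name my-appgw-pip

# WAF-tier application gateway fronting the CD web app
az network application-gateway create --resource-group my-rg --name my-appgw \
    --sku WAF_Medium --capacity 2 \
    --vnet-name my-vnet --subnet appgw-subnet \
    --public-ip-address my-appgw-pip \
    --servers mysite-cd.azurewebsites.net

# Run the WAF in Detection mode for the scope of this test
az network application-gateway waf-config set --resource-group my-rg \
    --gateway-name my-appgw --enabled true --firewall-mode Detection \
    --rule-set-version 3.0
```

These provisioning commands require an authenticated subscription and are shown as a sketch of the portal steps above.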


Point to note: please set the firewall mode to Detection for the scope of testing the Sitecore content delivery website.

3. Review the results and click on OK to create the gateway.

Frontend IP Configuration


Once the Application Gateway is created, you see that the frontend IP configuration has been created. The frontend IP configuration shows how the gateway is exposed: it can be public or private, or both. The same configuration can be used by more than one HTTP listener, using different ports.

Configure the Application Gateway

The backend pools feature of the Application Gateway is used to configure the Azure resources to which user traffic should be directed. The resources supported as of today are NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service.

For the context of this blog post, we will be using App Service. Once the Application gateway is created there will be an appGatewayBackendPool already created and we can use the same backend pool to configure the content delivery web app.


Once the backend pool is assigned as shown below, you can see that rule1 is created, mapping to the HTTP URL. Once the backend pool is configured, incoming traffic entering the Application Gateway is routed to the backend address added here.


Configuring SSL Termination/Offloading

The Application Gateway can be configured to terminate the Secure Sockets Layer (SSL) session at the gateway, taking the costly task of decrypting HTTPS traffic off your web servers. The gateway decrypts the request and sends it to the backend server, then re-encrypts the response before sending it back to the client.

To configure SSL offload with an Application Gateway, a certificate (.pfx format) is required. This certificate is loaded on the gateway and used to encrypt and decrypt the traffic sent via SSL. For the scope of this blog post, I will be using a self-signed certificate.

Use the below PowerShell script to generate a self-signed certificate for use with the HTTPS listener.

# Create a self-signed SSL server certificate and capture its thumbprint
$thumbprint = (New-SelfSignedCertificate `
    -Subject "CN=$env:COMPUTERNAME @ Sitecore, Inc." `
    -Type SSLServerAuthentication `
    -FriendlyName "$env:USERNAME Certificate").Thumbprint

# Export the certificate (with its private key) to a password-protected .pfx
$certificateFilePath = "D:\Powershell\XPARM\$thumbprint.pfx"
Export-PfxCertificate `
    -Cert cert:\LocalMachine\MY\$thumbprint `
    -FilePath "$certificateFilePath" `
    -Password (Read-Host -Prompt "Enter password that would protect the certificate" -AsSecureString)

HTTP/HTTPS Listener:

An HTTP listener combines a frontend IP configuration and a port; it also includes a protocol (HTTP or HTTPS) and, optionally, an SSL certificate. It watches for traffic matching its configuration and helps route that traffic to the backend pools.

An HTTP listener is what the Application Gateway is listening to, for example:

  • Public IP xx.xx.xx.xx on port 443, HTTPS with a given SSL certificate
  • Private IP x.x.x.x on port 80, HTTP


Using the self-signed certificate created earlier, please create a Basic HTTPS listener as shown below by pressing the OK button at the bottom. This process takes about 15-20 minutes before you can proceed with the next steps.


Create Rule for HTTPS Listener:

Once the listener is created, you need to create a rule to handle the traffic from it. Click Rules on the Application Gateway, and then click the Basic option. Type in a friendly name for the rule and choose the listener created in the previous step. Choose the appropriate backend pool and HTTP setting ("appGatewayBackendHttpSettings" for now) and click OK to save.



Health Probe:

Once the HTTPS rule is saved, we can see that the Health Probes menu option in the Application Gateway shows both health probes created automatically. A health probe is described by a protocol, a URL, an interval, a timeout, etc. Basically, we can customize how a backend is determined to be healthy or not.


Once the HTTP/HTTPS probes are updated, you should see the HTTP settings menu option with the "appGatewayBackendHttpsSettings" setting set correctly; otherwise you may have to manually upload the self-signed certificate (.cer file) and update "appGatewayBackendHttpsSettings".


Backend health


Once the configuration is completed, please ensure the WEBSITE_LOAD_CERTIFICATES app setting is added to the CD web app's application settings so that it accepts the self-signed certificate.
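The app setting can be added from the CLI as well (hypothetical web app name; "*" loads all certificates uploaded to the app):

```shell
az webapp config appsettings set --resource-group my-rg --name mysite-cd \
    --settings WEBSITE_LOAD_CERTIFICATES="*"
```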


Once the complete configuration is done, the gateway IP address found in the Application Gateway's Overview menu can be tested at https://xxx.xx.xx.xx. If we use a CA-issued SSL certificate, we won't see the invalid certificate issue, and we should be able to route traffic from the gateway IP to the content delivery web app.

The next step is to restrict the content delivery web app to disallow direct access through the web app's public endpoint. This can be achieved using web app IP restrictions, by allowing only the gateway IP address in the IP address block.
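The IP restriction can be scripted too (hypothetical names; replace the address with your gateway's actual public IP):

```shell
# Allow only the Application Gateway's public IP to reach the web app
az webapp config access-restriction add --resource-group my-rg \
    --name mysite-cd --rule-name AllowAppGateway --action Allow \
    --ip-address --priority 100
```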


To make this work on a local machine for testing, you need to add the gateway IP address to the hosts file on your local machine and map it to a custom domain, say "". The path is c:\Windows\System32\Drivers\etc\hosts.


SQL injection security violation monitoring using Log Analytics and Azure Monitor

  • Make sure an OMS Log Analytics workspace is created and the Azure Activity Analytics solution is added to the workspace.
  • Make sure the content delivery web app's diagnostic logs are exported to a storage account, and the storage account is connected to the Log Analytics workspace.
  • Azure diagnostics logs are collected under the Azure metrics solution.


To simulate a SQL injection attack, we will use a simple SQL snippet in a website request.


Now, if you go back to Azure Diagnostics and run a log search query, you should see that the SQL injection attack has been logged.

AzureDiagnostics
| where Message contains "injection" and action_s contains "detected"


Detecting and notifying customers on SQL Injection attack

Azure Monitor provides alerts on top of log search queries. These can be configured in the Azure Monitor UI by selecting the Log Analytics resource and the corresponding resource group, and choosing Log Search as the condition for the alert.


Once the alert has been created and an action group is configured in Azure to send emails/notifications for it, you will receive an email alert like the one below.


Hope this helps in configuring and alerting customers on any security violations.



Monitoring Sitecore Topology on Azure – Azure Metrics, App Insights, Azure Monitor, Service Health Alerts

The officially provided Sitecore topologies (9.0.2) contain the below Azure resources:

  • Azure web app
  • Azure SQL Database
  • Azure Search
  • Azure Redis cache
  • Azure Application Insights

Looking at an overall Sitecore topology, the below is an architectural representation of the monitoring story. App Insights collects the telemetry from all the web apps and measures web app availability. Azure Monitor helps in configuring alerts and metrics. Azure Service Health alerts give the flexibility to monitor the availability of specific Azure services, which helps notify customers during an outage in a data center, or when only a specific Azure service is down.

For the scope of this blog, I am not covering infrastructure monitoring using Log Analytics.

[Figure: monitoring architecture]

Sitecore environments deployed in a resource group can be monitored using App Insights, Azure Metrics and Azure Monitor, and we use Azure Dashboards to give a unified view of the performance metrics and statistics of all the Azure resources within a topology.

Below are some features that Sitecore developers have to enable to leverage the monitoring capabilities in Azure.

Application Insights

Enable the Application Map feature in App Insights by uncommenting the below code snippet in ApplicationInsights.config in the Sitecore web app instances:

    <!-- Uncomment the following line to enable dependency tracking -->
    <!-- <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector"/> -->
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
    <!-- cut a lot of information -->

2. Configure Azure availability web tests (URL ping tests) in the Azure portal to monitor the overall website availability.


3. Metrics for web apps, Azure SQL databases, Redis Cache and Azure Search can be configured in the Azure portal using each resource's Metrics menu option.

The Add metric option allows metric selection and the ability to add multiple charts for the same resource.


Once the required charts are added, the "Pin to current dashboard" option adds the chart to the default Azure dashboard.


Once you start configuring the required metrics for the Azure resources, you should see a complete dashboard as shown below. For clarity and simplicity, I have uploaded the dashboard JSON for the below Sitecore environment to GitHub at Sitecore-Azure-Monitoring. A few steps to make it work for you:

  • Make sure the Sitecore topology is deployed in a resource group in your Azure subscription using the Marketplace Sitecore Experience Cloud offering.
  • Download the Sitecore-XPTopology902-Monitoring.json file.
  • Replace the subscription ID "xxxxxxxxx-xxxx-xxx-xxxx-xxxxxxxxxxxx" with the relevant subscription ID that you will be using for creating the dashboard.
  • Replace the text "resourceGroups/ga-sea-exsmall-xp-rg" with "resourceGroups/your-resourcegroup-name" in the JSON file.
  • Replace the other relevant Azure resource names as required and upload the updated Sitecore-XPTopology902-Monitoring.json file in the Azure portal.


Azure Monitor

Azure Monitor provides alerting and notifications on metrics and logs, as described in monitoring data sources. It gives cumulative monitoring on top of the below services:

  • Metric values supported by Azure Metrics
  • Log search queries supported by Log analytics
  • Activity Log events supported by Log analytics
  • Azure Service Health alerts
  • Website availability tests supported by App Insights

For each Azure resource, while adding the metrics for monitoring, a new alert rule can be created by specifying a condition.

Action Groups

An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered.


Once the alert is configured along with the alert condition and notification requirements, you will see the metric alert rule created as below.


Service Health Events

Service Health tracks three types of health events that may impact your resources:

  1. Service issues – Problems in the Azure services that affect you right now.
  2. Planned maintenance – Upcoming maintenance that can affect the availability of your services in the future.
  3. Health advisories – Changes in Azure services that require your attention. Examples include when Azure features are deprecated or if you exceed a usage quota.

By adding an action group at the resource group level, the action group can be used for any notifications related to that resource group.


Once the action group is created, the service alert can be created for one or more specific Azure resources, e.g. App Service, web apps, SQL databases, as shown below.


Hope this helps!



Saving costs for Sitecore environments on Azure PAAS using Automation

Azure web apps and Azure SQL databases are the primary components in Sitecore environments deployed on Azure PaaS. For example, for an XP topology, the Sitecore 9.0.2 ARM templates deploy around:

  • 9 web apps in 6 different App Service plans (except XP-Single)
  • 12 SQL databases in various service tiers


As for Azure consumption, an Azure month is typically counted as 732 hours. Azure web apps in the Shared/Basic/Standard/Premium/Isolated tiers are charged even when the web app in the App Service plan is stopped.

Say we have a scenario where developers don't use the Sitecore environment during weekends, i.e. approximately 4 weekends or 8 days per month. That is around 192 hours of unused capacity per web app per month.

If we can scale our web apps down to the Free tier during these non-business hours, that would save around 10 (web apps) * 192 = 1,920 compute hours per month for all the web apps in a specific environment. Using automation, we can scale the web apps back up to their normal tiers on Monday morning.
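The arithmetic above can be sketched as a quick calculation:

```shell
# 4 weekends/month is approx. 8 idle days; an idle day is 24 billable hours
idle_hours_per_app=$((8 * 24))   # hours one web app sits unused per month
web_apps=10
total_saved=$((idle_hours_per_app * web_apps))
echo "Idle hours per app/month: $idle_hours_per_app"    # 192
echo "Potential saved hours/month: $total_saved"        # 1920
```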

For Sitecore web apps with the below application settings, scaling down to the Free tier is not a straightforward choice:

  • Always On enabled for all Basic/Standard web apps
  • Platform = 64-bit
  • xConnect and Marketing Automation web apps use a client certificate, with a specific flag set to enable the client certificate setting

In order to scale the web apps down to the Free tier, we first have to set these settings to compatible values. Below is a sample script that can help achieve that. Obviously this script doesn't cater to all requirements and scenarios, so it can definitely be improved.

Script for Scaling Down- Web apps:
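The scale-down logic is roughly as follows; this is a sketch with a hypothetical resource group name, not the full script from the repository:

```shell
RG=my-sitecore-rg

# Turn off settings that are incompatible with the Free tier
for app in $(az webapp list --resource-group "$RG" --query "[].name" -o tsv); do
  az webapp config set --resource-group "$RG" --name "$app" \
      --always-on false --use-32bit-worker-process true
  az webapp update --resource-group "$RG" --name "$app" \
      --set clientCertEnabled=false
done

# Scale every App Service plan in the group down to Free
for plan in $(az appservice plan list --resource-group "$RG" --query "[].name" -o tsv); do
  az appservice plan update --resource-group "$RG" --name "$plan" --sku FREE
done
```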


Script for Scaling Up- Web apps:
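The reverse direction, scaling back up on Monday morning, is again a sketch; the target SKU should match each plan's original tier rather than the single S1 stand-in used here:

```shell
RG=my-sitecore-rg

# Scale plans back to their normal tier (S1 used as a stand-in)
for plan in $(az appservice plan list --resource-group "$RG" --query "[].name" -o tsv); do
  az appservice plan update --resource-group "$RG" --name "$plan" --sku S1
done

# Re-enable the Sitecore-required settings (in practice, only set
# clientCertEnabled=true on the xConnect/Marketing Automation apps)
for app in $(az webapp list --resource-group "$RG" --query "[].name" -o tsv); do
  az webapp config set --resource-group "$RG" --name "$app" \
      --always-on true --use-32bit-worker-process false
done
```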


VNet Integration With Azure Web apps


Azure web apps are, by design, not deployed in a virtual network. For scenarios where we need to set up a site-to-site VPN to an on-premises network using an Azure Virtual Network Gateway (VPN gateway), VNet Integration for Azure web apps is the way to go, providing better continuity for your workloads in a hybrid cloud setup with Azure.

Integrate Azure App Service with an Azure Virtual Network

Azure App Service is deployed in two forms:

  • The multi-tenant web apps, deployed in a shared environment in Azure, with the Basic/Standard/Premium pricing plans
  • The App Service Environment (ASE) premium feature, which deploys into your VNet.

In this blog we are going to look at VNet Integration with multi-tenant web apps and not App Service Environment.

VNet Integration gives your web app access to resources in your virtual network but does not grant private access to your web app from the virtual network. A common scenario where you would use VNet Integration is enabling access from your web app to a database or other Azure resources running in your Azure virtual network.

The VNet Integration feature:

  • requires a Standard, Premium, or Isolated pricing plan
  • works with Classic or Resource Manager VNets
  • supports TCP and UDP
  • works with Web, Mobile, API, and Function apps
  • enables an app to connect to only 1 VNet at a time
  • enables up to five VNets to be integrated within an App Service plan
  • allows the same VNet to be used by multiple apps in an App Service plan
  • supports a 99.9% SLA due to the SLA on the VNet gateway

Accessing on-premises resources

One of the benefits of the VNet Integration feature is that if your VNet is connected to your on-premises network with a site-to-site VPN, your apps can access your on-premises resources. For this to work, though, you may need to update your on-premises VPN gateway with the routes for your point-to-site IP range. When the site-to-site VPN is first set up, the process used to configure it should set up routes including your point-to-site range. If you add the point-to-site VPN after you create your site-to-site VPN, you need to update the routes manually.

Azure costs involved to setup VNet Integration

Below are the related charges to the use of this feature

  • App Service Plan pricing tier requirements
  • Data transfer costs
  • VPN Gateway costs

For your apps to be able to use this feature, they need to be in a Standard or Premium App Service plan. Due to how point-to-site VPNs are handled, you always have a charge for outbound data through your VNet Integration connection, even if the VNet is in the same data center.

The last item is the cost of the VNet gateways. If you do not need the gateways for something else such as Site-to-Site VPNs, then you are paying for gateways to support the VNet Integration feature.

Process to set up VNet Integration for Azure web apps

Create Virtual Network in Azure portal



Create Virtual network gateway

  • Map the virtual network to the Gateway
  • Create Public IP Address for gateway


Once the virtual network gateway is created, you can see that the gateway subnet has been added to the virtual network automatically.


The next step is to configure point-to-site connectivity on the VPN gateway, where you can select the tunnel type. The two tunnel options are SSTP and IKEv2. The strongSwan client on Android and Linux and the native IKEv2 VPN client on iOS and macOS will use only the IKEv2 tunnel to connect. Windows clients try IKEv2 first and, if that doesn't connect, fall back to SSTP. You can choose to enable one of them or both.
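The point-to-site configuration can also be sketched via the CLI (hypothetical names; `--client-protocol` accepts IkeV2 and/or SSTP):

```shell
# Gateway with a point-to-site client address pool and both tunnel types
az network vnet-gateway create --resource-group my-rg --name my-vpngw \
    --vnet my-vnet --public-ip-address my-gw-pip \
    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 \
    --address-prefixes \
    --client-protocol IkeV2 SSTP
```

Gateway creation runs against a live subscription and can take 30+ minutes, so this is shown as a deployment sketch only.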


Setup VNet Integration


Click the setup link on the VNet Integration screen; this opens a screen to select a virtual network enabled with a point-to-site configuration.


Once the virtual network is selected, the VNet Integration setup starts and the web app's integration with the virtual network is initiated.


Once the VNet Integration is completed in the Azure portal, you will see the "Connected" status in the Networking tab for the selected web app.
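For completeness: newer Azure CLI versions expose VNet Integration directly, though note that `az webapp vnet-integration add` configures the regional variant rather than the gateway-based flow walked through above (hypothetical names):

```shell
az webapp vnet-integration add --resource-group my-rg --name my-app \
    --vnet my-vnet --subnet integration-subnet
```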


Subnetting from CIDR Notation

This blog post will help you understand subnetting using CIDR notation, assuming you already have an idea of IP addresses, network IDs and broadcast IDs in networking. There are many approaches to understanding this, but hopefully this one helps you.

I have an IP address with a CIDR notation of /20. Using this, I need to figure out what the network ID and the broadcast ID are, and in doing so understand what the /20 notation means.

Example 1:

IP Address:

The CIDR notation indicates how many bits are turned on in my subnet mask. The table format below helps explain the example.


I am making a simple chart to get a better understanding of this. When we talk about subnet masking, we generally see values like and something along those lines. Let's go ahead and work in 8-bit groups, keeping the 1s turned ON for the first 20 bits and 0s for the remaining 12 bits.


This means that turning all eight bits of an octet ON and adding them together gives 128+64+32+16+8+4+2+1 = 255.

So the subnet mask for this particular IP range will be:

255.255.(128+64+32+16).0 =

IP Address:  Subnet:

Let's go further and determine the network ID and broadcast ID. Here, the 3rd octet is our focus. Since the 1st and 2nd octets are fully turned ON, we already know the first two values of the possible network ID and broadcast ID for the IP range. For the 4th octet, since all its mask bits are turned off, it will be 0 in the network ID and 255 in the broadcast ID.


So we just need to find the 3rd octet values. To do that, we translate the value 60 into binary and map it onto our existing binary notation. Converting 60 into binary gives 00111100.


Using the above values, we build a logic table (a bitwise AND) on the binary numbers, which results in the below:

11111111.11111111.00110000.00000000 –> the 3rd octet of the network ID becomes 00110000 = 48


Now we need to figure out the next possible network ID in the list; whatever it is, the number right before it is the broadcast ID for the IP range. This can be determined by the last bit that is turned on in the 3rd octet of the mask.


So, the 3rd octet in my broadcast ID will be 48 + 16 - 1 = 63, as shown below. When deciding on the usable IP range, we can't use the all-zeros network address or the all-ones broadcast address, so:

Network ID:
Broadcast ID:
Usable IPs: –

Example 2: IP Address: 


So the subnet mask for this particular IP range will be 255.255.(128+64+32+16+8+4).0 =

IP Address:  Subnet:

To find the network ID:



Using the above values, we build a logic table (a bitwise AND) on the binary numbers, which results in the below:

11111111.11111111.01010100.00000000 –> the 3rd octet of the network ID becomes 01010100 = 84

Again, we need to figure out the next possible network ID; the number right before it is the broadcast ID for the IP range, determined by the last bit that is turned on in the 3rd octet of the mask.


So, the 3rd octet of the Broadcast ID would be 84 + 4 − 1 = 87. When deciding on the usable IP range, we can't start with 0 or end with 255, so the IP range would be –
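This second example can be verified the same way. The original IP address is not shown above, so `192.168.85.0` is a stand-in: its 3rd octet (01010101) ANDs with the /22 mask's 3rd octet (11111100) to give 84, matching the worked result:

```python
import ipaddress

# Hypothetical host address consistent with the worked example;
# the post omits the actual IP, only the /22 mask and the result (84).
net = ipaddress.ip_network('192.168.85.0/22', strict=False)

print(net.network_address)    # 192.168.84.0
print(net.broadcast_address)  # 192.168.87.255
print(net.netmask)            # 255.255.252.0
```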


Hope this helps.


AWS VPC vs Azure VNet


Amazon has been a forerunner in the cloud computing arena and pioneered many industry-revolutionizing services like EC2, VPC, etc. AWS's initial offering, the EC2-Classic platform, allowed customers to run EC2 instances on a flat global network shared by all customers; this, along with other attributes including shared tenancy, restrictions on Security Groups, and the lack of Network Access Control Lists, concerned security-minded customers. AWS then introduced EC2-VPC, an advanced platform that provisions a logically isolated section of the AWS Cloud. EC2-VPC supports shared/dedicated tenancy, improved Network Security Groups, Network Access Control Lists, and more. Enterprise and SMB customers gained confidence with the VPC architecture and started adopting AWS more readily than before.

In 2013, Azure turned its focus from being just a PaaS provider into a full-fledged IaaS provider to counter AWS's competitive edge and avoid market loss. To compete with the early starter AWS, Azure introduced many new services, most importantly Virtual Networks, "a logically isolated network" within its datacenters and Azure's answer to VPC. Azure's Virtual Network resembles VPC in many aspects and in fact behaves similarly in many cases, but there are a few differences as well.

In this blog, we'll see those differences in detail, and of course the similarities as well. It's all about networking, so let's begin with subnets.


  • Azure VNet and AWS VPC segregate the networks with subnets.
  • An AWS VPC spans all the Availability Zones (AZs) in that region, hence, subnets in AWS VPC are mapped to Availability Zones (AZs). After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones.
  • Azure VNet subnets are defined by the IP Address block assigned to it.
  • Communications between all subnets in an AWS VPC go over the AWS backbone and are allowed by default. AWS VPC subnets can be either private or public. A subnet is public if its route table points to an internet gateway (IGW); AWS allows only one IGW per VPC, and public subnets allow the resources deployed in them to access the internet.
  • AWS creates a default VPC and subnets for each region. This default VPC has a subnet in each Availability Zone of its region, and any EC2 instance deployed into it is assigned a public IP address and hence has internet connectivity.
  • Azure VNet does not provide a default VNet and does not have private or public subnets as in AWS VPC. Resources connected to a VNet have outbound access to the Internet by default.

IP Addresses

  • Both AWS VPC and Azure VNet use non-globally routable CIDR blocks from the private IPv4 address ranges specified in RFC 1918, but customers can also use other public IP addresses.
  • Azure VNet assigns resources connected and deployed to the VNet a private IP address from the CIDR block specified. In Azure VNet, the smallest subnet supported is /29 and the largest is a /8.
  • AWS also allows IP addresses from the same RFC 1918 or publicly routable IP blocks. Currently, AWS does not support direct access to the internet from publicly routable IP blocks, hence they are not reachable from the internet even through the Internet gateway (IGW). They are only reachable via the Virtual Private Gateway.
  • For the subnet, AWS allows a minimum address block of /28 and a maximum of /16.
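A quick sketch with Python's `ipaddress` module shows what these minimum sizes mean in practice. Both clouds reserve five addresses in every subnet (the network address, platform-internal addresses, and the broadcast address), which is why the smallest subnets yield so few usable hosts:

```python
import ipaddress

# Smallest supported subnets: /29 in Azure VNet, /28 in AWS VPC.
azure_min = ipaddress.ip_network('10.0.0.0/29')
aws_min = ipaddress.ip_network('10.0.0.0/28')

# Both clouds reserve 5 addresses per subnet.
RESERVED = 5
print(azure_min.num_addresses - RESERVED)  # 3 usable hosts in an Azure /29
print(aws_min.num_addresses - RESERVED)    # 11 usable hosts in an AWS /28
```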

Routing Table

  • AWS uses the route table to specify the allowed routes for outbound traffic from the subnet.
  • All subnets created in a VPC are automatically associated with the main route table; hence, all subnets in a VPC allow traffic from other subnets unless it is explicitly denied by security rules.
  • In Azure VNet, all resources in the VNet allow the flow of traffic by using system routes. Azure VNet uses the system route table to ensure that resources connected to any subnet in any VNet can communicate with each other by default. However, there are scenarios where you might want to override the default routes. For such scenarios, you can implement user-defined routes (UDR), which control where traffic is routed for each subnet, and/or BGP routes (connecting your VNet to your on-premises network using an Azure VPN Gateway or ExpressRoute connection).
  • A UDR applies only to traffic leaving the subnet and can provide a layer of security for an Azure VNet deployment if its goal is to send traffic through some kind of inspection NVA or the like. With UDR, packets sent from one subnet to another can be forced through a network virtual appliance on a set of routes.


Security is the primary reason a virtual network is preferred over public-facing endpoints. AWS provides various virtual security services to provide maximum security at the instance level, the subnet level, and the overall network level.

Security Group

  • AWS "Security Groups" help protect instances through configurable inbound and outbound rules. Users can configure which ports to open and which sources to accept traffic from, and similarly configure outbound traffic from EC2 instances. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level.
  • Azure's equivalent is the "Network Security Group" (NSG), which can be associated with subnets, individual VMs (classic), or individual network interfaces (NICs) attached to VMs (Resource Manager). When an NSG is associated with a subnet, its rules apply to all resources connected to that subnet. Traffic can be further restricted by also associating an NSG with a VM or NIC.
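NSG rules are evaluated in priority order (lower number first) and processing stops at the first matching rule. A minimal Python model of that evaluation, with made-up rule names and ports, might look like:

```python
# Toy model of NSG rule evaluation: rules are sorted by priority
# (lower number wins) and the first matching rule decides the verdict.
# Rule names, ports, and priorities here are illustrative only.
rules = [
    {'name': 'deny-all-inbound', 'priority': 65500, 'port': '*', 'action': 'Deny'},
    {'name': 'allow-https', 'priority': 100, 'port': 443, 'action': 'Allow'},
    {'name': 'deny-ssh', 'priority': 200, 'port': 22, 'action': 'Deny'},
]

def evaluate(port):
    # Walk the rules from lowest to highest priority number;
    # the first rule whose port matches (or is the '*' wildcard) wins.
    for rule in sorted(rules, key=lambda r: r['priority']):
        if rule['port'] == '*' or rule['port'] == port:
            return rule['action']
    return 'Deny'  # no rule matched

print(evaluate(443))  # Allow
print(evaluate(22))   # Deny
print(evaluate(80))   # Deny (falls through to the catch-all rule)
```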

Virtual Network Interfaces

A virtual network interface card (NIC) is a virtual appliance that can be attached to and detached from VMs. When you move a network interface from one instance to another, network traffic is redirected to the new instance.

In AWS each instance in your VPC has a default network interface (the primary network interface) that is assigned a private IPv4 address from the IPv4 address range of your VPC. You cannot detach a primary network interface from an instance. You can create and attach an additional network interface to any instance in your VPC.

A network interface enables an Azure Virtual Machine to communicate with Internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.

DNS Service

The Domain Name System, or DNS, is responsible for translating (or resolving) a website or service name to its IP address. An efficient DNS service is essential to avoid latency and unnecessary network hops. AWS Route 53 provides a highly available and redundant DNS service that connects user requests to AWS services such as EC2, ELB, or S3, and it can also be used to route users to infrastructure outside of AWS.

Azure DNS is a hosting service for DNS domains, providing name resolution using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as your other Azure services. Azure DNS now also supports private DNS domains.


Interconnectivity lets different networks connect to each other. Cloud providers offer three basic interconnectivity options:

1. Direct Internet Connectivity

AWS allows users to associate public IPs with EC2 instances, thereby giving those machines internet connectivity; similarly, VMs in a private subnet gain internet access by routing through NAT instances in the public subnet.

Azure lets users configure public endpoints, i.e., public IP addresses, for VMs inside a subnet so that the VMs can communicate with external systems.

2. VPN over IPsec

VPN over IPsec is an IP-based connection methodology for interconnecting two different networks, whether both are within the cloud, cloud to outside, or cloud to an on-premises network. Broadly, there are two types of VPN routing: 1. static routing and 2. dynamic routing.

Azure and AWS both support static and dynamic routing (with BGP). BGP is the standard routing protocol commonly used on the Internet to exchange routing and reachability information between two or more networks. In the context of Azure Virtual Networks, BGP enables Azure VPN Gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that inform both gateways about the availability and reachability of prefixes through the gateways or routers involved.

3. Private Connectivity using Exchange Provider

The private connectivity option is mainly focused on enterprise customers with bandwidth-heavy workloads. A private connection provided by an ISP can deliver much better performance than the Internet. AWS has partnered with major telecom providers and ISPs to offer private connectivity between its cloud and customers' on-premises infrastructure, while Azure runs a Microsoft backbone network between regions to support ExpressRoute. Azure supports most of its features through ExpressRoute, except certain features like

  • CDN
  • Visual Studio Team Services Load Testing
  • Multi-factor Authentication
  • Traffic Manager

Similarly, all AWS services, including Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Simple Storage Service (S3), and Amazon DynamoDB, can be used with AWS Direct Connect. As far as the SLA is concerned, AWS doesn't provide an SLA for this service, but Azure promises a 99.9% SLA, failing which the customer can claim service credits.


The intention of this article is to highlight certain intricate differences, not to be an in-depth comparison guide. AWS, being the pioneer in the IaaS space, has a lot of mature options and tooling to offer, but Azure has closed the gap at a rapid pace in the past few years. Azure, coming from a conventional software provider, focused mainly on enabling the Windows environment to suit and operate within an IaaS offering, hence its newly launched services and services in preview seem to be more Windows-focused, while Microsoft welcomes partners and vendors to build providers/adapters/connectors/APIs for open-source languages like Python or Ruby on Rails. Azure from its inception has focused on enterprise customers and a hybrid story; AWS, on the other hand, tasted its success with startups and SMB customers and is now trying to build an enterprise story to take AWS to the next level.
