Securing Multi-tenant Azure App services using Azure Private Link

Microsoft has recently released a Public Preview of Private Link for Azure App Service. This preview is available in limited regions for all PremiumV2 Windows and Linux web apps. Until now, securing App Services through virtual network isolation was only possible with an App Service Environment (ASE). ASEs are generally expensive and have long initial deployment cycles as a drawback.

Private Link exposes your app on an address in your VNet and removes it from public access. This not only secures the app but can also be combined with Network Security Groups to secure your network.

The feature is currently available in East US and West US 2. For the scope of this blog post I will be creating the Azure resources in the West US 2 region.

Create a P1V2 App Service Plan in the West US 2 region and create a Private Endpoint in the same region.
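For reference, the same setup can be scripted with the Azure CLI. This is a minimal sketch, not the exact deployment: the resource group, VNet, and subnet names are illustrative, and on older CLI versions the --group-id flag is spelled --group-ids.

```bash
# Illustrative names throughout: rg-privatelink-demo, vnet-demo, snet-endpoints.
az group create --name rg-privatelink-demo --location westus2

# PremiumV2 plan and web app (the Private Link preview requires PremiumV2).
az appservice plan create --name plan-p1v2 --resource-group rg-privatelink-demo \
  --location westus2 --sku P1V2
az webapp create --name use-xpsingle-93-test-single \
  --resource-group rg-privatelink-demo --plan plan-p1v2

# VNet with a subnet for the private endpoint; private endpoint network
# policies must be disabled on that subnet.
az network vnet create --name vnet-demo --resource-group rg-privatelink-demo \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name snet-endpoints --subnet-prefixes 10.0.1.0/24
az network vnet subnet update --name snet-endpoints --vnet-name vnet-demo \
  --resource-group rg-privatelink-demo \
  --disable-private-endpoint-network-policies true

# Private endpoint targeting the web app; "sites" is the group ID for App Service.
WEBAPP_ID=$(az webapp show --name use-xpsingle-93-test-single \
  --resource-group rg-privatelink-demo --query id --output tsv)
az network private-endpoint create --name pe-webapp \
  --resource-group rg-privatelink-demo \
  --vnet-name vnet-demo --subnet snet-endpoints \
  --private-connection-resource-id "$WEBAPP_ID" \
  --group-id sites --connection-name pe-webapp-conn
```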

Once the Private Link Endpoint is created, you will see a Network Interface created in the Virtual Network, mapped to a subnet.

Public access to the App Service (https://use-xpsingle-93-test-single.azurewebsites.net) will be disabled, and a Private DNS zone will be created mapping the App Service host name to a private IP that cannot be reached from the Internet.
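A sketch of the corresponding DNS steps with the CLI, continuing with the illustrative names from above; privatelink.azurewebsites.net is the zone name App Service private endpoints use:

```bash
# Private DNS zone for App Service private endpoints.
az network private-dns zone create --resource-group rg-privatelink-demo \
  --name privatelink.azurewebsites.net

# Link the zone to the VNet so resources in it resolve the private IP.
az network private-dns link vnet create --resource-group rg-privatelink-demo \
  --zone-name privatelink.azurewebsites.net --name dns-link-demo \
  --virtual-network vnet-demo --registration-enabled false

# Have the private endpoint register its IP in the zone automatically.
az network private-endpoint dns-zone-group create \
  --resource-group rg-privatelink-demo --endpoint-name pe-webapp \
  --name zg-webapp --private-dns-zone privatelink.azurewebsites.net \
  --zone-name webapp
```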

Typically such a topology can be tested by creating a VM in the same virtual network under a different subnet and updating its hosts file (/etc/hosts on Linux). But for production scenarios I would suggest deploying an Application Gateway and mapping the Private Endpoint to the App Gateway backend to provide a production-ready solution.
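For a quick test from such a VM, something along these lines should work; treat the customDnsConfigs query path as an assumption about the private endpoint resource shape:

```bash
# Look up the private IP assigned to the endpoint.
PE_IP=$(az network private-endpoint show --name pe-webapp \
  --resource-group rg-privatelink-demo \
  --query 'customDnsConfigs[0].ipAddresses[0]' --output tsv)

# Point the app's host name at the private IP and verify it responds.
echo "$PE_IP use-xpsingle-93-test-single.azurewebsites.net" | sudo tee -a /etc/hosts
curl -I https://use-xpsingle-93-test-single.azurewebsites.net
```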

Create an Application Gateway (V1) Standard tier in the Virtual Network used earlier to create the Private Endpoint.

Create a public IP for the Application Gateway frontend; traffic arriving on it will be routed to the Private Endpoint's private IP through the Backend Pool. Then add a Backend Pool with Hostname as the target type, pointing to use-xpsingle-93-test-single.azurewebsites.net.
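A hedged CLI sketch of the gateway creation, again with illustrative names. A V1 gateway needs a Basic dynamic public IP, and --servers seeds the default backend pool with the host name target:

```bash
# Dedicated subnet for the Application Gateway.
az network vnet subnet create --name snet-appgw --vnet-name vnet-demo \
  --resource-group rg-privatelink-demo --address-prefixes 10.0.2.0/24

# V1 gateways require a Basic, dynamically allocated public IP.
az network public-ip create --name pip-agw --resource-group rg-privatelink-demo \
  --sku Basic --allocation-method Dynamic

# V1 Standard tier gateway with the app's host name as the backend target.
az network application-gateway create --name agw-demo \
  --resource-group rg-privatelink-demo --location westus2 \
  --sku Standard_Medium --capacity 2 \
  --vnet-name vnet-demo --subnet snet-appgw \
  --public-ip-address pip-agw \
  --servers use-xpsingle-93-test-single.azurewebsites.net
```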

Create an HTTPS routing rule to map the HTTPS listener to the backend targets, and upload the .pfx certificate if your website needs secure HTTPS access.
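Sketched with the CLI, assuming the default names (appGatewayBackendPool, appGatewayBackendHttpSettings) that az network application-gateway create generates; the certificate path and password are placeholders:

```bash
# Upload the .pfx used by the HTTPS listener.
az network application-gateway ssl-cert create --gateway-name agw-demo \
  --resource-group rg-privatelink-demo --name cert-demo \
  --cert-file ./site-cert.pfx --cert-password '<pfx-password>'

# Frontend port and HTTPS listener.
az network application-gateway frontend-port create --gateway-name agw-demo \
  --resource-group rg-privatelink-demo --name port-443 --port 443
az network application-gateway http-listener create --gateway-name agw-demo \
  --resource-group rg-privatelink-demo --name listener-https \
  --frontend-port port-443 --ssl-cert cert-demo

# Routing rule tying the listener to the backend pool and HTTP settings.
az network application-gateway rule create --gateway-name agw-demo \
  --resource-group rg-privatelink-demo --name rule-https --rule-type Basic \
  --http-listener listener-https \
  --address-pool appGatewayBackendPool \
  --http-settings appGatewayBackendHttpSettings
```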

While creating the HTTP/HTTPS settings, please ensure you configure the following (all three are shown in the CLI sketch below):

  1. A request time-out of 120 seconds for the backend instances.
  2. The host name override set to the App Service host name, to avoid a 400 Invalid Hostname error.
  3. A custom probe for HTTP/HTTPS, mapped to the setting.

An important point while creating the HTTPS probe is to set “PickHostNameFromBackendHTTPSettings” to Yes, so that the host name is correctly picked up from the HTTPS settings and multiple host name overrides are avoided. For example:
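Again a sketch with the illustrative names from above; --host-name-from-http-settings is the CLI counterpart of PickHostNameFromBackendHTTPSettings:

```bash
# Custom probe that inherits the host name from the HTTP settings
# (PickHostNameFromBackendHTTPSettings = Yes).
az network application-gateway probe create --gateway-name agw-demo \
  --resource-group rg-privatelink-demo --name probe-https \
  --protocol Https --path / \
  --host-name-from-http-settings true

# Backend HTTPS settings: 120 s timeout, App Service host name override,
# and the custom probe mapped in.
az network application-gateway http-settings update --gateway-name agw-demo \
  --resource-group rg-privatelink-demo --name appGatewayBackendHttpSettings \
  --port 443 --protocol Https --timeout 120 \
  --host-name use-xpsingle-93-test-single.azurewebsites.net \
  --probe probe-https
```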

Once the above steps are completed, verify the backend health of the Application Gateway and make sure the Status column shows Healthy.
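The same check can be done from the CLI; the query path reflects the shape of the backend health output:

```bash
# Show the health reported for each backend server.
az network application-gateway show-backend-health --name agw-demo \
  --resource-group rg-privatelink-demo \
  --query 'backendAddressPools[].backendHttpSettingsCollection[].servers[].health'
```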

Once the backend status shows healthy, you should be able to access the website through the App Gateway's public IP. So even though the website is publicly available through a public endpoint, the actual App Service hosting it is secured behind a private IP within the Azure network perimeter, and all traffic to it flows through the Virtual Network.

Hope this helps in securing multi-tenant App Service deployments on Azure PaaS.

Reducing latency through Proximity Placement groups in Azure

As a follow-up to my earlier blog post on Accelerated Networking on Azure, I am looking at other options available in Azure to reduce latency for Azure VMs.

When you place your Azure VMs in a single region, the physical distance between the VMs is reduced. Placing them within a single availability zone brings them closer still. However, as the Azure footprint grows, a single availability zone may span multiple physical data centers, resulting in network latency that can impact your overall application performance. And if a region does not support availability zones, or your application does not use them, the latency between application tiers may increase as a result.

Enabling Accelerated Networking reduces latency for Azure VMs to a certain extent, but a Proximity Placement Group (PPG) provides a logical grouping capability for Azure Virtual Machines that further decreases inter-VM network latency.

As per the Microsoft documentation, a PPG can be used in the following cases:

  • Low latency between stand-alone VMs.
  • Low latency between VMs in a single availability set or a virtual machine scale set.
  • Low latency between stand-alone VMs, VMs in multiple availability sets, or multiple scale sets. You can have multiple compute resources in a single placement group to bring together a multi-tiered application.
  • Low latency between multiple application tiers using different hardware types.

For the scope of this blog post we will be looking at the second case (a PPG with a single availability set). For availability sets and virtual machine scale sets, you should set the proximity placement group at the resource level rather than on the individual virtual machines.

Please see the GitHub repository here for more detailed deployment steps.
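In outline, the deployment looks roughly like this with the CLI; all names are illustrative, and UbuntuLTS is the image alias current at the time of writing (newer CLI versions may require a different alias):

```bash
az group create --name rg-ppg-demo --location westus2

# The proximity placement group itself.
az ppg create --name ppg-demo --resource-group rg-ppg-demo --location westus2

# Availability set pinned to the PPG: set at the resource level, not per VM.
az vm availability-set create --name avset-demo --resource-group rg-ppg-demo \
  --ppg ppg-demo

# Two D3v2 VMs in the set, with accelerated networking enabled.
for i in 1 2; do
  az vm create --name "vm-demo-$i" --resource-group rg-ppg-demo \
    --image UbuntuLTS --size Standard_D3_v2 \
    --availability-set avset-demo \
    --accelerated-networking true \
    --admin-username azureuser --generate-ssh-keys
done
```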

When proximity placement groups are enabled along with accelerated networking and availability sets, the latency improves from ~72.9 ms to ~68.8 ms, averaged over 10 test runs on a set of two D3v2 VMs within a single region.
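For reproducing such measurements, Microsoft's general guidance is latte.exe on Windows and sockperf on Linux; a sockperf run between the two VMs looks roughly like this (port 12345 is arbitrary):

```bash
# On the receiver VM: listen for the test traffic.
sockperf server --tcp -p 12345

# On the sender VM: 10-second TCP ping-pong test; reports average latency.
sockperf ping-pong --tcp -i <receiver-private-ip> -p 12345 -t 10
```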

Proximity placement groups move VMs closer together to reduce latency, but this compromises high availability if they are used without availability zones. To address this, proximity placement groups can be combined with availability zones, with a separate PPG pinned to each zone, so that VMs within a zone stay close together while the zones provide HA.
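A sketch of that zonal pattern with illustrative names; since a single PPG cannot span zones, each zone gets its own:

```bash
# One PPG per zone: VMs inside each PPG stay close together,
# while the two zones provide HA across them.
for zone in 1 2; do
  az ppg create --name "ppg-zone-$zone" --resource-group rg-ppg-demo \
    --location westus2

  az vm create --name "vm-zone-$zone" --resource-group rg-ppg-demo \
    --image UbuntuLTS --size Standard_D3_v2 \
    --zone "$zone" --ppg "ppg-zone-$zone" \
    --admin-username azureuser --generate-ssh-keys
done
```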

Some best practices to follow while creating PPGs for new or existing virtual machines can be found here.
