Thursday, May 27, 2021

Azure Series - Steps to expose an on-premise .NET Core API through Azure API Management

Let's walk through an example of how to expose an on-premise .NET Core API through Azure API Management.

Scenario:
You have a .NET Core API running on-premise, and you want to make it securely accessible through Azure API Management. Azure API Management acts as a gateway, providing features like authentication, authorization, rate limiting, and caching for your API.

Step 1: Set Up On-Premise .NET Core API
Ensure that your .NET Core API is running and accessible on your on-premise server. Make sure it is properly secured and can handle incoming requests. Also confirm that the API endpoint will be reachable from Azure (for example, over a Site-to-Site VPN or ExpressRoute connection, or via a publicly accessible endpoint), since API Management must be able to call the backend.
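
For reference, a minimal controller such as the hypothetical OrdersController below is enough to verify the end-to-end flow; the route and response payload are illustrative only.

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    // GET api/orders/5
    [HttpGet("{id}")]
    public IActionResult GetById(int id)
    {
        // Replace with your real data access; the payload here is illustrative.
        return Ok(new { Id = id, Status = "Processed" });
    }
}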

Step 2: Create an Azure API Management Service
Go to the Azure portal and create a new API Management service. Choose a name, subscription, resource group, location, and pricing tier that best suits your needs.

Step 3: Import API into Azure API Management
In the Azure API Management service, navigate to the "APIs" section and click on "Add a new API." Here, you'll have two options: "Blank API" or "API from OpenAPI file." Choose the appropriate method to import your API.

  1. Option 1: If you have an OpenAPI (Swagger) definition for your on-premise API, you can import it directly by selecting the "API from OpenAPI file" option and supplying the file or a URL where it is hosted.

  2. Option 2: If you don't have an OpenAPI file, you can choose the "Blank API" option and manually define the endpoints, operations, and other details of your API.

Step 4: Configure Backend to Point to On-Premise .NET Core API
In the API Management service, navigate to your imported API and click on "Settings." Under the "Backend" section, configure the "Web service URL" to point to the endpoint of your on-premise .NET Core API.

Step 5: Secure the API with Policies (Optional)
If your on-premise API requires authentication or additional security measures, you can apply Azure API Management policies to enforce them. For example, you can add JWT validation or client certificate authentication policies.

Step 6: Publish the API
Once you have configured the API Management service with your on-premise API details, click "Save". To make the API available to consumers, add it to a product and publish that product; consumers then call the API through the API Management gateway URL.

Step 7: Test the API
Now that your on-premise .NET Core API is exposed through Azure API Management, you can test it using the provided developer portal or tools like Postman. Verify that your API is accessible and that any security measures you implemented are working as expected.
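
A quick smoke test can also be written in a few lines of C#. This is a minimal sketch only; the gateway name, API URL suffix, and subscription key are placeholders for your own values, and the Ocp-Apim-Subscription-Key header is only needed when the API requires a subscription.

using System.Net.Http;

using HttpClient client = new HttpClient();

// API Management expects the subscription key in this header when a subscription is required.
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-subscription-key>");

HttpResponseMessage response = await client.GetAsync(
    "https://<your-apim-name>.azure-api.net/<api-url-suffix>/orders/5");

Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");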

Conclusion:
By following these steps, you can securely expose your on-premise .NET Core API through Azure API Management. This allows you to leverage the powerful features of API Management while ensuring secure and controlled access to your on-premise API from external clients or applications.

Wednesday, May 12, 2021

Azure Series - Virtual Network - Communications between Azure Resources and On-Premise Resources

As organizations increasingly adopt cloud computing, the need for seamless communication between on-premise resources and cloud-based assets becomes paramount. Azure Virtual Networks serve as the linchpin, connecting these disparate environments and enabling secure data transfer. In this article, we will explore how virtual network communication is achieved between Azure resources and on-premise resources, along with the crucial concepts of filtering and routing network traffic.

Understanding Virtual Network Communications

Azure Virtual Networks act as isolated network environments within the Azure cloud, allowing organizations to deploy resources securely. To enable communication between Azure and on-premise resources, three approaches are commonly used:

1. Point-to-Site VPN: A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client's computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a VNet.

Of the three, the two options below are the most commonly used for connecting entire networks rather than individual clients.

2. Site-to-Site VPN: A Site-to-Site (S2S) VPN establishes a secure, encrypted connection between an on-premise network and an Azure Virtual Network. This allows both environments to act as if they were on the same local network, enabling seamless communication between on-premise resources and resources in Azure. Site-to-Site VPNs are particularly useful for organizations that need to extend their on-premise infrastructure to the cloud.

3. ExpressRoute: ExpressRoute provides a private, dedicated connection between an organization's on-premise network and Azure's network. Unlike Site-to-Site VPNs, ExpressRoute offers higher bandwidth, lower latency, and a more reliable connection. It is an excellent choice for enterprises with substantial data transfer requirements, mission-critical applications, and strict performance and security needs.

Filtering Network Traffic with Network Security Groups (NSGs)

Azure Virtual Networks employ Network Security Groups (NSGs) to control inbound and outbound network traffic. NSGs act as virtual firewalls, allowing organizations to define rules for network traffic flow based on source and destination IP addresses, ports, and protocols. The key features of NSGs include:

  1. Inbound Security Rules: Organizations can create inbound security rules to control the traffic coming into Azure resources. For example, a web server's inbound rule might allow HTTP (port 80) and HTTPS (port 443) traffic, while denying all other ports.

  2. Outbound Security Rules: Outbound rules regulate the traffic leaving Azure resources. Organizations can restrict certain outbound connections to ensure data security and compliance.

  3. Subnet and Network Interface Scope: NSGs can be applied at the subnet level or attached directly to individual network interfaces, providing granular control over network traffic.

Routing Network Traffic with User-Defined Routes

Azure Virtual Networks utilize User-Defined Routes (UDRs) to customize the path of network traffic within the virtual network. With UDRs, organizations can override Azure's default routing behavior and create specific routing tables. Key aspects of UDRs include:

  1. Custom Route Tables: UDRs allow administrators to create custom route tables and associate them with subnets. This enables organizations to direct traffic through specific network appliances or services, ensuring that it follows the desired path.

  2. Forced Tunneling: One common use case for UDRs is forced tunneling, where all traffic from the virtual network is directed back to an on-premise VPN or firewall device for additional security and monitoring.

Through Site-to-Site VPNs and ExpressRoute connections, organizations can establish secure links and seamlessly integrate their environments. By leveraging Network Security Groups, businesses can filter network traffic, enforce security policies, and protect sensitive data. Furthermore, User-Defined Routes provide the flexibility to control network traffic flow, offering enhanced control and efficiency.

Monday, May 03, 2021

Azure Series - Virtual Network - Exploring Key Scenarios with Virtual Networks and Creating an Azure Virtual Network with Subnets

Virtual networks (VNets) are essential components of cloud computing that enable secure and seamless communication between resources within the same cloud infrastructure. In this article, we will delve into the key scenarios where virtual networks play a vital role and provide a step-by-step guide on creating an Azure Virtual Network with subnets, using Microsoft Azure as our cloud platform of choice.

Key Scenarios with Virtual Networks

1. Isolated Environment: A virtual network is an isolated environment that organizations can divide into subnets to compartmentalize their resources. This isolation ensures that sensitive data and critical applications remain secure and are shielded from unauthorized access.

2. Resource Segmentation: By deploying multiple subnets within a virtual network, an organization can effectively segregate different types of resources based on their function and security requirements. For example, a three-tier application can have separate subnets for web servers, application servers, and database servers, each with its own set of security rules.

3. Site-to-Site Connectivity: Virtual networks enable secure connectivity between on-premises networks and cloud-based resources. Organizations can extend their on-premises network to Azure by establishing a site-to-site VPN tunnel, enabling seamless and secure data transfer between the two environments.

4. Multi-Region Deployment: For businesses with a global presence, virtual networks deployed in different Azure regions can be connected (for example, through global virtual network peering) to support a multi-region deployment strategy. This improves availability and disaster recovery, as resources can be replicated and distributed across regions.

5. Network Security Groups: Azure Virtual Networks come equipped with Network Security Groups (NSGs) that allow the implementation of fine-grained security rules for inbound and outbound traffic. NSGs help in controlling network traffic flow and protecting resources from unauthorized access.

Creating an Azure Virtual Network with Subnets

Now, let's walk through the steps to create an Azure Virtual Network with subnets using the Azure portal:

  1. Sign in to the Azure portal and navigate to "Create a resource."

  2. Type "Virtual Network" in the search bar and select "Virtual Network" from the search results.

  3. Click "Create" to start the creation process.

  4. Provide the required details, such as the name of the virtual network, the region where it will be deployed, the IP address space, and the subnet details.

  5. Configure the subnets within the virtual network by specifying the subnet name and its IP address range.

  6. Add additional subnets as needed to fulfill your resource segregation requirements.

  7. Configure the network security groups for each subnet to control traffic flow and enhance security.

  8. Review all the settings, and once satisfied, click "Create" to create the Azure Virtual Network along with the specified subnets.

Saturday, May 01, 2021

Azure Series - Cosmos DB: Managing Indexing Policies in Azure Cosmos DB

Azure Cosmos DB, a fully managed NoSQL database service, provides flexible indexing policies that allow developers to optimize query performance according to their specific application needs. In this article, we will explore various indexing options and their management in Azure Cosmos DB, including opt-in, opt-out, composite indexing, exclude all, and no indexing, accompanied by practical examples.

Understanding Indexing in Azure Cosmos DB

Indexes in Azure Cosmos DB are key components that facilitate efficient query execution by organizing and optimizing data retrieval. They enable faster access to data, especially when performing filtering, sorting, and aggregations. Cosmos DB offers two primary modes of indexing: automatic indexing and manual indexing.

  1. Automatic Indexing: This mode allows Cosmos DB to automatically index all properties within the containers. It simplifies the development process, as developers don't need to explicitly define indexes. However, it may lead to higher storage costs and slower write performance, as every property gets indexed.

  2. Manual Indexing: In this mode, developers have greater control over which properties get indexed. They can specify which properties should be indexed based on query patterns and data access requirements. Manual indexing reduces storage costs and provides better write performance compared to automatic indexing.
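
Both modes can be expressed when a container is created. The sketch below, which assumes an existing Database instance named database (the container names and partition key path are illustrative), contrasts the default automatic policy with a manually trimmed one:

// Automatic indexing: the default policy indexes every property.
ContainerProperties autoIndexed = new ContainerProperties("books-auto", "/id");
await database.CreateContainerIfNotExistsAsync(autoIndexed);

// Manual control: trim the indexed paths yourself when the container is created.
ContainerProperties manuallyIndexed = new ContainerProperties("books-manual", "/id")
{
    IndexingPolicy = new IndexingPolicy
    {
        IncludedPaths = { new IncludedPath { Path = "/title/*" } },
        ExcludedPaths = { new ExcludedPath { Path = "/*" } }
    }
};
await database.CreateContainerIfNotExistsAsync(manuallyIndexed);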

Managing Indexing Policies in Azure Cosmos DB

Let's explore different scenarios of managing indexing policies in Azure Cosmos DB with examples. The snippets below use the .NET SDK (Microsoft.Azure.Cosmos) and assume an existing Container instance named container, with its partition key path available in a partitionKeyPath variable:

1. Opt-In Indexing:

Opt-In indexing allows developers to explicitly specify which properties to index, enhancing query performance for specific queries. Consider a container with documents representing books, and we want to index the "title" and "author" properties for efficient search:

// Define the indexing policy with opt-in indexing
IndexingPolicy indexingPolicy = new IndexingPolicy
{
    IncludedPaths =
    {
        new IncludedPath { Path = "/title/*" },   // Opt-in index for the title property
        new IncludedPath { Path = "/author/*" },  // Opt-in index for the author property
    },
    ExcludedPaths =
    {
        new ExcludedPath { Path = "/*" } // Exclude all other properties from indexing
    }
};

// Apply the indexing policy to the container
await container.ReplaceContainerAsync(new ContainerProperties(container.Id, partitionKeyPath)
{
    IndexingPolicy = indexingPolicy
});
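
With this policy in place, queries that filter on the included paths are served from the index. Below is a minimal sketch of such a query against the same container; the author value and the title/author field names are illustrative:

// Query one of the opted-in properties
QueryDefinition query = new QueryDefinition(
    "SELECT c.title, c.author FROM c WHERE c.author = @author")
    .WithParameter("@author", "Jane Austen");

FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    foreach (dynamic book in await iterator.ReadNextAsync())
    {
        Console.WriteLine($"{book.title} by {book.author}");
    }
}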

2. Opt-Out Indexing:

Opt-Out indexing enables developers to exclude certain properties from indexing, reducing storage costs and write overhead. In this example, we exclude the "description" property from indexing:

// Define the indexing policy with opt-out indexing
IndexingPolicy indexingPolicy = new IndexingPolicy
{
    IncludedPaths =
    {
        new IncludedPath { Path = "/*" } // Keep indexing every other property
    },
    ExcludedPaths =
    {
        new ExcludedPath { Path = "/description/*" } // Opt the description property out of indexing
    }
};

// Apply the indexing policy to the container
await container.ReplaceContainerAsync(new ContainerProperties(container.Id, partitionKeyPath)
{
    IndexingPolicy = indexingPolicy
});

3. Composite Indexing:

Composite indexing allows developers to create composite indexes for queries involving multiple properties. For instance, if we frequently query books based on both "title" and "category," we can create a composite index for those properties:

// Define the indexing policy with composite indexing
// (CompositeIndexes holds one collection of CompositePath entries per composite index;
//  Collection<T> comes from System.Collections.ObjectModel)
IndexingPolicy indexingPolicy = new IndexingPolicy
{
    CompositeIndexes =
    {
        new Collection<CompositePath>
        {
            new CompositePath { Path = "/title", Order = CompositePathSortOrder.Ascending },
            new CompositePath { Path = "/category", Order = CompositePathSortOrder.Ascending }
        }
    }
};

// Apply the indexing policy to the container
await container.ReplaceContainerAsync(new ContainerProperties(container.Id, partitionKeyPath)
{
    IndexingPolicy = indexingPolicy
});
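
Composite indexes are what enable ORDER BY clauses that span multiple properties. A minimal sketch of a query that relies on the composite index above follows; the field names are illustrative:

// A multi-property ORDER BY requires a matching composite index
QueryDefinition orderedQuery = new QueryDefinition(
    "SELECT c.title, c.category FROM c ORDER BY c.title ASC, c.category ASC");

FeedIterator<dynamic> results = container.GetItemQueryIterator<dynamic>(orderedQuery);
while (results.HasMoreResults)
{
    foreach (dynamic book in await results.ReadNextAsync())
    {
        Console.WriteLine($"{book.title} ({book.category})");
    }
}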

4. Exclude All Indexing:

Exclude all indexing disables indexing for all properties within the container. This can be useful when you want to minimize storage overhead and do not require any query performance optimization:

// Define the indexing policy that excludes every property path from indexing
IndexingPolicy indexingPolicy = new IndexingPolicy
{
    ExcludedPaths =
    {
        new ExcludedPath { Path = "/*" } // Exclude all paths; nothing gets indexed
    }
};

// Apply the indexing policy to the container
await container.ReplaceContainerAsync(new ContainerProperties(container.Id, partitionKeyPath)
{
    IndexingPolicy = indexingPolicy
});

5. No Indexing:

No indexing, achieved by setting the indexing mode to None, allows developers to store and retrieve items without any indexing overhead. It is ideal for containers that are accessed only through point reads (by id and partition key) and do not require query support:

// Define the indexing policy with indexing turned off entirely
IndexingPolicy indexingPolicy = new IndexingPolicy
{
    IndexingMode = IndexingMode.None
};

// Apply the indexing policy to the container
await container.ReplaceContainerAsync(new ContainerProperties(container.Id, partitionKeyPath)
{
    IndexingPolicy = indexingPolicy
});

Azure Cosmos DB provides flexible indexing policies that empower developers to optimize query performance according to their application requirements. In this article, we explored various indexing options, including opt-in, opt-out, composite indexing, exclude all, and no indexing, along with practical code examples. Choosing the right indexing strategy is essential for achieving efficient and scalable data retrieval in Azure Cosmos DB. Consider your application's needs, query patterns, and storage constraints to select the most appropriate indexing policy for your Cosmos DB containers.