
Volunteering: Global Power Platform Bootcamp 2022

A Power Platform Festival

This is a free event, driven by user groups and communities around the world, for anyone who wants to learn more about Microsoft’s Power Platform. In this bootcamp, we will deep-dive into Microsoft’s Power Platform stack with hands-on sessions and labs, delivered to you by experts and community leaders.

Webinar: Boost Azure Analytics with Databricks and Deploying an AI model in Azure


Date and time: Sat, February 12, 2022, 8:30 AM – 10:00 AM CST


You won’t want to miss this demo-focused session with industry experts in Azure and AI. Let’s get together and learn how Azure analytics and AI model deployment come together in practice.

About this event

Let’s take a look at a demo to better understand Azure’s well-architected framework. We will walk through the five pillars of the Azure Well-Architected Framework and conclude with a live demo to demonstrate the functionality.

Session Details:

Deploying an AI Model in Azure ML by Rahat Yasir (Microsoft MVP)

In this session we will see a live demo of deploying, testing, and hosting an AI model trained in Azure Machine Learning or any other platform. We will also show options for configuring an ML cluster with ACI and AKS, and we will cover the data drift option in Azure ML for monitoring production inferencing. A rough sketch of what such a deployment looks like in code is included below.
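For readers who want a preview, here is a minimal sketch of deploying a registered model to Azure Container Instances with the Azure ML v1 Python SDK (azureml-core). This is an illustration only, not the speaker’s demo; the workspace config, model name, scoring script, and environment file are placeholders.

# Minimal sketch: deploy a registered Azure ML model to ACI (placeholders throughout)
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()                 # reads config.json for the workspace
model = Model(ws, name="my-model")           # a model already registered in Azure ML

env = Environment.from_conda_specification(name="inference-env", file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# ACI suits dev/test; for production-scale hosting, swap in an AKS deployment configuration
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "my-model-service", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)                   # REST endpoint for test requests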

Modern Analytics Architecture with Databricks by Deepak Kaushik (Microsoft MVP)

  • Azure Data Lake Gen 2
  • Auto Loader
  • Unify data, analytics, and AI workloads.
  • Run efficiently and reliably at any scale.
  • Provide insights through analytics dashboards, operational reports, or advanced analytics with Power BI

Zoom Details:

Topic: Boost Azure Analytics with Databricks and Deploying an AI model in Azure

Time: Feb 12, 2022 08:30 AM Saskatchewan

Join Zoom Meeting

https://us04web.zoom.us/j/76820729684pwd=76FECPD5Ip7MLszompdrj1uyKX_LPb.1

Meeting ID: 768 2072 9684

Passcode: azure

Speakers Details:



Rahat Yasir

Director of Data Science & AI/ML at ISAAC Instruments | Microsoft MVP – AI | Canada’s Top Developer 30 U 30, 2018



Deepak Kaushik

4X Microsoft Azure MVP | Azure Architect & Advisor | Trainer | International Speaker

Webinar: Modern analytics architecture with Azure Databricks

About this event:

Date and time

Sun, February 6, 2022

9:00 AM – 10:00 AM CST


Let’s take a look at a demo to better understand Azure’s well-architected framework. We will walk through the five pillars of the Azure Well-Architected Framework and conclude with a live demo to demonstrate the functionality.

Session Details:

  • Azure Data Lake Gen 2
  • Auto Loader
  • Unify data, analytics, and AI workloads.
  • Run efficiently and reliably at any scale.
  • Provide insights through analytics dashboards, operational reports, or advanced analytics with Power BI


Event Details:

Join Zoom Meeting

https://us04web.zoom.us/j/72532883599?pwd=RXyt-ltHvFt6UZXwUQxGe3O077Ydaj.1

Meeting ID: 725 3288 3599

Passcode: azure

Speaker:

Deepak Kaushik (Microsoft MVP), Co-founder of IoT Coast 2 Coast

Deepak is a Microsoft Azure MVP. He is Co-Founder of “Azure IoT Coast 2 Coast” focusing on Microsoft Azure & IoT technologies. He is passionate about technology and comes from a development background.

You can find his sessions and recordings on Channel 9 (https://channel9.msdn.com), C# Corner, his blog, and the Deepak Kaushik - Microsoft MVP YouTube channel.

Real Scenario: Manage your Azure expenses like magic

Though Azure Advisor does most of the magic and catches the big fish, someone still has to hunt for the small yet valuable ones. Azure Advisor cannot recommend this kind of optimization because it cannot distinguish between the resources and tiers used by development and QA teams.

Today, I’m going to talk about an underutilized resource called Azure Cache for Redis, which caught my eye recently. Its maximum utilization from January to April 2021 was 2%. The resource is presently on the Premium 6 GB plan, which costs 527.59 CAD per month, or 6,331.08 CAD annually.

If the team moved to Standard C1 (1 GB cache with replication) at 131.42 CAD per month, with a similar SLA (99.9%), they would save about 4,622.62 CAD (more than 70%) annually for just this one resource in South Central.

My suggestion to the client

Suggested tier change:

Changing the tier to Standard C1 (1 GB cache with replication) at 131.42 CAD per month keeps a similar SLA (99.9%) and saves about 4,622.62 CAD (more than 70%) annually for just this one resource in South Central, which made my client happy 🙂

Azure Migration: The Path to a Successful Migration

Deepak Kaushik (Microsoft Azure MVP)

When migrating to Azure, you should consider a few things that define a successful migration.

  1. Lower your TCO (Total Cost of Ownership) by more than 60%
  2. Reduce time to market (scope and release an MVP (Minimum Viable Product))
  3. Hybrid environment
  4. Security for your hybrid environment (SSO = single sign-on, Azure IAM solutions)
  5. Reuse or extend your on-premises licensing in Azure.
  6. Use the new features in each Azure service and optimize your data within the cloud.

Let’s go over some of the ways of migrating your data into Azure.

  1. Storage Services

Azure has numerous ways of storing your on-premises data, but the main question is: what kind of data is it? Is it archive data, transactional data, or BI data? What format is it in, files or databases? How does the data move around: transactional, incremental, and so on? As you can see, each set of data has a different nature that needs to be treated differently, and for that Microsoft offers a variety of Azure services, as listed below (a minimal upload sketch follows the list).

  • Azure Blob Storage, cold tier (archival data)
  • Azure Blob Storage, hot tier (streaming data)
  • Azure NetApp Files (file storage)
  • Azure SQL Database (transactional database)
  • Azure Cosmos DB (geo-distributed data)
  • Azure Database for PostgreSQL
  • Azure Database for MySQL
  • … and many, many more.
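As a small illustration of the first option, here is a minimal sketch (an assumption for this post, not taken from it) of landing an on-premises file in Azure Blob Storage with the azure-storage-blob Python SDK. The account URL, credential, container, and blob names are placeholders.

# Minimal sketch: upload a local file to Blob Storage and pick a tier that matches its nature
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential="<account-key-or-sas-token>")
blob = service.get_blob_client(container="archive", blob="2021/backup.csv")

with open("backup.csv", "rb") as data:
    # Cool (or Archive) suits archival data; Hot suits frequently accessed or streaming data
    blob.upload_blob(data, overwrite=True, standard_blob_tier=StandardBlobTier.Cool)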
  2. Transfer Services

One of the best-known services for migrating your on-premises data to the cloud is Azure Database Migration Service. It can migrate most well-known database engines to Azure, and it can migrate your database using either an offline or an online approach (minimal downtime).

There are also numerous ways to migrate to Azure using different services, such as the following (a small AzCopy sketch appears after the list):

  • Azure Migrate: Server Migration
  • Data Migration Assistant
  • Azure Database Migration Service
  • Web app migration assistant
  • Azure Data Box
  • Azure Data Factory
  • Azure CLI (Command Line).
  • Azure PowerShell.
  • AZCopy utility
  • Third-party migration tools certified by Microsoft.
  • On-prem tools, for example SSIS.
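To make one of these concrete, here is a minimal sketch (my assumption, not from the post) of driving the AzCopy utility from Python to copy a local folder into Blob Storage. It assumes azcopy is installed and on the PATH; the local path and SAS URL are placeholders.

# Minimal sketch: push an on-premises folder to Blob Storage with AzCopy
import subprocess

source = r"D:\exports\sales"  # local on-premises folder
destination = "https://<storage-account>.blob.core.windows.net/landing/sales?<SAS-token>"

# --recursive copies the folder contents; check=True raises an error if the transfer fails
subprocess.run(["azcopy", "copy", source, destination, "--recursive"], check=True)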
  3. Analytics and Big Data

In the data world, analytics means the analytics team works with the entire data estate, which leads them to run, process, and profile massive amounts of data. For that, Microsoft provides a variety of tools depending on the analytics team’s needs and on the type and volume of data. Some of the best-known analytics tools within Azure are listed below, and some of them have embedded ETL tools.

  1. Azure Synapse Analytics (formerly known as Azure SQL DW)
  2. Azure Data Explorer (also known as ADX)
  3. Azure HDInsight
  4. Power BI
  5. … and many, many more.
  4. Azure Migrate Documentation

You might be looking at migration from a different angle; it may mean migrating VMs, SQL configurations, and other on-premises services. Take a look at the Azure Migrate appliance in the Azure Migrate documentation.

Choose an Azure solution for data transfer.

Check out some of Microsoft’s data transfer solutions in the Azure documentation, where a few scenarios can help you understand the existing data migration approaches.

Conclusion

Migrating to Azure is simple but needs planning, consistency, and basic Azure knowledge. You may have a very successful migration, but you need to make sure the new features in Azure services are being used as needed. And finally, Microsoft always has a solution for you.

Azure Analytics Using Azure Delta Lake(Databricks): A Step-by-Step Guide

If data sources are installed a long distance from the stakeholders who control the operation, timely notifications are required. Agriculture, mining, power generation, oil and gas, and other industries can all benefit from this pattern.

Technologies used: Azure Databricks, Python, PySpark, Scala, Power BI

Data sources: sensors (heat, humidity, water leakage, electric meters) and drones sending data to IoT Hub.

Solution description: Data from IoT sources is loaded into ADLS Gen2. A checkpoint is created, and Auto Loader is configured to load and maintain the data. Finally, Databricks bronze, silver, and gold tables are created, which supply data to Power BI dashboards based on business rules.

Architecture:

Solution Details:

Step 1: Initial configuration, autoloader & load data

Data from various sources is periodically copied to IoT Hub. Once a file lands, Auto Loader ingests the data into the Azure Data Lake Storage Gen2 (ADLS Gen2) Databricks mount.

Step 2: Create Delta Lake Bronze table

All raw data in CSV format is ingested into the bronze table. Basically, it is a complete load of all data received.

Step 3: Data processing in Silver Table

This is one of the major steps: data processing is performed here, and the cleaned data is saved in the silver table.

Step 4: Data processing in Gold Table

In this step, business logic is applied as per the business requirements, and alert columns are created. This is applied to the current load.

Step 5: Dashboard (Power BI):

The gold table is the source for the Power BI dashboard. The dashboard helps users sort the existing alerts and get more details about them, and it also surfaces other analytics.

Based on the business mandate, email notifications are sent to stakeholders; a minimal notification sketch follows.
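The post does not describe the notification mechanism, so here is one possible approach as a minimal sketch: read the current alerts from the gold table and send an email over a plain SMTP relay. The alert column, gold table path, addresses, and SMTP server are hypothetical names chosen for illustration.

# Minimal sketch: count current alerts in the gold table and notify stakeholders by email
import smtplib
from email.message import EmailMessage

alert_count = (spark.read.format("delta").load("dbfs:/FileStore/Gold_StreamingLog")
               .filter("temperature_alert = 'ALERT'").count())

if alert_count > 0:
    msg = EmailMessage()
    msg["Subject"] = f"{alert_count} sensor alerts in the latest load"
    msg["From"] = "noreply@example.com"
    msg["To"] = "stakeholders@example.com"
    msg.set_content("Please review the Power BI dashboard for details.")
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("<smtp-user>", "<smtp-password>")
        server.send_message(msg)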

Code Details:

Initial Connection

# Connect Databricks to ADLS Gen2 with OAuth credentials stored in a secret scope
import json
import csv
import pyspark
from pyspark.sql import SparkSession

configs = {"fs.azure.account.auth.type": "OAuth",
           "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
           "fs.azure.account.oauth2.client.id": dbutils.secrets.get(scope="<scope-name>", key="<client-id-key>"),
           "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="<scope-name>", key="<client-secret-key>"),
           "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token"}

adlsPath = "abfss://<container>@<storage-account>.dfs.core.windows.net/"  # Source
mountPoint = "<Mount point Path>"  # Upload path (destination)

print(f"adls Path: {adlsPath}")
print(f"Client Secret : {configs}")  # Databricks redacts secret values in notebook output
display(dbutils.fs.ls(mountPoint))

Autoloader configuration

# Run the following code for the Auto Loader configuration
checkpoint_path = '/Mount point Path/_checkpointname/'

import json
import csv
from pyspark.sql.functions import input_file_name

cloudfile = {
  "cloudFiles.format": "csv",
  "cloudFiles.schemaEvolutionMode": "addNewColumns",
  "cloudFiles.inferColumnTypes": "true",
  "cloudFiles.includeExistingFiles": "true",
  "cloudFiles.allowOverwrites": "false",
  "cloudFiles.schemaLocation": checkpoint_path,
  "rescuedDataColumn": "_rescued_data",
  "cloudFiles.useNotifications": "false"
}

# Set up the stream to begin reading incoming files from the Mount point Path location.
df = (spark.readStream.format("cloudFiles")
      .options(**cloudfile)
      .load('/Mount point Path/StreamingLog')
      .withColumn("filePath", input_file_name()))  # adds a column with the source file name

Create a DataFrame using newly uploaded data

Here, the checkpoint keeps a record of all the files that have been uploaded to the mount point. Newly uploaded files that have no record in the checkpoint are loaded to DELTALAKE_BRONZE_PATH.

# Start the stream & write the data
DELTALAKE_BRONZE_PATH = "dbfs:/FileStore/Bronze_StreamingLog"
(df.writeStream
   .format("delta")
   .outputMode("append")
   .option("checkpointLocation", checkpoint_path)  # same checkpoint as the Auto Loader config
   .trigger(once=True)
   .start(DELTALAKE_BRONZE_PATH))

Create Delta Lake Bronze table

# Register the SQL table in the database
spark.sql(f"CREATE TABLE IF NOT EXISTS <Bronze_Tablename> USING delta LOCATION '{DELTALAKE_BRONZE_PATH}'") 
# Read the table
streaminglog_stats = spark.read.format("delta").load(DELTALAKE_BRONZE_PATH)
display(streaminglog_stats)

Silver & Gold Table: Once data is uploaded into the bronze table, all data cleaning and ETL can be performed on it and clean data can be saved into the silver table.

# Configure destination path
DELTALAKE_SILVER_PATH = "dbfs:/FileStore/Silver_StreamingLog"
# Write out the table
streaminglog_stats.write.format('delta').mode('overwrite').save(DELTALAKE_SILVER_PATH)
# Register the SQL table in the database
spark.sql(f"CREATE TABLE IF NOT EXISTS <Silver_Tablename> USING DELTA LOCATION '{DELTALAKE_SILVER_PATH}'")
# Read the table
streaminglog_stats = spark.read.format("delta").load(DELTALAKE_SILVER_PATH)
display(streaminglog_stats)

After the silver table is formed, all business rules are applied and the gold table is created. This is the source for all reporting and dashboard tools, Power BI in our case. A small sketch of this last step is shown below.
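The post does not include the gold-table code, so here is a minimal sketch of this step, reusing DELTALAKE_SILVER_PATH from the block above. The temperature column, alert threshold, gold path, and table name are placeholders chosen for illustration, not values from the original solution.

# Minimal sketch: apply a business rule to the silver table and register the gold table
from pyspark.sql import functions as F

DELTALAKE_GOLD_PATH = "dbfs:/FileStore/Gold_StreamingLog"

silver_df = spark.read.format("delta").load(DELTALAKE_SILVER_PATH)

# Example rule: flag readings whose (hypothetical) temperature column exceeds a threshold
gold_df = silver_df.withColumn(
    "temperature_alert",
    F.when(F.col("temperature") > 75, F.lit("ALERT")).otherwise(F.lit("OK")))

gold_df.write.format("delta").mode("overwrite").save(DELTALAKE_GOLD_PATH)

# Register the SQL table in the database, mirroring the bronze/silver pattern above
spark.sql(f"CREATE TABLE IF NOT EXISTS <Gold_Tablename> USING DELTA LOCATION '{DELTALAKE_GOLD_PATH}'")

Power BI can then read this gold table (for example via the Azure Databricks connector) and filter on the alert column for the dashboard.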

O’Reilly Book Review Opportunity: Application Delivery & Load Balancing in Microsoft Azure

It’s a privilege to review the technical book “Application Delivery and Load Balancing in Microsoft Azure: Practical Solutions with NGINX and Microsoft Azure.”

My role was to assist the authors in identifying and correcting any major technical or structural errors, as well as providing up-to-date technical information.

Experience Global Power Platform Bootcamp 2021 on February 19 – 20, 2021

So happy to contribute as a volunteer at ‘Global Power Platform Bootcamp 2021’ with my friends!

#GlobalPowerPlatformBootcamp #GPPB2021 #PowerPlatform

Thanks, everyone!